Test Report: KVM_Linux_crio 19711

f2dddbc2cec1d99a0bb3d71de73f46a47f499a62:2024-09-27:36389

Failed tests (30/317)

Order | Failed test | Duration (s)
33 | TestAddons/parallel/Registry | 74.23
34 | TestAddons/parallel/Ingress | 149.54
36 | TestAddons/parallel/MetricsServer | 321.94
163 | TestMultiControlPlane/serial/StopSecondaryNode | 141.4
164 | TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop | 5.56
165 | TestMultiControlPlane/serial/RestartSecondaryNode | 6.43
167 | TestMultiControlPlane/serial/RestartClusterKeepsNodes | 359.58
170 | TestMultiControlPlane/serial/StopCluster | 141.74
230 | TestMultiNode/serial/RestartKeepsNodes | 326.24
232 | TestMultiNode/serial/StopMultiNode | 144.81
239 | TestPreload | 275.97
247 | TestKubernetesUpgrade | 372.25
283 | TestStartStop/group/old-k8s-version/serial/FirstStart | 285.19
297 | TestStartStop/group/no-preload/serial/Stop | 139.2
302 | TestStartStop/group/embed-certs/serial/Stop | 139.04
305 | TestStartStop/group/default-k8s-diff-port/serial/Stop | 139.01
306 | TestStartStop/group/no-preload/serial/EnableAddonAfterStop | 12.38
307 | TestStartStop/group/old-k8s-version/serial/DeployApp | 0.47
308 | TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive | 91.44
310 | TestStartStop/group/embed-certs/serial/EnableAddonAfterStop | 12.38
314 | TestStartStop/group/old-k8s-version/serial/SecondStart | 716.05
315 | TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop | 12.38
317 | TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop | 544.34
318 | TestStartStop/group/no-preload/serial/UserAppExistsAfterStop | 544.38
319 | TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop | 544.43
320 | TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop | 543.51
321 | TestStartStop/group/embed-certs/serial/AddonExistsAfterStop | 488.74
322 | TestStartStop/group/no-preload/serial/AddonExistsAfterStop | 354.05
323 | TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop | 464.88
324 | TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop | 160.35
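
The failures below can usually be reproduced from a minikube checkout by re-running a single test against the same driver/runtime combination as this job. A minimal sketch, assuming the integration tests still live under test/integration and still accept a -minikube-start-args flag (both are assumptions; confirm against the checkout's test harness before relying on them):

  # re-run one failed test against kvm2 + cri-o, mirroring this report's configuration
  go test ./test/integration -v -timeout 60m \
    -run 'TestAddons/parallel/Registry' \
    -args -minikube-start-args='--driver=kvm2 --container-runtime=crio'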
TestAddons/parallel/Registry (74.23s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 2.978863ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-kdt5f" [652ee744-ff06-40fe-a66f-aabff5476e31] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003421573s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-2rlvs" [5080c804-a6a8-4239-bd3f-a89d8f114f0c] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.006578849s
addons_test.go:338: (dbg) Run:  kubectl --context addons-364775 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-364775 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-364775 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.081088712s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-364775 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run:  out/minikube-linux-amd64 -p addons-364775 ip
2024/09/27 00:27:15 [DEBUG] GET http://192.168.39.169:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-amd64 -p addons-364775 addons disable registry --alsologtostderr -v=1
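
The check that failed above is the in-cluster probe at addons_test.go:343: a busybox pod running wget --spider against the registry Service never got a response, and kubectl exited non-zero after about a minute with "timed out waiting for the condition". A hedged triage sketch using only names that appear in this log (the probe pod name reg-probe is invented for illustration):

  kubectl --context addons-364775 -n kube-system get pods -l actual-registry=true -o wide
  kubectl --context addons-364775 -n kube-system get pods -l registry-proxy=true -o wide
  kubectl --context addons-364775 -n kube-system get svc,endpoints registry
  # repeat the probe manually to see whether DNS or the Service itself is the problem
  kubectl --context addons-364775 run reg-probe --rm -it --restart=Never \
    --image=gcr.io/k8s-minikube/busybox -- wget --spider -S http://registry.kube-system.svc.cluster.local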
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-364775 -n addons-364775
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-364775 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-364775 logs -n 25: (1.460585367s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-603097 | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC |                     |
	|         | -p download-only-603097              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.34.0 | 27 Sep 24 00:15 UTC | 27 Sep 24 00:15 UTC |
	| delete  | -p download-only-603097              | download-only-603097 | jenkins | v1.34.0 | 27 Sep 24 00:15 UTC | 27 Sep 24 00:15 UTC |
	| start   | -o=json --download-only              | download-only-528649 | jenkins | v1.34.0 | 27 Sep 24 00:15 UTC |                     |
	|         | -p download-only-528649              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.34.0 | 27 Sep 24 00:15 UTC | 27 Sep 24 00:15 UTC |
	| delete  | -p download-only-528649              | download-only-528649 | jenkins | v1.34.0 | 27 Sep 24 00:15 UTC | 27 Sep 24 00:15 UTC |
	| delete  | -p download-only-603097              | download-only-603097 | jenkins | v1.34.0 | 27 Sep 24 00:15 UTC | 27 Sep 24 00:15 UTC |
	| delete  | -p download-only-528649              | download-only-528649 | jenkins | v1.34.0 | 27 Sep 24 00:15 UTC | 27 Sep 24 00:15 UTC |
	| start   | --download-only -p                   | binary-mirror-381196 | jenkins | v1.34.0 | 27 Sep 24 00:15 UTC |                     |
	|         | binary-mirror-381196                 |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	|         | --binary-mirror                      |                      |         |         |                     |                     |
	|         | http://127.0.0.1:32921               |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-381196              | binary-mirror-381196 | jenkins | v1.34.0 | 27 Sep 24 00:15 UTC | 27 Sep 24 00:15 UTC |
	| addons  | enable dashboard -p                  | addons-364775        | jenkins | v1.34.0 | 27 Sep 24 00:15 UTC |                     |
	|         | addons-364775                        |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-364775        | jenkins | v1.34.0 | 27 Sep 24 00:15 UTC |                     |
	|         | addons-364775                        |                      |         |         |                     |                     |
	| start   | -p addons-364775 --wait=true         | addons-364775        | jenkins | v1.34.0 | 27 Sep 24 00:15 UTC | 27 Sep 24 00:18 UTC |
	|         | --memory=4000 --alsologtostderr      |                      |         |         |                     |                     |
	|         | --addons=registry                    |                      |         |         |                     |                     |
	|         | --addons=metrics-server              |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	|         | --addons=ingress                     |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-364775        | jenkins | v1.34.0 | 27 Sep 24 00:26 UTC | 27 Sep 24 00:26 UTC |
	|         | -p addons-364775                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-364775 addons disable         | addons-364775        | jenkins | v1.34.0 | 27 Sep 24 00:26 UTC | 27 Sep 24 00:26 UTC |
	|         | headlamp --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-364775        | jenkins | v1.34.0 | 27 Sep 24 00:26 UTC | 27 Sep 24 00:26 UTC |
	|         | addons-364775                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin         | addons-364775        | jenkins | v1.34.0 | 27 Sep 24 00:26 UTC | 27 Sep 24 00:26 UTC |
	|         | -p addons-364775                     |                      |         |         |                     |                     |
	| addons  | addons-364775 addons disable         | addons-364775        | jenkins | v1.34.0 | 27 Sep 24 00:26 UTC | 27 Sep 24 00:26 UTC |
	|         | yakd --alsologtostderr -v=1          |                      |         |         |                     |                     |
	| addons  | addons-364775 addons                 | addons-364775        | jenkins | v1.34.0 | 27 Sep 24 00:26 UTC | 27 Sep 24 00:27 UTC |
	|         | disable csi-hostpath-driver          |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| ssh     | addons-364775 ssh curl -s            | addons-364775        | jenkins | v1.34.0 | 27 Sep 24 00:27 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:          |                      |         |         |                     |                     |
	|         | nginx.example.com'                   |                      |         |         |                     |                     |
	| addons  | addons-364775 addons                 | addons-364775        | jenkins | v1.34.0 | 27 Sep 24 00:27 UTC | 27 Sep 24 00:27 UTC |
	|         | disable volumesnapshots              |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| ip      | addons-364775 ip                     | addons-364775        | jenkins | v1.34.0 | 27 Sep 24 00:27 UTC | 27 Sep 24 00:27 UTC |
	| addons  | addons-364775 addons disable         | addons-364775        | jenkins | v1.34.0 | 27 Sep 24 00:27 UTC | 27 Sep 24 00:27 UTC |
	|         | registry --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 00:15:44
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 00:15:44.537636   22923 out.go:345] Setting OutFile to fd 1 ...
	I0927 00:15:44.537740   22923 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:15:44.537749   22923 out.go:358] Setting ErrFile to fd 2...
	I0927 00:15:44.537753   22923 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:15:44.537907   22923 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-14935/.minikube/bin
	I0927 00:15:44.538451   22923 out.go:352] Setting JSON to false
	I0927 00:15:44.539227   22923 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3490,"bootTime":1727392655,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 00:15:44.539333   22923 start.go:139] virtualization: kvm guest
	I0927 00:15:44.541421   22923 out.go:177] * [addons-364775] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0927 00:15:44.542612   22923 out.go:177]   - MINIKUBE_LOCATION=19711
	I0927 00:15:44.542608   22923 notify.go:220] Checking for updates...
	I0927 00:15:44.544937   22923 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 00:15:44.546076   22923 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 00:15:44.547130   22923 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-14935/.minikube
	I0927 00:15:44.548170   22923 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0927 00:15:44.549152   22923 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 00:15:44.550537   22923 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 00:15:44.580671   22923 out.go:177] * Using the kvm2 driver based on user configuration
	I0927 00:15:44.581804   22923 start.go:297] selected driver: kvm2
	I0927 00:15:44.581814   22923 start.go:901] validating driver "kvm2" against <nil>
	I0927 00:15:44.581825   22923 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 00:15:44.582527   22923 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 00:15:44.582595   22923 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19711-14935/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0927 00:15:44.596734   22923 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0927 00:15:44.596791   22923 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 00:15:44.597022   22923 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 00:15:44.597049   22923 cni.go:84] Creating CNI manager for ""
	I0927 00:15:44.597085   22923 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 00:15:44.597092   22923 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0927 00:15:44.597139   22923 start.go:340] cluster config:
	{Name:addons-364775 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-364775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 00:15:44.597233   22923 iso.go:125] acquiring lock: {Name:mkc202a14fbe20838e31e7efc444c4f65351f9ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 00:15:44.598769   22923 out.go:177] * Starting "addons-364775" primary control-plane node in "addons-364775" cluster
	I0927 00:15:44.599805   22923 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 00:15:44.599844   22923 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0927 00:15:44.599854   22923 cache.go:56] Caching tarball of preloaded images
	I0927 00:15:44.599915   22923 preload.go:172] Found /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0927 00:15:44.599926   22923 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0927 00:15:44.600208   22923 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/config.json ...
	I0927 00:15:44.600224   22923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/config.json: {Name:mk7d83f0775700fae5c444ee1119498cda71b7ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:15:44.600357   22923 start.go:360] acquireMachinesLock for addons-364775: {Name:mkef866a3f2eb57063758ef06140cca3a2e40b18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 00:15:44.600399   22923 start.go:364] duration metric: took 29.224µs to acquireMachinesLock for "addons-364775"
	I0927 00:15:44.600416   22923 start.go:93] Provisioning new machine with config: &{Name:addons-364775 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:addons-364775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 00:15:44.600461   22923 start.go:125] createHost starting for "" (driver="kvm2")
	I0927 00:15:44.602317   22923 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0927 00:15:44.602440   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:15:44.602479   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:15:44.616122   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33711
	I0927 00:15:44.616559   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:15:44.617071   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:15:44.617091   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:15:44.617371   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:15:44.617525   22923 main.go:141] libmachine: (addons-364775) Calling .GetMachineName
	I0927 00:15:44.617640   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:15:44.617745   22923 start.go:159] libmachine.API.Create for "addons-364775" (driver="kvm2")
	I0927 00:15:44.617772   22923 client.go:168] LocalClient.Create starting
	I0927 00:15:44.617816   22923 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem
	I0927 00:15:44.773115   22923 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem
	I0927 00:15:45.021396   22923 main.go:141] libmachine: Running pre-create checks...
	I0927 00:15:45.021422   22923 main.go:141] libmachine: (addons-364775) Calling .PreCreateCheck
	I0927 00:15:45.021848   22923 main.go:141] libmachine: (addons-364775) Calling .GetConfigRaw
	I0927 00:15:45.022228   22923 main.go:141] libmachine: Creating machine...
	I0927 00:15:45.022241   22923 main.go:141] libmachine: (addons-364775) Calling .Create
	I0927 00:15:45.022354   22923 main.go:141] libmachine: (addons-364775) Creating KVM machine...
	I0927 00:15:45.023487   22923 main.go:141] libmachine: (addons-364775) DBG | found existing default KVM network
	I0927 00:15:45.024131   22923 main.go:141] libmachine: (addons-364775) DBG | I0927 00:15:45.024009   22945 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002111f0}
	I0927 00:15:45.024171   22923 main.go:141] libmachine: (addons-364775) DBG | created network xml: 
	I0927 00:15:45.024195   22923 main.go:141] libmachine: (addons-364775) DBG | <network>
	I0927 00:15:45.024208   22923 main.go:141] libmachine: (addons-364775) DBG |   <name>mk-addons-364775</name>
	I0927 00:15:45.024226   22923 main.go:141] libmachine: (addons-364775) DBG |   <dns enable='no'/>
	I0927 00:15:45.024270   22923 main.go:141] libmachine: (addons-364775) DBG |   
	I0927 00:15:45.024294   22923 main.go:141] libmachine: (addons-364775) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0927 00:15:45.024303   22923 main.go:141] libmachine: (addons-364775) DBG |     <dhcp>
	I0927 00:15:45.024311   22923 main.go:141] libmachine: (addons-364775) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0927 00:15:45.024318   22923 main.go:141] libmachine: (addons-364775) DBG |     </dhcp>
	I0927 00:15:45.024325   22923 main.go:141] libmachine: (addons-364775) DBG |   </ip>
	I0927 00:15:45.024331   22923 main.go:141] libmachine: (addons-364775) DBG |   
	I0927 00:15:45.024337   22923 main.go:141] libmachine: (addons-364775) DBG | </network>
	I0927 00:15:45.024345   22923 main.go:141] libmachine: (addons-364775) DBG | 
	I0927 00:15:45.029333   22923 main.go:141] libmachine: (addons-364775) DBG | trying to create private KVM network mk-addons-364775 192.168.39.0/24...
	I0927 00:15:45.091813   22923 main.go:141] libmachine: (addons-364775) DBG | private KVM network mk-addons-364775 192.168.39.0/24 created
	I0927 00:15:45.091853   22923 main.go:141] libmachine: (addons-364775) Setting up store path in /home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775 ...
	I0927 00:15:45.091879   22923 main.go:141] libmachine: (addons-364775) DBG | I0927 00:15:45.091772   22945 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19711-14935/.minikube
	I0927 00:15:45.091922   22923 main.go:141] libmachine: (addons-364775) Building disk image from file:///home/jenkins/minikube-integration/19711-14935/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0927 00:15:45.091959   22923 main.go:141] libmachine: (addons-364775) Downloading /home/jenkins/minikube-integration/19711-14935/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19711-14935/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0927 00:15:45.348792   22923 main.go:141] libmachine: (addons-364775) DBG | I0927 00:15:45.348685   22945 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa...
	I0927 00:15:45.574205   22923 main.go:141] libmachine: (addons-364775) DBG | I0927 00:15:45.574081   22945 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/addons-364775.rawdisk...
	I0927 00:15:45.574239   22923 main.go:141] libmachine: (addons-364775) DBG | Writing magic tar header
	I0927 00:15:45.574255   22923 main.go:141] libmachine: (addons-364775) DBG | Writing SSH key tar header
	I0927 00:15:45.574273   22923 main.go:141] libmachine: (addons-364775) DBG | I0927 00:15:45.574195   22945 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775 ...
	I0927 00:15:45.574290   22923 main.go:141] libmachine: (addons-364775) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775
	I0927 00:15:45.574318   22923 main.go:141] libmachine: (addons-364775) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935/.minikube/machines
	I0927 00:15:45.574327   22923 main.go:141] libmachine: (addons-364775) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935/.minikube
	I0927 00:15:45.574338   22923 main.go:141] libmachine: (addons-364775) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935
	I0927 00:15:45.574351   22923 main.go:141] libmachine: (addons-364775) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0927 00:15:45.574364   22923 main.go:141] libmachine: (addons-364775) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775 (perms=drwx------)
	I0927 00:15:45.574372   22923 main.go:141] libmachine: (addons-364775) DBG | Checking permissions on dir: /home/jenkins
	I0927 00:15:45.574384   22923 main.go:141] libmachine: (addons-364775) DBG | Checking permissions on dir: /home
	I0927 00:15:45.574390   22923 main.go:141] libmachine: (addons-364775) DBG | Skipping /home - not owner
	I0927 00:15:45.574400   22923 main.go:141] libmachine: (addons-364775) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935/.minikube/machines (perms=drwxr-xr-x)
	I0927 00:15:45.574428   22923 main.go:141] libmachine: (addons-364775) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935/.minikube (perms=drwxr-xr-x)
	I0927 00:15:45.574447   22923 main.go:141] libmachine: (addons-364775) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935 (perms=drwxrwxr-x)
	I0927 00:15:45.574477   22923 main.go:141] libmachine: (addons-364775) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0927 00:15:45.574496   22923 main.go:141] libmachine: (addons-364775) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0927 00:15:45.574506   22923 main.go:141] libmachine: (addons-364775) Creating domain...
	I0927 00:15:45.575497   22923 main.go:141] libmachine: (addons-364775) define libvirt domain using xml: 
	I0927 00:15:45.575515   22923 main.go:141] libmachine: (addons-364775) <domain type='kvm'>
	I0927 00:15:45.575525   22923 main.go:141] libmachine: (addons-364775)   <name>addons-364775</name>
	I0927 00:15:45.575532   22923 main.go:141] libmachine: (addons-364775)   <memory unit='MiB'>4000</memory>
	I0927 00:15:45.575541   22923 main.go:141] libmachine: (addons-364775)   <vcpu>2</vcpu>
	I0927 00:15:45.575545   22923 main.go:141] libmachine: (addons-364775)   <features>
	I0927 00:15:45.575552   22923 main.go:141] libmachine: (addons-364775)     <acpi/>
	I0927 00:15:45.575556   22923 main.go:141] libmachine: (addons-364775)     <apic/>
	I0927 00:15:45.575560   22923 main.go:141] libmachine: (addons-364775)     <pae/>
	I0927 00:15:45.575566   22923 main.go:141] libmachine: (addons-364775)     
	I0927 00:15:45.575571   22923 main.go:141] libmachine: (addons-364775)   </features>
	I0927 00:15:45.575576   22923 main.go:141] libmachine: (addons-364775)   <cpu mode='host-passthrough'>
	I0927 00:15:45.575582   22923 main.go:141] libmachine: (addons-364775)   
	I0927 00:15:45.575591   22923 main.go:141] libmachine: (addons-364775)   </cpu>
	I0927 00:15:45.575601   22923 main.go:141] libmachine: (addons-364775)   <os>
	I0927 00:15:45.575614   22923 main.go:141] libmachine: (addons-364775)     <type>hvm</type>
	I0927 00:15:45.575634   22923 main.go:141] libmachine: (addons-364775)     <boot dev='cdrom'/>
	I0927 00:15:45.575652   22923 main.go:141] libmachine: (addons-364775)     <boot dev='hd'/>
	I0927 00:15:45.575681   22923 main.go:141] libmachine: (addons-364775)     <bootmenu enable='no'/>
	I0927 00:15:45.575702   22923 main.go:141] libmachine: (addons-364775)   </os>
	I0927 00:15:45.575714   22923 main.go:141] libmachine: (addons-364775)   <devices>
	I0927 00:15:45.575723   22923 main.go:141] libmachine: (addons-364775)     <disk type='file' device='cdrom'>
	I0927 00:15:45.575750   22923 main.go:141] libmachine: (addons-364775)       <source file='/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/boot2docker.iso'/>
	I0927 00:15:45.575762   22923 main.go:141] libmachine: (addons-364775)       <target dev='hdc' bus='scsi'/>
	I0927 00:15:45.575772   22923 main.go:141] libmachine: (addons-364775)       <readonly/>
	I0927 00:15:45.575786   22923 main.go:141] libmachine: (addons-364775)     </disk>
	I0927 00:15:45.575799   22923 main.go:141] libmachine: (addons-364775)     <disk type='file' device='disk'>
	I0927 00:15:45.575811   22923 main.go:141] libmachine: (addons-364775)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0927 00:15:45.575825   22923 main.go:141] libmachine: (addons-364775)       <source file='/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/addons-364775.rawdisk'/>
	I0927 00:15:45.575836   22923 main.go:141] libmachine: (addons-364775)       <target dev='hda' bus='virtio'/>
	I0927 00:15:45.575845   22923 main.go:141] libmachine: (addons-364775)     </disk>
	I0927 00:15:45.575855   22923 main.go:141] libmachine: (addons-364775)     <interface type='network'>
	I0927 00:15:45.575866   22923 main.go:141] libmachine: (addons-364775)       <source network='mk-addons-364775'/>
	I0927 00:15:45.575877   22923 main.go:141] libmachine: (addons-364775)       <model type='virtio'/>
	I0927 00:15:45.575888   22923 main.go:141] libmachine: (addons-364775)     </interface>
	I0927 00:15:45.575896   22923 main.go:141] libmachine: (addons-364775)     <interface type='network'>
	I0927 00:15:45.575909   22923 main.go:141] libmachine: (addons-364775)       <source network='default'/>
	I0927 00:15:45.575924   22923 main.go:141] libmachine: (addons-364775)       <model type='virtio'/>
	I0927 00:15:45.575936   22923 main.go:141] libmachine: (addons-364775)     </interface>
	I0927 00:15:45.575946   22923 main.go:141] libmachine: (addons-364775)     <serial type='pty'>
	I0927 00:15:45.575957   22923 main.go:141] libmachine: (addons-364775)       <target port='0'/>
	I0927 00:15:45.575966   22923 main.go:141] libmachine: (addons-364775)     </serial>
	I0927 00:15:45.575977   22923 main.go:141] libmachine: (addons-364775)     <console type='pty'>
	I0927 00:15:45.575996   22923 main.go:141] libmachine: (addons-364775)       <target type='serial' port='0'/>
	I0927 00:15:45.576007   22923 main.go:141] libmachine: (addons-364775)     </console>
	I0927 00:15:45.576016   22923 main.go:141] libmachine: (addons-364775)     <rng model='virtio'>
	I0927 00:15:45.576028   22923 main.go:141] libmachine: (addons-364775)       <backend model='random'>/dev/random</backend>
	I0927 00:15:45.576035   22923 main.go:141] libmachine: (addons-364775)     </rng>
	I0927 00:15:45.576045   22923 main.go:141] libmachine: (addons-364775)     
	I0927 00:15:45.576056   22923 main.go:141] libmachine: (addons-364775)     
	I0927 00:15:45.576064   22923 main.go:141] libmachine: (addons-364775)   </devices>
	I0927 00:15:45.576075   22923 main.go:141] libmachine: (addons-364775) </domain>
	I0927 00:15:45.576084   22923 main.go:141] libmachine: (addons-364775) 
	I0927 00:15:45.581822   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:be:33:ab in network default
	I0927 00:15:45.582377   22923 main.go:141] libmachine: (addons-364775) Ensuring networks are active...
	I0927 00:15:45.582391   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:15:45.583142   22923 main.go:141] libmachine: (addons-364775) Ensuring network default is active
	I0927 00:15:45.583582   22923 main.go:141] libmachine: (addons-364775) Ensuring network mk-addons-364775 is active
	I0927 00:15:45.584264   22923 main.go:141] libmachine: (addons-364775) Getting domain xml...
	I0927 00:15:45.585015   22923 main.go:141] libmachine: (addons-364775) Creating domain...
	I0927 00:15:46.949358   22923 main.go:141] libmachine: (addons-364775) Waiting to get IP...
	I0927 00:15:46.950076   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:15:46.950580   22923 main.go:141] libmachine: (addons-364775) DBG | unable to find current IP address of domain addons-364775 in network mk-addons-364775
	I0927 00:15:46.950607   22923 main.go:141] libmachine: (addons-364775) DBG | I0927 00:15:46.950544   22945 retry.go:31] will retry after 202.642864ms: waiting for machine to come up
	I0927 00:15:47.155069   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:15:47.155563   22923 main.go:141] libmachine: (addons-364775) DBG | unable to find current IP address of domain addons-364775 in network mk-addons-364775
	I0927 00:15:47.155584   22923 main.go:141] libmachine: (addons-364775) DBG | I0927 00:15:47.155427   22945 retry.go:31] will retry after 370.186358ms: waiting for machine to come up
	I0927 00:15:47.526779   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:15:47.527165   22923 main.go:141] libmachine: (addons-364775) DBG | unable to find current IP address of domain addons-364775 in network mk-addons-364775
	I0927 00:15:47.527193   22923 main.go:141] libmachine: (addons-364775) DBG | I0927 00:15:47.527118   22945 retry.go:31] will retry after 435.004567ms: waiting for machine to come up
	I0927 00:15:47.963669   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:15:47.964030   22923 main.go:141] libmachine: (addons-364775) DBG | unable to find current IP address of domain addons-364775 in network mk-addons-364775
	I0927 00:15:47.964059   22923 main.go:141] libmachine: (addons-364775) DBG | I0927 00:15:47.963977   22945 retry.go:31] will retry after 546.011839ms: waiting for machine to come up
	I0927 00:15:48.511601   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:15:48.512026   22923 main.go:141] libmachine: (addons-364775) DBG | unable to find current IP address of domain addons-364775 in network mk-addons-364775
	I0927 00:15:48.512071   22923 main.go:141] libmachine: (addons-364775) DBG | I0927 00:15:48.511990   22945 retry.go:31] will retry after 469.054965ms: waiting for machine to come up
	I0927 00:15:48.982621   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:15:48.982989   22923 main.go:141] libmachine: (addons-364775) DBG | unable to find current IP address of domain addons-364775 in network mk-addons-364775
	I0927 00:15:48.983018   22923 main.go:141] libmachine: (addons-364775) DBG | I0927 00:15:48.982935   22945 retry.go:31] will retry after 651.072969ms: waiting for machine to come up
	I0927 00:15:49.635407   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:15:49.635833   22923 main.go:141] libmachine: (addons-364775) DBG | unable to find current IP address of domain addons-364775 in network mk-addons-364775
	I0927 00:15:49.635868   22923 main.go:141] libmachine: (addons-364775) DBG | I0927 00:15:49.635780   22945 retry.go:31] will retry after 787.572834ms: waiting for machine to come up
	I0927 00:15:50.425318   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:15:50.425646   22923 main.go:141] libmachine: (addons-364775) DBG | unable to find current IP address of domain addons-364775 in network mk-addons-364775
	I0927 00:15:50.425674   22923 main.go:141] libmachine: (addons-364775) DBG | I0927 00:15:50.425607   22945 retry.go:31] will retry after 1.14927096s: waiting for machine to come up
	I0927 00:15:51.576285   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:15:51.576584   22923 main.go:141] libmachine: (addons-364775) DBG | unable to find current IP address of domain addons-364775 in network mk-addons-364775
	I0927 00:15:51.576610   22923 main.go:141] libmachine: (addons-364775) DBG | I0927 00:15:51.576552   22945 retry.go:31] will retry after 1.476584274s: waiting for machine to come up
	I0927 00:15:53.055137   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:15:53.055575   22923 main.go:141] libmachine: (addons-364775) DBG | unable to find current IP address of domain addons-364775 in network mk-addons-364775
	I0927 00:15:53.055599   22923 main.go:141] libmachine: (addons-364775) DBG | I0927 00:15:53.055538   22945 retry.go:31] will retry after 1.729538445s: waiting for machine to come up
	I0927 00:15:54.786058   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:15:54.786491   22923 main.go:141] libmachine: (addons-364775) DBG | unable to find current IP address of domain addons-364775 in network mk-addons-364775
	I0927 00:15:54.786519   22923 main.go:141] libmachine: (addons-364775) DBG | I0927 00:15:54.786450   22945 retry.go:31] will retry after 2.631307121s: waiting for machine to come up
	I0927 00:15:57.421088   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:15:57.421427   22923 main.go:141] libmachine: (addons-364775) DBG | unable to find current IP address of domain addons-364775 in network mk-addons-364775
	I0927 00:15:57.421454   22923 main.go:141] libmachine: (addons-364775) DBG | I0927 00:15:57.421379   22945 retry.go:31] will retry after 2.652911492s: waiting for machine to come up
	I0927 00:16:00.075506   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:00.075951   22923 main.go:141] libmachine: (addons-364775) DBG | unable to find current IP address of domain addons-364775 in network mk-addons-364775
	I0927 00:16:00.075981   22923 main.go:141] libmachine: (addons-364775) DBG | I0927 00:16:00.075893   22945 retry.go:31] will retry after 3.30922874s: waiting for machine to come up
	I0927 00:16:03.388283   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:03.388607   22923 main.go:141] libmachine: (addons-364775) DBG | unable to find current IP address of domain addons-364775 in network mk-addons-364775
	I0927 00:16:03.388628   22923 main.go:141] libmachine: (addons-364775) DBG | I0927 00:16:03.388576   22945 retry.go:31] will retry after 3.510064019s: waiting for machine to come up
	I0927 00:16:06.901968   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:06.902384   22923 main.go:141] libmachine: (addons-364775) Found IP for machine: 192.168.39.169
	I0927 00:16:06.902410   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has current primary IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:06.902418   22923 main.go:141] libmachine: (addons-364775) Reserving static IP address...
	I0927 00:16:06.902791   22923 main.go:141] libmachine: (addons-364775) DBG | unable to find host DHCP lease matching {name: "addons-364775", mac: "52:54:00:e5:bb:bf", ip: "192.168.39.169"} in network mk-addons-364775
	I0927 00:16:06.970142   22923 main.go:141] libmachine: (addons-364775) Reserved static IP address: 192.168.39.169
	I0927 00:16:06.970170   22923 main.go:141] libmachine: (addons-364775) Waiting for SSH to be available...
	I0927 00:16:06.970179   22923 main.go:141] libmachine: (addons-364775) DBG | Getting to WaitForSSH function...
	I0927 00:16:06.972291   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:06.972697   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:06.972723   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:06.972887   22923 main.go:141] libmachine: (addons-364775) DBG | Using SSH client type: external
	I0927 00:16:06.972906   22923 main.go:141] libmachine: (addons-364775) DBG | Using SSH private key: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa (-rw-------)
	I0927 00:16:06.972933   22923 main.go:141] libmachine: (addons-364775) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.169 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 00:16:06.972951   22923 main.go:141] libmachine: (addons-364775) DBG | About to run SSH command:
	I0927 00:16:06.972962   22923 main.go:141] libmachine: (addons-364775) DBG | exit 0
	I0927 00:16:07.103385   22923 main.go:141] libmachine: (addons-364775) DBG | SSH cmd err, output: <nil>: 
	I0927 00:16:07.103681   22923 main.go:141] libmachine: (addons-364775) KVM machine creation complete!
	I0927 00:16:07.103911   22923 main.go:141] libmachine: (addons-364775) Calling .GetConfigRaw
	I0927 00:16:07.104438   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:07.104611   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:07.104753   22923 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0927 00:16:07.104765   22923 main.go:141] libmachine: (addons-364775) Calling .GetState
	I0927 00:16:07.105844   22923 main.go:141] libmachine: Detecting operating system of created instance...
	I0927 00:16:07.105857   22923 main.go:141] libmachine: Waiting for SSH to be available...
	I0927 00:16:07.105862   22923 main.go:141] libmachine: Getting to WaitForSSH function...
	I0927 00:16:07.105867   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:07.107901   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:07.108215   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:07.108246   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:07.108338   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:07.108493   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:07.108634   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:07.108761   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:07.108901   22923 main.go:141] libmachine: Using SSH client type: native
	I0927 00:16:07.109070   22923 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I0927 00:16:07.109080   22923 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0927 00:16:07.218435   22923 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 00:16:07.218469   22923 main.go:141] libmachine: Detecting the provisioner...
	I0927 00:16:07.218478   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:07.221204   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:07.221494   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:07.221517   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:07.221683   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:07.221860   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:07.222017   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:07.222134   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:07.222276   22923 main.go:141] libmachine: Using SSH client type: native
	I0927 00:16:07.222428   22923 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I0927 00:16:07.222439   22923 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0927 00:16:07.332074   22923 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0927 00:16:07.332151   22923 main.go:141] libmachine: found compatible host: buildroot
	I0927 00:16:07.332158   22923 main.go:141] libmachine: Provisioning with buildroot...
	I0927 00:16:07.332165   22923 main.go:141] libmachine: (addons-364775) Calling .GetMachineName
	I0927 00:16:07.332377   22923 buildroot.go:166] provisioning hostname "addons-364775"
	I0927 00:16:07.332406   22923 main.go:141] libmachine: (addons-364775) Calling .GetMachineName
	I0927 00:16:07.332594   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:07.334888   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:07.335193   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:07.335220   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:07.335325   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:07.335483   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:07.335621   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:07.335776   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:07.335956   22923 main.go:141] libmachine: Using SSH client type: native
	I0927 00:16:07.336121   22923 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I0927 00:16:07.336143   22923 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-364775 && echo "addons-364775" | sudo tee /etc/hostname
	I0927 00:16:07.457193   22923 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-364775
	
	I0927 00:16:07.457219   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:07.459657   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:07.459964   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:07.459992   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:07.460170   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:07.460303   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:07.460415   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:07.460529   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:07.460689   22923 main.go:141] libmachine: Using SSH client type: native
	I0927 00:16:07.460874   22923 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I0927 00:16:07.460892   22923 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-364775' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-364775/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-364775' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 00:16:07.576205   22923 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 00:16:07.576252   22923 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19711-14935/.minikube CaCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19711-14935/.minikube}
	I0927 00:16:07.576312   22923 buildroot.go:174] setting up certificates
	I0927 00:16:07.576329   22923 provision.go:84] configureAuth start
	I0927 00:16:07.576347   22923 main.go:141] libmachine: (addons-364775) Calling .GetMachineName
	I0927 00:16:07.576623   22923 main.go:141] libmachine: (addons-364775) Calling .GetIP
	I0927 00:16:07.579617   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:07.579974   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:07.580000   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:07.580131   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:07.582401   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:07.582745   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:07.582770   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:07.582903   22923 provision.go:143] copyHostCerts
	I0927 00:16:07.582979   22923 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem (1078 bytes)
	I0927 00:16:07.583120   22923 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem (1123 bytes)
	I0927 00:16:07.583203   22923 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem (1675 bytes)
	I0927 00:16:07.583299   22923 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem org=jenkins.addons-364775 san=[127.0.0.1 192.168.39.169 addons-364775 localhost minikube]
	I0927 00:16:07.704457   22923 provision.go:177] copyRemoteCerts
	I0927 00:16:07.704522   22923 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 00:16:07.704551   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:07.707097   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:07.707455   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:07.707485   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:07.707628   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:07.707808   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:07.707921   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:07.708037   22923 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa Username:docker}
	I0927 00:16:07.793441   22923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0927 00:16:07.816635   22923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0927 00:16:07.839412   22923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0927 00:16:07.861848   22923 provision.go:87] duration metric: took 285.503545ms to configureAuth
	I0927 00:16:07.861873   22923 buildroot.go:189] setting minikube options for container-runtime
	I0927 00:16:07.862050   22923 config.go:182] Loaded profile config "addons-364775": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 00:16:07.862134   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:07.864754   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:07.865082   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:07.865107   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:07.865293   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:07.865475   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:07.865626   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:07.865739   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:07.865871   22923 main.go:141] libmachine: Using SSH client type: native
	I0927 00:16:07.866074   22923 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I0927 00:16:07.866090   22923 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 00:16:08.093802   22923 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 00:16:08.093837   22923 main.go:141] libmachine: Checking connection to Docker...
	I0927 00:16:08.093848   22923 main.go:141] libmachine: (addons-364775) Calling .GetURL
	I0927 00:16:08.095002   22923 main.go:141] libmachine: (addons-364775) DBG | Using libvirt version 6000000
	I0927 00:16:08.097051   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:08.097385   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:08.097422   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:08.097515   22923 main.go:141] libmachine: Docker is up and running!
	I0927 00:16:08.097527   22923 main.go:141] libmachine: Reticulating splines...
	I0927 00:16:08.097535   22923 client.go:171] duration metric: took 23.479752106s to LocalClient.Create
	I0927 00:16:08.097566   22923 start.go:167] duration metric: took 23.479821174s to libmachine.API.Create "addons-364775"
	I0927 00:16:08.097589   22923 start.go:293] postStartSetup for "addons-364775" (driver="kvm2")
	I0927 00:16:08.097606   22923 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 00:16:08.097627   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:08.097833   22923 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 00:16:08.097854   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:08.099703   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:08.099981   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:08.100006   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:08.100126   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:08.100298   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:08.100435   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:08.100561   22923 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa Username:docker}
	I0927 00:16:08.186017   22923 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 00:16:08.190011   22923 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 00:16:08.190031   22923 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/addons for local assets ...
	I0927 00:16:08.190101   22923 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/files for local assets ...
	I0927 00:16:08.190129   22923 start.go:296] duration metric: took 92.527439ms for postStartSetup
	I0927 00:16:08.190155   22923 main.go:141] libmachine: (addons-364775) Calling .GetConfigRaw
	I0927 00:16:08.190759   22923 main.go:141] libmachine: (addons-364775) Calling .GetIP
	I0927 00:16:08.193058   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:08.193355   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:08.193381   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:08.193557   22923 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/config.json ...
	I0927 00:16:08.193708   22923 start.go:128] duration metric: took 23.593238722s to createHost
	I0927 00:16:08.193728   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:08.195773   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:08.196120   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:08.196166   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:08.196300   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:08.196468   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:08.196582   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:08.196721   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:08.196856   22923 main.go:141] libmachine: Using SSH client type: native
	I0927 00:16:08.197036   22923 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I0927 00:16:08.197048   22923 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 00:16:08.303996   22923 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727396168.279190965
	
	I0927 00:16:08.304020   22923 fix.go:216] guest clock: 1727396168.279190965
	I0927 00:16:08.304027   22923 fix.go:229] Guest: 2024-09-27 00:16:08.279190965 +0000 UTC Remote: 2024-09-27 00:16:08.193719171 +0000 UTC m=+23.688310296 (delta=85.471794ms)
	I0927 00:16:08.304044   22923 fix.go:200] guest clock delta is within tolerance: 85.471794ms
	I0927 00:16:08.304048   22923 start.go:83] releasing machines lock for "addons-364775", held for 23.703640756s
	I0927 00:16:08.304069   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:08.304317   22923 main.go:141] libmachine: (addons-364775) Calling .GetIP
	I0927 00:16:08.306988   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:08.307381   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:08.307407   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:08.307561   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:08.307997   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:08.308150   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:08.308237   22923 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 00:16:08.308288   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:08.308351   22923 ssh_runner.go:195] Run: cat /version.json
	I0927 00:16:08.308378   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:08.310668   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:08.310969   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:08.310997   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:08.311014   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:08.311153   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:08.311324   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:08.311389   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:08.311408   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:08.311461   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:08.311590   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:08.311614   22923 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa Username:docker}
	I0927 00:16:08.311722   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:08.311824   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:08.311953   22923 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa Username:docker}
	I0927 00:16:08.388567   22923 ssh_runner.go:195] Run: systemctl --version
	I0927 00:16:08.413004   22923 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 00:16:08.574576   22923 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 00:16:08.581322   22923 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 00:16:08.581391   22923 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 00:16:08.597487   22923 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0927 00:16:08.597509   22923 start.go:495] detecting cgroup driver to use...
	I0927 00:16:08.597566   22923 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 00:16:08.612247   22923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 00:16:08.625077   22923 docker.go:217] disabling cri-docker service (if available) ...
	I0927 00:16:08.625130   22923 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 00:16:08.637473   22923 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 00:16:08.650051   22923 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 00:16:08.758188   22923 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 00:16:08.913236   22923 docker.go:233] disabling docker service ...
	I0927 00:16:08.913320   22923 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 00:16:08.927426   22923 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 00:16:08.940272   22923 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 00:16:09.057168   22923 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 00:16:09.169370   22923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 00:16:09.184123   22923 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 00:16:09.202228   22923 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0927 00:16:09.202290   22923 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:16:09.212677   22923 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 00:16:09.212740   22923 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:16:09.223105   22923 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:16:09.233431   22923 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:16:09.243818   22923 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 00:16:09.254480   22923 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:16:09.265026   22923 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:16:09.282615   22923 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:16:09.293542   22923 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 00:16:09.303356   22923 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0927 00:16:09.303424   22923 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0927 00:16:09.315981   22923 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 00:16:09.325606   22923 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:16:09.439247   22923 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0927 00:16:09.527367   22923 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 00:16:09.527468   22923 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 00:16:09.532165   22923 start.go:563] Will wait 60s for crictl version
	I0927 00:16:09.532216   22923 ssh_runner.go:195] Run: which crictl
	I0927 00:16:09.535820   22923 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 00:16:09.572264   22923 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0927 00:16:09.572401   22923 ssh_runner.go:195] Run: crio --version
	I0927 00:16:09.599589   22923 ssh_runner.go:195] Run: crio --version
	I0927 00:16:09.627068   22923 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0927 00:16:09.628232   22923 main.go:141] libmachine: (addons-364775) Calling .GetIP
	I0927 00:16:09.630667   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:09.630995   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:09.631023   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:09.631180   22923 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0927 00:16:09.635187   22923 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 00:16:09.647618   22923 kubeadm.go:883] updating cluster {Name:addons-364775 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-364775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.169 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 00:16:09.647751   22923 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 00:16:09.647799   22923 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 00:16:09.680511   22923 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0927 00:16:09.680588   22923 ssh_runner.go:195] Run: which lz4
	I0927 00:16:09.684511   22923 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0927 00:16:09.688651   22923 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0927 00:16:09.688692   22923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0927 00:16:10.959682   22923 crio.go:462] duration metric: took 1.275200656s to copy over tarball
	I0927 00:16:10.959746   22923 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0927 00:16:13.025278   22923 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.065510814s)
	I0927 00:16:13.025311   22923 crio.go:469] duration metric: took 2.065601709s to extract the tarball
	I0927 00:16:13.025322   22923 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0927 00:16:13.061932   22923 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 00:16:13.107912   22923 crio.go:514] all images are preloaded for cri-o runtime.
	I0927 00:16:13.107939   22923 cache_images.go:84] Images are preloaded, skipping loading
	I0927 00:16:13.107947   22923 kubeadm.go:934] updating node { 192.168.39.169 8443 v1.31.1 crio true true} ...
	I0927 00:16:13.108033   22923 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-364775 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.169
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-364775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 00:16:13.108095   22923 ssh_runner.go:195] Run: crio config
	I0927 00:16:13.153533   22923 cni.go:84] Creating CNI manager for ""
	I0927 00:16:13.153555   22923 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 00:16:13.153566   22923 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 00:16:13.153586   22923 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.169 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-364775 NodeName:addons-364775 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.169"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.169 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0927 00:16:13.153691   22923 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.169
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-364775"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.169
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.169"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0927 00:16:13.153746   22923 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 00:16:13.163635   22923 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 00:16:13.163702   22923 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0927 00:16:13.172959   22923 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0927 00:16:13.190510   22923 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 00:16:13.207214   22923 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0927 00:16:13.224712   22923 ssh_runner.go:195] Run: grep 192.168.39.169	control-plane.minikube.internal$ /etc/hosts
	I0927 00:16:13.228436   22923 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.169	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 00:16:13.241465   22923 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:16:13.367179   22923 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 00:16:13.383473   22923 certs.go:68] Setting up /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775 for IP: 192.168.39.169
	I0927 00:16:13.383499   22923 certs.go:194] generating shared ca certs ...
	I0927 00:16:13.383515   22923 certs.go:226] acquiring lock for ca certs: {Name:mkdfc5b7e93f77f5ae72cc653545624244421aa5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:16:13.383652   22923 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key
	I0927 00:16:13.575678   22923 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt ...
	I0927 00:16:13.575704   22923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt: {Name:mk3ad08ac2703aff467792f34abbf756e11c2872 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:16:13.575901   22923 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key ...
	I0927 00:16:13.575916   22923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key: {Name:mkab43d698e5658555844624b3079e901a8ded86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:16:13.576010   22923 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key
	I0927 00:16:13.751373   22923 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.crt ...
	I0927 00:16:13.751404   22923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.crt: {Name:mk8e225d38c1311b0e8a7348aa1fbee6e6fcbd70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:16:13.751579   22923 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key ...
	I0927 00:16:13.751594   22923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key: {Name:mk81ac2481482dece22299e0ff67c97675fb9f81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:16:13.751685   22923 certs.go:256] generating profile certs ...
	I0927 00:16:13.751745   22923 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/client.key
	I0927 00:16:13.751759   22923 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/client.crt with IP's: []
	I0927 00:16:13.996696   22923 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/client.crt ...
	I0927 00:16:13.996728   22923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/client.crt: {Name:mk4647826e81f09b562e4b6468be9da247fcab9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:16:13.996908   22923 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/client.key ...
	I0927 00:16:13.996922   22923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/client.key: {Name:mkdba807b5f103e151ba37e1747e2a749b1980c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:16:13.997015   22923 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/apiserver.key.9c90c6ee
	I0927 00:16:13.997035   22923 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/apiserver.crt.9c90c6ee with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.169]
	I0927 00:16:14.144098   22923 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/apiserver.crt.9c90c6ee ...
	I0927 00:16:14.144127   22923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/apiserver.crt.9c90c6ee: {Name:mkf743df3d4ae64c9bb8f8a6ebe4e814cf609961 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:16:14.144305   22923 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/apiserver.key.9c90c6ee ...
	I0927 00:16:14.144321   22923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/apiserver.key.9c90c6ee: {Name:mk43e7a262458556d97385e524b4828b4b015bf7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:16:14.144397   22923 certs.go:381] copying /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/apiserver.crt.9c90c6ee -> /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/apiserver.crt
	I0927 00:16:14.144467   22923 certs.go:385] copying /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/apiserver.key.9c90c6ee -> /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/apiserver.key
	I0927 00:16:14.144516   22923 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/proxy-client.key
	I0927 00:16:14.144533   22923 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/proxy-client.crt with IP's: []
	I0927 00:16:14.217209   22923 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/proxy-client.crt ...
	I0927 00:16:14.217236   22923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/proxy-client.crt: {Name:mk44b3f8e9e129ec5865925167df941ba0f63291 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:16:14.217379   22923 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/proxy-client.key ...
	I0927 00:16:14.217389   22923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/proxy-client.key: {Name:mkc2dd610a10002245981e0f1a9de7854a330937 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:16:14.217536   22923 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem (1671 bytes)
	I0927 00:16:14.217567   22923 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem (1078 bytes)
	I0927 00:16:14.217589   22923 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem (1123 bytes)
	I0927 00:16:14.217611   22923 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem (1675 bytes)
	I0927 00:16:14.218138   22923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 00:16:14.245205   22923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0927 00:16:14.273590   22923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 00:16:14.299930   22923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0927 00:16:14.322526   22923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0927 00:16:14.345010   22923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0927 00:16:14.368388   22923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 00:16:14.391414   22923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0927 00:16:14.413864   22923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 00:16:14.435858   22923 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 00:16:14.451548   22923 ssh_runner.go:195] Run: openssl version
	I0927 00:16:14.457242   22923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 00:16:14.467943   22923 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:16:14.472191   22923 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:16 /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:16:14.472238   22923 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:16:14.477640   22923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 00:16:14.488010   22923 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 00:16:14.491811   22923 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0927 00:16:14.491855   22923 kubeadm.go:392] StartCluster: {Name:addons-364775 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-364775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.169 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 00:16:14.491924   22923 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0927 00:16:14.491960   22923 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 00:16:14.524680   22923 cri.go:89] found id: ""
	I0927 00:16:14.524743   22923 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0927 00:16:14.534145   22923 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 00:16:14.545428   22923 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 00:16:14.556318   22923 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 00:16:14.556338   22923 kubeadm.go:157] found existing configuration files:
	
	I0927 00:16:14.556375   22923 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 00:16:14.566224   22923 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 00:16:14.566269   22923 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 00:16:14.576303   22923 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 00:16:14.585129   22923 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 00:16:14.585171   22923 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 00:16:14.594747   22923 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 00:16:14.603457   22923 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 00:16:14.603496   22923 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 00:16:14.612663   22923 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 00:16:14.621624   22923 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 00:16:14.621668   22923 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 00:16:14.631182   22923 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0927 00:16:14.689680   22923 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0927 00:16:14.689907   22923 kubeadm.go:310] [preflight] Running pre-flight checks
	I0927 00:16:14.787642   22923 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0927 00:16:14.787844   22923 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0927 00:16:14.787981   22923 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0927 00:16:14.796210   22923 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0927 00:16:14.933571   22923 out.go:235]   - Generating certificates and keys ...
	I0927 00:16:14.933713   22923 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0927 00:16:14.933803   22923 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0927 00:16:14.933906   22923 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0927 00:16:15.129675   22923 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0927 00:16:15.193399   22923 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0927 00:16:15.313134   22923 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0927 00:16:15.654187   22923 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0927 00:16:15.654296   22923 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-364775 localhost] and IPs [192.168.39.169 127.0.0.1 ::1]
	I0927 00:16:15.765696   22923 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0927 00:16:15.765874   22923 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-364775 localhost] and IPs [192.168.39.169 127.0.0.1 ::1]
	I0927 00:16:16.013868   22923 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0927 00:16:16.165681   22923 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0927 00:16:16.447703   22923 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0927 00:16:16.447794   22923 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0927 00:16:16.592680   22923 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0927 00:16:16.720016   22923 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0927 00:16:16.929585   22923 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0927 00:16:17.262835   22923 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0927 00:16:17.402806   22923 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0927 00:16:17.403246   22923 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0927 00:16:17.407265   22923 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0927 00:16:17.409098   22923 out.go:235]   - Booting up control plane ...
	I0927 00:16:17.409215   22923 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0927 00:16:17.409290   22923 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0927 00:16:17.410016   22923 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0927 00:16:17.425105   22923 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0927 00:16:17.433605   22923 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0927 00:16:17.433674   22923 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0927 00:16:17.565381   22923 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0927 00:16:17.565569   22923 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0927 00:16:19.065179   22923 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501169114s
	I0927 00:16:19.065301   22923 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0927 00:16:24.064418   22923 kubeadm.go:310] [api-check] The API server is healthy after 5.001577374s
	I0927 00:16:24.076690   22923 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0927 00:16:24.099966   22923 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0927 00:16:24.127484   22923 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0927 00:16:24.127678   22923 kubeadm.go:310] [mark-control-plane] Marking the node addons-364775 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0927 00:16:24.140308   22923 kubeadm.go:310] [bootstrap-token] Using token: pa4b34.sdki52w2nqhs0c2a
	I0927 00:16:24.141673   22923 out.go:235]   - Configuring RBAC rules ...
	I0927 00:16:24.141825   22923 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0927 00:16:24.147166   22923 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0927 00:16:24.155898   22923 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0927 00:16:24.161743   22923 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0927 00:16:24.165824   22923 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0927 00:16:24.168837   22923 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0927 00:16:24.472788   22923 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0927 00:16:24.898245   22923 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0927 00:16:25.470513   22923 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0927 00:16:25.471447   22923 kubeadm.go:310] 
	I0927 00:16:25.471556   22923 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0927 00:16:25.471575   22923 kubeadm.go:310] 
	I0927 00:16:25.471666   22923 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0927 00:16:25.471676   22923 kubeadm.go:310] 
	I0927 00:16:25.471699   22923 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0927 00:16:25.471877   22923 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0927 00:16:25.471929   22923 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0927 00:16:25.471935   22923 kubeadm.go:310] 
	I0927 00:16:25.471976   22923 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0927 00:16:25.471982   22923 kubeadm.go:310] 
	I0927 00:16:25.472038   22923 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0927 00:16:25.472051   22923 kubeadm.go:310] 
	I0927 00:16:25.472141   22923 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0927 00:16:25.472326   22923 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0927 00:16:25.472450   22923 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0927 00:16:25.472464   22923 kubeadm.go:310] 
	I0927 00:16:25.472573   22923 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0927 00:16:25.472648   22923 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0927 00:16:25.472666   22923 kubeadm.go:310] 
	I0927 00:16:25.472805   22923 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token pa4b34.sdki52w2nqhs0c2a \
	I0927 00:16:25.472942   22923 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e8f2b64245b2533f2f6907f851e4fd61c8df67374e05cb5be25be73a61920f8e \
	I0927 00:16:25.472971   22923 kubeadm.go:310] 	--control-plane 
	I0927 00:16:25.472980   22923 kubeadm.go:310] 
	I0927 00:16:25.473098   22923 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0927 00:16:25.473107   22923 kubeadm.go:310] 
	I0927 00:16:25.473226   22923 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token pa4b34.sdki52w2nqhs0c2a \
	I0927 00:16:25.473365   22923 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e8f2b64245b2533f2f6907f851e4fd61c8df67374e05cb5be25be73a61920f8e 
	I0927 00:16:25.474005   22923 kubeadm.go:310] W0927 00:16:14.668581     820 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 00:16:25.474358   22923 kubeadm.go:310] W0927 00:16:14.670545     820 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 00:16:25.474505   22923 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
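	[note] The two deprecation warnings above refer to kubeadm's own migration tool; a hedged, by-hand equivalent using the config path this run passes to kubeadm init earlier in this log (the output filename here is illustrative, not something the test writes) would be:
	    # Hypothetical manual migration of minikube's generated kubeadm config
	    # off the deprecated kubeadm.k8s.io/v1beta3 API, run inside the node:
	    sudo kubeadm config migrate \
	      --old-config /var/tmp/minikube/kubeadm.yaml \
	      --new-config /var/tmp/minikube/kubeadm-migrated.yaml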
	I0927 00:16:25.474538   22923 cni.go:84] Creating CNI manager for ""
	I0927 00:16:25.474550   22923 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 00:16:25.476900   22923 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0927 00:16:25.477915   22923 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0927 00:16:25.488407   22923 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
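	[note] The 496-byte file scp'd above is the bridge CNI config minikube selects for the kvm2 + crio combination. To inspect what actually landed on the node (profile name taken from this run; the command is an illustrative sketch, not part of the test):
	    # Print the bridge CNI plugin chain that was just written:
	    minikube -p addons-364775 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist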
	I0927 00:16:25.508648   22923 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0927 00:16:25.508704   22923 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:16:25.508750   22923 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-364775 minikube.k8s.io/updated_at=2024_09_27T00_16_25_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625 minikube.k8s.io/name=addons-364775 minikube.k8s.io/primary=true
	I0927 00:16:25.526229   22923 ops.go:34] apiserver oom_adj: -16
	I0927 00:16:25.629503   22923 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:16:26.130228   22923 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:16:26.629915   22923 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:16:27.130024   22923 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:16:27.630537   22923 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:16:28.130314   22923 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:16:28.630463   22923 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:16:29.130429   22923 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:16:29.630477   22923 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:16:30.129687   22923 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:16:30.257341   22923 kubeadm.go:1113] duration metric: took 4.748689071s to wait for elevateKubeSystemPrivileges
	I0927 00:16:30.257376   22923 kubeadm.go:394] duration metric: took 15.765523535s to StartCluster
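	[note] The 4.7s elevateKubeSystemPrivileges wait corresponds to the repeated "kubectl get sa default" probes above: the cluster-admin binding created at 00:16:25.508750 is only usable once kube-controller-manager has created the default ServiceAccount, so the runner polls for it. A hedged standalone sketch of the same wait (not minikube's code, just the shape of the loop, reusing the exact command from the log):
	    # Poll until the "default" ServiceAccount exists:
	    until sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default \
	          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done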
	I0927 00:16:30.257393   22923 settings.go:142] acquiring lock: {Name:mk5dca3ab86dd3a71947d9d84c3d32131258c6f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:16:30.257497   22923 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 00:16:30.257927   22923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/kubeconfig: {Name:mke01ed683bdb96463571316956510763878395f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:16:30.258123   22923 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0927 00:16:30.258153   22923 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.169 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 00:16:30.258207   22923 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
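	[note] The toEnable map above is the full addon selection for this profile. Outside the test harness the same entries would be toggled through the addons CLI; a few hedged manual equivalents (profile name from this run, not commands the test issues):
	    minikube -p addons-364775 addons enable registry
	    minikube -p addons-364775 addons enable metrics-server
	    minikube -p addons-364775 addons enable ingress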
	I0927 00:16:30.258332   22923 addons.go:69] Setting yakd=true in profile "addons-364775"
	I0927 00:16:30.258343   22923 addons.go:69] Setting metrics-server=true in profile "addons-364775"
	I0927 00:16:30.258356   22923 addons.go:234] Setting addon yakd=true in "addons-364775"
	I0927 00:16:30.258357   22923 addons.go:69] Setting storage-provisioner=true in profile "addons-364775"
	I0927 00:16:30.258336   22923 addons.go:69] Setting cloud-spanner=true in profile "addons-364775"
	I0927 00:16:30.258373   22923 addons.go:234] Setting addon storage-provisioner=true in "addons-364775"
	I0927 00:16:30.258378   22923 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-364775"
	I0927 00:16:30.258389   22923 host.go:66] Checking if "addons-364775" exists ...
	I0927 00:16:30.258398   22923 addons.go:69] Setting ingress=true in profile "addons-364775"
	I0927 00:16:30.258398   22923 addons.go:69] Setting default-storageclass=true in profile "addons-364775"
	I0927 00:16:30.258418   22923 addons.go:69] Setting registry=true in profile "addons-364775"
	I0927 00:16:30.258421   22923 addons.go:69] Setting ingress-dns=true in profile "addons-364775"
	I0927 00:16:30.258424   22923 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-364775"
	I0927 00:16:30.258430   22923 addons.go:234] Setting addon registry=true in "addons-364775"
	I0927 00:16:30.258431   22923 addons.go:234] Setting addon ingress-dns=true in "addons-364775"
	I0927 00:16:30.258439   22923 addons.go:69] Setting volcano=true in profile "addons-364775"
	I0927 00:16:30.258444   22923 addons.go:69] Setting inspektor-gadget=true in profile "addons-364775"
	I0927 00:16:30.258449   22923 host.go:66] Checking if "addons-364775" exists ...
	I0927 00:16:30.258453   22923 host.go:66] Checking if "addons-364775" exists ...
	I0927 00:16:30.258461   22923 addons.go:234] Setting addon inspektor-gadget=true in "addons-364775"
	I0927 00:16:30.258460   22923 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-364775"
	I0927 00:16:30.258465   22923 host.go:66] Checking if "addons-364775" exists ...
	I0927 00:16:30.258475   22923 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-364775"
	I0927 00:16:30.258499   22923 host.go:66] Checking if "addons-364775" exists ...
	I0927 00:16:30.258390   22923 addons.go:234] Setting addon cloud-spanner=true in "addons-364775"
	I0927 00:16:30.258875   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.258880   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.258887   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.258897   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.258901   22923 addons.go:69] Setting volumesnapshots=true in profile "addons-364775"
	I0927 00:16:30.258904   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.258890   22923 host.go:66] Checking if "addons-364775" exists ...
	I0927 00:16:30.258911   22923 addons.go:234] Setting addon volumesnapshots=true in "addons-364775"
	I0927 00:16:30.258428   22923 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-364775"
	I0927 00:16:30.258921   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.258928   22923 host.go:66] Checking if "addons-364775" exists ...
	I0927 00:16:30.258400   22923 host.go:66] Checking if "addons-364775" exists ...
	I0927 00:16:30.259165   22923 config.go:182] Loaded profile config "addons-364775": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 00:16:30.259243   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.259268   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.258361   22923 addons.go:234] Setting addon metrics-server=true in "addons-364775"
	I0927 00:16:30.259320   22923 host.go:66] Checking if "addons-364775" exists ...
	I0927 00:16:30.258902   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.259250   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.259345   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.259362   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.258453   22923 addons.go:234] Setting addon volcano=true in "addons-364775"
	I0927 00:16:30.258410   22923 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-364775"
	I0927 00:16:30.258413   22923 addons.go:234] Setting addon ingress=true in "addons-364775"
	I0927 00:16:30.259414   22923 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-364775"
	I0927 00:16:30.259433   22923 host.go:66] Checking if "addons-364775" exists ...
	I0927 00:16:30.258910   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.259599   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.259320   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.259681   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.259711   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.259756   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.259785   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.258365   22923 addons.go:69] Setting gcp-auth=true in profile "addons-364775"
	I0927 00:16:30.259994   22923 mustload.go:65] Loading cluster: addons-364775
	I0927 00:16:30.258890   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.260064   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.259324   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.260148   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.260171   22923 config.go:182] Loaded profile config "addons-364775": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 00:16:30.260185   22923 host.go:66] Checking if "addons-364775" exists ...
	I0927 00:16:30.259686   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.260638   22923 host.go:66] Checking if "addons-364775" exists ...
	I0927 00:16:30.261042   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.261076   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.261496   22923 out.go:177] * Verifying Kubernetes components...
	I0927 00:16:30.263120   22923 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:16:30.279959   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33325
	I0927 00:16:30.280209   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33607
	I0927 00:16:30.280226   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37361
	I0927 00:16:30.280238   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42457
	I0927 00:16:30.280556   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.280907   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.281016   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.281058   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.281074   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.281083   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.281341   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.281358   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.281459   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.281511   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.281523   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.281582   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.281595   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.281715   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.281946   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.281986   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.282089   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.282113   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.282682   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.282737   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.295448   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35985
	I0927 00:16:30.295465   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34067
	I0927 00:16:30.295466   22923 main.go:141] libmachine: (addons-364775) Calling .GetState
	I0927 00:16:30.295577   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36439
	I0927 00:16:30.295763   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.295797   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.295961   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.295991   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.296110   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.296144   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.297516   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.297610   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.297662   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.298165   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.298183   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.298204   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.298220   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.298319   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.298333   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.298708   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.298770   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.299230   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.299374   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.299396   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.299799   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.299837   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.320873   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40339
	I0927 00:16:30.321467   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.322017   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.322035   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.322375   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.322557   22923 main.go:141] libmachine: (addons-364775) Calling .GetState
	I0927 00:16:30.324241   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:30.325648   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33733
	I0927 00:16:30.326669   22923 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0927 00:16:30.328052   22923 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0927 00:16:30.328068   22923 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0927 00:16:30.328087   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:30.330977   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.331478   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:30.331497   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.331615   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:30.331743   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34807
	I0927 00:16:30.331928   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:30.331988   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.332189   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:30.332466   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.332484   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.332544   22923 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa Username:docker}
	I0927 00:16:30.332815   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.333331   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.333369   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.333610   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33551
	I0927 00:16:30.334115   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.334676   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.334692   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.334922   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42815
	I0927 00:16:30.335061   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.335224   22923 main.go:141] libmachine: (addons-364775) Calling .GetState
	I0927 00:16:30.335329   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.335871   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.335915   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.337783   22923 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-364775"
	I0927 00:16:30.337824   22923 host.go:66] Checking if "addons-364775" exists ...
	I0927 00:16:30.338180   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.338211   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.341852   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38853
	I0927 00:16:30.341872   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.341955   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.341960   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39421
	I0927 00:16:30.341962   22923 host.go:66] Checking if "addons-364775" exists ...
	I0927 00:16:30.341971   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.342027   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45603
	I0927 00:16:30.342336   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.342379   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.342477   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.343236   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.343339   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34911
	I0927 00:16:30.343344   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.343360   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.343418   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.343490   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.343875   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.343889   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.344011   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.344032   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.344084   22923 main.go:141] libmachine: (addons-364775) Calling .GetState
	I0927 00:16:30.344875   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35565
	I0927 00:16:30.344918   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.344961   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.344877   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.345471   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.345494   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.345704   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.345804   22923 main.go:141] libmachine: (addons-364775) Calling .GetState
	I0927 00:16:30.345923   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.345934   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.346060   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.346070   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.346180   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.346193   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.346254   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.346296   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.346481   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.346533   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:30.346738   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.346786   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.346944   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.347106   22923 main.go:141] libmachine: (addons-364775) Calling .GetState
	I0927 00:16:30.347423   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.347470   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.347990   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.348013   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.348711   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:30.348979   22923 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	I0927 00:16:30.350262   22923 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0927 00:16:30.350711   22923 addons.go:234] Setting addon default-storageclass=true in "addons-364775"
	I0927 00:16:30.350752   22923 host.go:66] Checking if "addons-364775" exists ...
	I0927 00:16:30.351080   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.351116   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.351946   22923 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0927 00:16:30.351964   22923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0927 00:16:30.351981   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:30.352035   22923 out.go:177]   - Using image docker.io/registry:2.8.3
	I0927 00:16:30.353597   22923 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0927 00:16:30.353615   22923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0927 00:16:30.353635   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:30.354349   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44045
	I0927 00:16:30.354872   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.355446   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.355462   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.355832   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.356428   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.356465   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.356580   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.357770   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.357938   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:30.357955   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.358350   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:30.358661   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42581
	I0927 00:16:30.358801   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:30.358854   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:30.358868   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.359073   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.359151   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:30.359281   22923 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa Username:docker}
	I0927 00:16:30.359652   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.359671   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.359714   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:30.360052   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.360131   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:30.360290   22923 main.go:141] libmachine: (addons-364775) Calling .GetState
	I0927 00:16:30.360338   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:30.360850   22923 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa Username:docker}
	I0927 00:16:30.361885   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:30.364200   22923 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 00:16:30.365464   22923 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 00:16:30.365488   22923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0927 00:16:30.365507   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:30.366308   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33067
	I0927 00:16:30.366791   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.367379   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.367401   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.367750   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.367938   22923 main.go:141] libmachine: (addons-364775) Calling .GetState
	I0927 00:16:30.369060   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.369690   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:30.369710   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.370066   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:30.370129   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:30.370398   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:30.370694   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:30.370823   22923 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa Username:docker}
	I0927 00:16:30.371120   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34527
	I0927 00:16:30.371610   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.372218   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.372236   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.372530   22923 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0927 00:16:30.373808   22923 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0927 00:16:30.373825   22923 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0927 00:16:30.373842   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:30.373856   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43903
	I0927 00:16:30.374333   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.374903   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.374922   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.375279   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.375482   22923 main.go:141] libmachine: (addons-364775) Calling .GetState
	I0927 00:16:30.376723   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.377131   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:30.377149   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.377335   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:30.377382   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:30.377887   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:30.378054   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39987
	I0927 00:16:30.378172   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:30.378338   22923 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa Username:docker}
	I0927 00:16:30.378377   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.378649   22923 main.go:141] libmachine: (addons-364775) Calling .GetState
	I0927 00:16:30.378704   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.379547   22923 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0927 00:16:30.379740   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.379756   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.380077   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.380239   22923 main.go:141] libmachine: (addons-364775) Calling .GetState
	I0927 00:16:30.380301   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:30.380762   22923 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0927 00:16:30.380786   22923 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0927 00:16:30.380803   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:30.382146   22923 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0927 00:16:30.383631   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46045
	I0927 00:16:30.383639   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:30.383821   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:30.383832   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:30.383878   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.383963   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.384125   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:30.384142   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.384162   22923 main.go:141] libmachine: (addons-364775) DBG | Closing plugin on server side
	I0927 00:16:30.384182   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:30.384189   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:30.384196   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:30.384202   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:30.384331   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:30.384494   22923 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0927 00:16:30.384504   22923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0927 00:16:30.384518   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:30.384569   22923 main.go:141] libmachine: (addons-364775) DBG | Closing plugin on server side
	I0927 00:16:30.384585   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:30.384591   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	W0927 00:16:30.384653   22923 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0927 00:16:30.384914   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:30.385028   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:30.385170   22923 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa Username:docker}
	I0927 00:16:30.385526   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.385545   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.386176   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.386427   22923 main.go:141] libmachine: (addons-364775) Calling .GetState
	I0927 00:16:30.388475   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37683
	I0927 00:16:30.388774   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.389050   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.389164   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.389180   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.389505   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.389567   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:30.389583   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.390108   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.390148   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.390345   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:30.390534   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:30.390650   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:30.390712   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:30.390753   22923 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa Username:docker}
	I0927 00:16:30.392001   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38025
	I0927 00:16:30.392318   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.392824   22923 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0927 00:16:30.392877   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.392887   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.393218   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.393656   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.393690   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.395193   22923 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0927 00:16:30.395209   22923 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0927 00:16:30.395225   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:30.396435   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44437
	I0927 00:16:30.396951   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.397552   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.397567   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.397947   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.398120   22923 main.go:141] libmachine: (addons-364775) Calling .GetState
	I0927 00:16:30.398753   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.399064   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:30.399083   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.399238   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:30.399500   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:30.399555   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34999
	I0927 00:16:30.399676   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:30.399899   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:30.400083   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.400154   22923 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa Username:docker}
	I0927 00:16:30.400820   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.400837   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.401205   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.401221   22923 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0927 00:16:30.401414   22923 main.go:141] libmachine: (addons-364775) Calling .GetState
	I0927 00:16:30.403906   22923 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0927 00:16:30.404106   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34281
	I0927 00:16:30.404221   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:30.404635   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.404663   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38629
	I0927 00:16:30.405161   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.405182   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.405366   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.405583   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.405846   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:30.405996   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.406014   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.406064   22923 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0927 00:16:30.406314   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.406621   22923 main.go:141] libmachine: (addons-364775) Calling .GetState
	I0927 00:16:30.406763   22923 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0927 00:16:30.407772   22923 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0927 00:16:30.408030   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:30.408199   22923 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0927 00:16:30.408220   22923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0927 00:16:30.408236   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:30.409701   22923 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0927 00:16:30.409716   22923 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0927 00:16:30.411228   22923 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0927 00:16:30.411370   22923 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0927 00:16:30.411387   22923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0927 00:16:30.411406   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:30.411488   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:30.411504   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.411531   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:30.411552   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.411643   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:30.411769   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:30.411918   22923 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa Username:docker}
	I0927 00:16:30.413714   22923 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0927 00:16:30.414595   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.415012   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:30.415065   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.415352   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:30.415527   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:30.415645   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:30.415756   22923 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa Username:docker}
	I0927 00:16:30.416372   22923 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0927 00:16:30.417721   22923 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0927 00:16:30.418988   22923 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0927 00:16:30.420195   22923 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0927 00:16:30.420214   22923 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0927 00:16:30.420244   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:30.422864   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44579
	I0927 00:16:30.423200   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.423340   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.423691   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:30.423710   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.423879   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:30.424016   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.424026   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.424200   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:30.424330   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:30.424366   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.424489   22923 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa Username:docker}
	I0927 00:16:30.424704   22923 main.go:141] libmachine: (addons-364775) Calling .GetState
	I0927 00:16:30.424757   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39415
	I0927 00:16:30.425411   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.425899   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.425917   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.426087   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:30.426195   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.426431   22923 main.go:141] libmachine: (addons-364775) Calling .GetState
	I0927 00:16:30.427706   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:30.427748   22923 out.go:177]   - Using image docker.io/busybox:stable
	I0927 00:16:30.427917   22923 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0927 00:16:30.427928   22923 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0927 00:16:30.427942   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:30.430541   22923 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0927 00:16:30.431106   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.431591   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:30.431613   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.431738   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:30.431872   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:30.431985   22923 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0927 00:16:30.431995   22923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0927 00:16:30.432008   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:30.432009   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:30.432127   22923 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa Username:docker}
	W0927 00:16:30.434191   22923 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:47452->192.168.39.169:22: read: connection reset by peer
	I0927 00:16:30.434217   22923 retry.go:31] will retry after 235.279035ms: ssh: handshake failed: read tcp 192.168.39.1:47452->192.168.39.169:22: read: connection reset by peer
	I0927 00:16:30.434586   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.435008   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:30.435093   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.435225   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:30.435381   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:30.435528   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:30.435630   22923 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa Username:docker}
	I0927 00:16:30.687382   22923 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0927 00:16:30.687407   22923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0927 00:16:30.703808   22923 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 00:16:30.703964   22923 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0927 00:16:30.766082   22923 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0927 00:16:30.766106   22923 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0927 00:16:30.789375   22923 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0927 00:16:30.789397   22923 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0927 00:16:30.817986   22923 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0927 00:16:30.818010   22923 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0927 00:16:30.818453   22923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0927 00:16:30.818687   22923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0927 00:16:30.820723   22923 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0927 00:16:30.820738   22923 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0927 00:16:30.838202   22923 node_ready.go:35] waiting up to 6m0s for node "addons-364775" to be "Ready" ...
	I0927 00:16:30.841116   22923 node_ready.go:49] node "addons-364775" has status "Ready":"True"
	I0927 00:16:30.841135   22923 node_ready.go:38] duration metric: took 2.9055ms for node "addons-364775" to be "Ready" ...
	I0927 00:16:30.841142   22923 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 00:16:30.845387   22923 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0927 00:16:30.845426   22923 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0927 00:16:30.848404   22923 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-gd2h2" in "kube-system" namespace to be "Ready" ...
	I0927 00:16:30.890816   22923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0927 00:16:30.919824   22923 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0927 00:16:30.919846   22923 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0927 00:16:30.923045   22923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0927 00:16:30.930150   22923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0927 00:16:30.969174   22923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 00:16:30.986771   22923 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0927 00:16:30.986796   22923 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0927 00:16:31.024820   22923 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0927 00:16:31.024848   22923 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0927 00:16:31.048974   22923 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0927 00:16:31.048999   22923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0927 00:16:31.060405   22923 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0927 00:16:31.060436   22923 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0927 00:16:31.087170   22923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0927 00:16:31.097441   22923 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 00:16:31.097468   22923 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0927 00:16:31.123704   22923 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0927 00:16:31.123728   22923 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0927 00:16:31.127243   22923 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0927 00:16:31.127257   22923 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0927 00:16:31.181768   22923 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0927 00:16:31.181799   22923 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0927 00:16:31.198013   22923 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0927 00:16:31.198040   22923 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0927 00:16:31.230188   22923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0927 00:16:31.240969   22923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 00:16:31.337457   22923 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0927 00:16:31.337486   22923 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0927 00:16:31.340360   22923 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0927 00:16:31.340378   22923 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0927 00:16:31.357490   22923 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0927 00:16:31.357519   22923 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0927 00:16:31.438275   22923 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0927 00:16:31.438302   22923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0927 00:16:31.479034   22923 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0927 00:16:31.479054   22923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0927 00:16:31.506932   22923 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0927 00:16:31.506952   22923 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0927 00:16:31.551476   22923 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0927 00:16:31.551508   22923 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0927 00:16:31.628698   22923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0927 00:16:31.817687   22923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0927 00:16:31.844064   22923 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0927 00:16:31.844092   22923 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0927 00:16:32.141105   22923 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0927 00:16:32.141141   22923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0927 00:16:32.314746   22923 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0927 00:16:32.314778   22923 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0927 00:16:32.430650   22923 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0927 00:16:32.430679   22923 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0927 00:16:32.500643   22923 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0927 00:16:32.500669   22923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0927 00:16:32.618286   22923 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0927 00:16:32.618306   22923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0927 00:16:32.776416   22923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0927 00:16:32.854014   22923 pod_ready.go:103] pod "coredns-7c65d6cfc9-gd2h2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:16:32.980645   22923 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0927 00:16:32.980665   22923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0927 00:16:32.984476   22923 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.280478347s)
	I0927 00:16:32.984507   22923 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0927 00:16:33.214546   22923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.396058946s)
	I0927 00:16:33.214590   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:33.214603   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:33.214847   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:33.214864   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:33.214872   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:33.214879   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:33.215068   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:33.215082   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:33.399888   22923 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0927 00:16:33.399914   22923 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0927 00:16:33.488059   22923 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-364775" context rescaled to 1 replicas
	I0927 00:16:33.660690   22923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0927 00:16:35.195794   22923 pod_ready.go:103] pod "coredns-7c65d6cfc9-gd2h2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:16:36.275637   22923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.456917037s)
	I0927 00:16:36.275696   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:36.275710   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:36.275974   22923 main.go:141] libmachine: (addons-364775) DBG | Closing plugin on server side
	I0927 00:16:36.275983   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:36.275997   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:36.276006   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:36.276024   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:36.276207   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:36.276219   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:36.365139   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:36.365161   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:36.365407   22923 main.go:141] libmachine: (addons-364775) DBG | Closing plugin on server side
	I0927 00:16:36.365451   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:36.365468   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:37.395951   22923 pod_ready.go:103] pod "coredns-7c65d6cfc9-gd2h2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:16:37.431653   22923 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0927 00:16:37.431693   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:37.434730   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:37.435197   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:37.435228   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:37.435424   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:37.435670   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:37.435829   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:37.436039   22923 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa Username:docker}
	I0927 00:16:37.781071   22923 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0927 00:16:37.864137   22923 addons.go:234] Setting addon gcp-auth=true in "addons-364775"
	I0927 00:16:37.864191   22923 host.go:66] Checking if "addons-364775" exists ...
	I0927 00:16:37.864599   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:37.864634   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:37.880453   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44569
	I0927 00:16:37.881363   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:37.881837   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:37.881864   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:37.882238   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:37.882781   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:37.882817   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:37.897834   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43319
	I0927 00:16:37.898272   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:37.898755   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:37.898780   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:37.899107   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:37.899270   22923 main.go:141] libmachine: (addons-364775) Calling .GetState
	I0927 00:16:37.900885   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:37.901107   22923 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0927 00:16:37.901127   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:37.903699   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:37.904060   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:37.904077   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:37.904235   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:37.904402   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:37.904533   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:37.904663   22923 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa Username:docker}
	I0927 00:16:37.975730   22923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.084875146s)
	I0927 00:16:37.975779   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:37.975780   22923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.05270093s)
	I0927 00:16:37.975818   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:37.975836   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:37.975874   22923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.006677684s)
	I0927 00:16:37.975909   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:37.975920   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:37.975923   22923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.888722405s)
	I0927 00:16:37.975952   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:37.975969   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:37.975818   22923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.045636678s)
	I0927 00:16:37.975983   22923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.745766683s)
	I0927 00:16:37.975995   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:37.976002   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:37.976007   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:37.975792   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:37.976021   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:37.976074   22923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.735080002s)
	I0927 00:16:37.976097   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:37.976107   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:37.976192   22923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.347467613s)
	I0927 00:16:37.976207   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:37.976215   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:37.976527   22923 main.go:141] libmachine: (addons-364775) DBG | Closing plugin on server side
	I0927 00:16:37.976558   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:37.976566   22923 main.go:141] libmachine: (addons-364775) DBG | Closing plugin on server side
	I0927 00:16:37.976571   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:37.976580   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:37.976582   22923 main.go:141] libmachine: (addons-364775) DBG | Closing plugin on server side
	I0927 00:16:37.976587   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:37.976601   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:37.976608   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:37.976615   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:37.976613   22923 main.go:141] libmachine: (addons-364775) DBG | Closing plugin on server side
	I0927 00:16:37.976622   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:37.976647   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:37.976654   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:37.976663   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:37.976666   22923 main.go:141] libmachine: (addons-364775) DBG | Closing plugin on server side
	I0927 00:16:37.976672   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:37.976684   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:37.976691   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:37.976698   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:37.976704   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:37.976738   22923 main.go:141] libmachine: (addons-364775) DBG | Closing plugin on server side
	I0927 00:16:37.976754   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:37.976760   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:37.976797   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:37.976808   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:37.976838   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:37.976846   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:37.976854   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:37.976859   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:37.976875   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:37.976886   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:37.976894   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:37.976901   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:37.977215   22923 main.go:141] libmachine: (addons-364775) DBG | Closing plugin on server side
	I0927 00:16:37.977240   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:37.977246   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:37.977255   22923 addons.go:475] Verifying addon ingress=true in "addons-364775"
	I0927 00:16:37.977437   22923 main.go:141] libmachine: (addons-364775) DBG | Closing plugin on server side
	I0927 00:16:37.977460   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:37.977465   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:37.978150   22923 main.go:141] libmachine: (addons-364775) DBG | Closing plugin on server side
	I0927 00:16:37.978180   22923 main.go:141] libmachine: (addons-364775) DBG | Closing plugin on server side
	I0927 00:16:37.978194   22923 main.go:141] libmachine: (addons-364775) DBG | Closing plugin on server side
	I0927 00:16:37.978192   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:37.978204   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:37.978219   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:37.978225   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:37.978361   22923 main.go:141] libmachine: (addons-364775) DBG | Closing plugin on server side
	I0927 00:16:37.979355   22923 out.go:177] * Verifying ingress addon...
	I0927 00:16:37.979514   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:37.979525   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:37.979768   22923 main.go:141] libmachine: (addons-364775) DBG | Closing plugin on server side
	I0927 00:16:37.979826   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:37.979832   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:37.979841   22923 addons.go:475] Verifying addon metrics-server=true in "addons-364775"
	I0927 00:16:37.980269   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:37.980280   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:37.980288   22923 addons.go:475] Verifying addon registry=true in "addons-364775"
	I0927 00:16:37.980455   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:37.980746   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:37.980760   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:37.980768   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:37.980971   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:37.980987   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:37.981985   22923 out.go:177] * Verifying registry addon...
	I0927 00:16:37.981995   22923 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-364775 service yakd-dashboard -n yakd-dashboard
	
	I0927 00:16:37.982403   22923 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0927 00:16:37.983991   22923 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0927 00:16:38.027140   22923 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0927 00:16:38.027164   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:38.027861   22923 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0927 00:16:38.027884   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:38.131340   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:38.131369   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:38.131619   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:38.131639   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:38.551465   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:38.551901   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:38.905728   22923 pod_ready.go:93] pod "coredns-7c65d6cfc9-gd2h2" in "kube-system" namespace has status "Ready":"True"
	I0927 00:16:38.905752   22923 pod_ready.go:82] duration metric: took 8.057329101s for pod "coredns-7c65d6cfc9-gd2h2" in "kube-system" namespace to be "Ready" ...
	I0927 00:16:38.905762   22923 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-szrc9" in "kube-system" namespace to be "Ready" ...
	I0927 00:16:38.947750   22923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.130011838s)
	W0927 00:16:38.947809   22923 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0927 00:16:38.947833   22923 retry.go:31] will retry after 183.128394ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0927 00:16:38.947854   22923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.171400863s)
	I0927 00:16:38.947898   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:38.947923   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:38.948190   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:38.948207   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:38.948218   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:38.948225   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:38.948480   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:38.948512   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:39.000059   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:39.000476   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:39.132046   22923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0927 00:16:39.490374   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:39.492989   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:39.801849   22923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.14111498s)
	I0927 00:16:39.801914   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:39.801915   22923 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.900787405s)
	I0927 00:16:39.801927   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:39.802242   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:39.802285   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:39.802305   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:39.802318   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:39.802316   22923 main.go:141] libmachine: (addons-364775) DBG | Closing plugin on server side
	I0927 00:16:39.802555   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:39.802569   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:39.802579   22923 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-364775"
	I0927 00:16:39.803411   22923 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0927 00:16:39.804344   22923 out.go:177] * Verifying csi-hostpath-driver addon...
	I0927 00:16:39.806163   22923 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0927 00:16:39.806896   22923 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0927 00:16:39.807410   22923 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0927 00:16:39.807425   22923 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0927 00:16:39.870942   22923 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0927 00:16:39.870973   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:39.953858   22923 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0927 00:16:39.953888   22923 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0927 00:16:39.987421   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:39.990568   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:40.013239   22923 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0927 00:16:40.013265   22923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0927 00:16:40.054642   22923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0927 00:16:40.311779   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:40.487458   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:40.488947   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:40.708018   22923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.575916247s)
	I0927 00:16:40.708075   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:40.708093   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:40.708329   22923 main.go:141] libmachine: (addons-364775) DBG | Closing plugin on server side
	I0927 00:16:40.708410   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:40.708424   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:40.708437   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:40.708458   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:40.708681   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:40.708717   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:40.812167   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:40.918341   22923 pod_ready.go:103] pod "coredns-7c65d6cfc9-szrc9" in "kube-system" namespace has status "Ready":"False"
	I0927 00:16:41.015974   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:41.018484   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:41.070353   22923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.015656922s)
	I0927 00:16:41.070410   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:41.070421   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:41.070658   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:41.070675   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:41.070686   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:41.070694   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:41.070909   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:41.070942   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:41.072773   22923 addons.go:475] Verifying addon gcp-auth=true in "addons-364775"
	I0927 00:16:41.074260   22923 out.go:177] * Verifying gcp-auth addon...
	I0927 00:16:41.077101   22923 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0927 00:16:41.089006   22923 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0927 00:16:41.089060   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:41.319255   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:41.489602   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:41.493367   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:41.589417   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:41.824980   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:42.009117   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:42.009383   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:42.097507   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:42.313572   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:42.412928   22923 pod_ready.go:98] pod "coredns-7c65d6cfc9-szrc9" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 00:16:41 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 00:16:30 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 00:16:30 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 00:16:30 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 00:16:30 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.169 HostIPs:[{IP:192.168.39.169}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-27 00:16:30 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-27 00:16:35 +0000 UTC,FinishedAt:2024-09-27 00:16:41 +0000 UTC,ContainerID:cri-o://cc2d74218c9b7b20949fa941fc7ad8d676be5e7b5aede59713e2f2c6fc72cedf,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://cc2d74218c9b7b20949fa941fc7ad8d676be5e7b5aede59713e2f2c6fc72cedf Started:0xc0022776f0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc000a82370} {Name:kube-api-access-c6xps MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc000a82380}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0927 00:16:42.412956   22923 pod_ready.go:82] duration metric: took 3.507186728s for pod "coredns-7c65d6cfc9-szrc9" in "kube-system" namespace to be "Ready" ...
	E0927 00:16:42.412968   22923 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-szrc9" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 00:16:41 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 00:16:30 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 00:16:30 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 00:16:30 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 00:16:30 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.169 HostIPs:[{IP:192.168.39.169}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-27 00:16:30 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-27 00:16:35 +0000 UTC,FinishedAt:2024-09-27 00:16:41 +0000 UTC,ContainerID:cri-o://cc2d74218c9b7b20949fa941fc7ad8d676be5e7b5aede59713e2f2c6fc72cedf,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://cc2d74218c9b7b20949fa941fc7ad8d676be5e7b5aede59713e2f2c6fc72cedf Started:0xc0022776f0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc000a82370} {Name:kube-api-access-c6xps MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc000a82380}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0927 00:16:42.412977   22923 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-364775" in "kube-system" namespace to be "Ready" ...
	I0927 00:16:42.419963   22923 pod_ready.go:93] pod "etcd-addons-364775" in "kube-system" namespace has status "Ready":"True"
	I0927 00:16:42.419981   22923 pod_ready.go:82] duration metric: took 6.997345ms for pod "etcd-addons-364775" in "kube-system" namespace to be "Ready" ...
	I0927 00:16:42.419989   22923 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-364775" in "kube-system" namespace to be "Ready" ...
	I0927 00:16:42.437266   22923 pod_ready.go:93] pod "kube-apiserver-addons-364775" in "kube-system" namespace has status "Ready":"True"
	I0927 00:16:42.437286   22923 pod_ready.go:82] duration metric: took 17.290515ms for pod "kube-apiserver-addons-364775" in "kube-system" namespace to be "Ready" ...
	I0927 00:16:42.437295   22923 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-364775" in "kube-system" namespace to be "Ready" ...
	I0927 00:16:42.456989   22923 pod_ready.go:93] pod "kube-controller-manager-addons-364775" in "kube-system" namespace has status "Ready":"True"
	I0927 00:16:42.457011   22923 pod_ready.go:82] duration metric: took 19.710449ms for pod "kube-controller-manager-addons-364775" in "kube-system" namespace to be "Ready" ...
	I0927 00:16:42.457022   22923 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vj2cl" in "kube-system" namespace to be "Ready" ...
	I0927 00:16:42.463096   22923 pod_ready.go:93] pod "kube-proxy-vj2cl" in "kube-system" namespace has status "Ready":"True"
	I0927 00:16:42.463112   22923 pod_ready.go:82] duration metric: took 6.084237ms for pod "kube-proxy-vj2cl" in "kube-system" namespace to be "Ready" ...
	I0927 00:16:42.463120   22923 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-364775" in "kube-system" namespace to be "Ready" ...
	I0927 00:16:42.487973   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:42.488283   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:42.581218   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:42.810423   22923 pod_ready.go:93] pod "kube-scheduler-addons-364775" in "kube-system" namespace has status "Ready":"True"
	I0927 00:16:42.810447   22923 pod_ready.go:82] duration metric: took 347.321728ms for pod "kube-scheduler-addons-364775" in "kube-system" namespace to be "Ready" ...
	I0927 00:16:42.810454   22923 pod_ready.go:39] duration metric: took 11.969303463s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 00:16:42.810469   22923 api_server.go:52] waiting for apiserver process to appear ...
	I0927 00:16:42.810514   22923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 00:16:42.814099   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:42.827884   22923 api_server.go:72] duration metric: took 12.569706035s to wait for apiserver process to appear ...
	I0927 00:16:42.827902   22923 api_server.go:88] waiting for apiserver healthz status ...
	I0927 00:16:42.827918   22923 api_server.go:253] Checking apiserver healthz at https://192.168.39.169:8443/healthz ...
	I0927 00:16:42.835431   22923 api_server.go:279] https://192.168.39.169:8443/healthz returned 200:
	ok
	I0927 00:16:42.837096   22923 api_server.go:141] control plane version: v1.31.1
	I0927 00:16:42.837111   22923 api_server.go:131] duration metric: took 9.203783ms to wait for apiserver health ...
	I0927 00:16:42.837119   22923 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 00:16:42.988500   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:42.988911   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:43.087346   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:43.092767   22923 system_pods.go:59] 17 kube-system pods found
	I0927 00:16:43.092791   22923 system_pods.go:61] "coredns-7c65d6cfc9-gd2h2" [4a9f1c5a-89df-497e-a9fa-4a5d427542c0] Running
	I0927 00:16:43.092800   22923 system_pods.go:61] "csi-hostpath-attacher-0" [c4a5feee-cdbf-4a8f-9ab2-d1e28526dc7c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0927 00:16:43.092807   22923 system_pods.go:61] "csi-hostpath-resizer-0" [a9b843e4-fb3e-491a-90a1-05337ec1be6e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0927 00:16:43.092815   22923 system_pods.go:61] "csi-hostpathplugin-5jvjw" [86b14d99-6d05-417f-834c-06b97d3ff358] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0927 00:16:43.092819   22923 system_pods.go:61] "etcd-addons-364775" [c4a11540-824b-46eb-b5ff-16761d78090b] Running
	I0927 00:16:43.092823   22923 system_pods.go:61] "kube-apiserver-addons-364775" [a34af223-8b21-4d2e-acc8-f35f72a84d89] Running
	I0927 00:16:43.092827   22923 system_pods.go:61] "kube-controller-manager-addons-364775" [d41167fe-9862-4644-a4a2-5891b829c263] Running
	I0927 00:16:43.092833   22923 system_pods.go:61] "kube-ingress-dns-minikube" [8bb056cc-4ad8-48da-bad9-aec78168a573] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0927 00:16:43.092836   22923 system_pods.go:61] "kube-proxy-vj2cl" [f2579736-b094-4822-82ce-2ce53d815d92] Running
	I0927 00:16:43.092840   22923 system_pods.go:61] "kube-scheduler-addons-364775" [87532128-92ea-4e82-8f4b-e05bba39380d] Running
	I0927 00:16:43.092849   22923 system_pods.go:61] "metrics-server-84c5f94fbc-h74zz" [1ee23e82-6d41-48b5-a303-16f6ebd60172] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 00:16:43.092855   22923 system_pods.go:61] "nvidia-device-plugin-daemonset-gvjn8" [2de30fac-4d6c-4922-b784-e9801df8f16a] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0927 00:16:43.092862   22923 system_pods.go:61] "registry-66c9cd494c-kdt5f" [652ee744-ff06-40fe-a66f-aabff5476e31] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0927 00:16:43.092867   22923 system_pods.go:61] "registry-proxy-2rlvs" [5080c804-a6a8-4239-bd3f-a89d8f114f0c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0927 00:16:43.092875   22923 system_pods.go:61] "snapshot-controller-56fcc65765-b777z" [beb5ceb2-51fe-49bc-842c-800de73b7628] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0927 00:16:43.092880   22923 system_pods.go:61] "snapshot-controller-56fcc65765-s5z9r" [ba81ccfa-12e1-42cd-a9f0-d1cbff990eb6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0927 00:16:43.092888   22923 system_pods.go:61] "storage-provisioner" [b2787e80-d152-46a1-9672-af83ebbb8e9d] Running
	I0927 00:16:43.092895   22923 system_pods.go:74] duration metric: took 255.770173ms to wait for pod list to return data ...
	I0927 00:16:43.092901   22923 default_sa.go:34] waiting for default service account to be created ...
	I0927 00:16:43.209797   22923 default_sa.go:45] found service account: "default"
	I0927 00:16:43.209820   22923 default_sa.go:55] duration metric: took 116.910938ms for default service account to be created ...
	I0927 00:16:43.209828   22923 system_pods.go:116] waiting for k8s-apps to be running ...
	I0927 00:16:43.311723   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:43.415743   22923 system_pods.go:86] 17 kube-system pods found
	I0927 00:16:43.415771   22923 system_pods.go:89] "coredns-7c65d6cfc9-gd2h2" [4a9f1c5a-89df-497e-a9fa-4a5d427542c0] Running
	I0927 00:16:43.415779   22923 system_pods.go:89] "csi-hostpath-attacher-0" [c4a5feee-cdbf-4a8f-9ab2-d1e28526dc7c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0927 00:16:43.415785   22923 system_pods.go:89] "csi-hostpath-resizer-0" [a9b843e4-fb3e-491a-90a1-05337ec1be6e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0927 00:16:43.415793   22923 system_pods.go:89] "csi-hostpathplugin-5jvjw" [86b14d99-6d05-417f-834c-06b97d3ff358] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0927 00:16:43.415798   22923 system_pods.go:89] "etcd-addons-364775" [c4a11540-824b-46eb-b5ff-16761d78090b] Running
	I0927 00:16:43.415803   22923 system_pods.go:89] "kube-apiserver-addons-364775" [a34af223-8b21-4d2e-acc8-f35f72a84d89] Running
	I0927 00:16:43.415807   22923 system_pods.go:89] "kube-controller-manager-addons-364775" [d41167fe-9862-4644-a4a2-5891b829c263] Running
	I0927 00:16:43.415813   22923 system_pods.go:89] "kube-ingress-dns-minikube" [8bb056cc-4ad8-48da-bad9-aec78168a573] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0927 00:16:43.415817   22923 system_pods.go:89] "kube-proxy-vj2cl" [f2579736-b094-4822-82ce-2ce53d815d92] Running
	I0927 00:16:43.415824   22923 system_pods.go:89] "kube-scheduler-addons-364775" [87532128-92ea-4e82-8f4b-e05bba39380d] Running
	I0927 00:16:43.415829   22923 system_pods.go:89] "metrics-server-84c5f94fbc-h74zz" [1ee23e82-6d41-48b5-a303-16f6ebd60172] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 00:16:43.415837   22923 system_pods.go:89] "nvidia-device-plugin-daemonset-gvjn8" [2de30fac-4d6c-4922-b784-e9801df8f16a] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0927 00:16:43.415842   22923 system_pods.go:89] "registry-66c9cd494c-kdt5f" [652ee744-ff06-40fe-a66f-aabff5476e31] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0927 00:16:43.415848   22923 system_pods.go:89] "registry-proxy-2rlvs" [5080c804-a6a8-4239-bd3f-a89d8f114f0c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0927 00:16:43.415853   22923 system_pods.go:89] "snapshot-controller-56fcc65765-b777z" [beb5ceb2-51fe-49bc-842c-800de73b7628] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0927 00:16:43.415859   22923 system_pods.go:89] "snapshot-controller-56fcc65765-s5z9r" [ba81ccfa-12e1-42cd-a9f0-d1cbff990eb6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0927 00:16:43.415864   22923 system_pods.go:89] "storage-provisioner" [b2787e80-d152-46a1-9672-af83ebbb8e9d] Running
	I0927 00:16:43.415873   22923 system_pods.go:126] duration metric: took 206.040673ms to wait for k8s-apps to be running ...
	I0927 00:16:43.415880   22923 system_svc.go:44] waiting for kubelet service to be running ....
	I0927 00:16:43.415924   22923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 00:16:43.430904   22923 system_svc.go:56] duration metric: took 15.015476ms WaitForService to wait for kubelet
	I0927 00:16:43.430932   22923 kubeadm.go:582] duration metric: took 13.172753467s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 00:16:43.430948   22923 node_conditions.go:102] verifying NodePressure condition ...
	I0927 00:16:43.487452   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:43.487493   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:43.582042   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:43.610676   22923 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 00:16:43.610701   22923 node_conditions.go:123] node cpu capacity is 2
	I0927 00:16:43.610712   22923 node_conditions.go:105] duration metric: took 179.759493ms to run NodePressure ...
	I0927 00:16:43.610722   22923 start.go:241] waiting for startup goroutines ...
	I0927 00:16:43.812000   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:43.992855   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:43.993405   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:44.094833   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:44.312025   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:44.488378   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:44.488875   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:44.580616   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:44.812847   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:44.987339   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:44.987844   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:45.081111   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:45.311986   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:45.488838   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:45.494394   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:45.588405   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:45.812585   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:45.988224   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:45.989896   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:46.082148   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:46.311599   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:46.485928   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:46.488359   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:46.581225   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:46.811437   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:46.986958   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:46.988594   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:47.080381   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:47.311967   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:47.487137   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:47.487881   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:47.580513   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:47.812233   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:47.987205   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:47.988170   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:48.080591   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:48.312071   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:48.487224   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:48.488731   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:48.580104   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:48.811251   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:48.987100   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:48.987514   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:49.080480   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:49.311488   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:49.486957   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:49.488676   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:49.580612   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:49.811224   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:49.990265   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:49.991510   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:50.082172   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:50.313347   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:50.486985   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:50.488717   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:50.582659   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:50.812000   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:50.988005   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:50.988994   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:51.081167   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:51.312257   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:51.486854   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:51.489465   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:51.580795   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:51.812289   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:51.987066   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:51.988257   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:52.081108   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:52.312912   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:52.486985   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:52.488399   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:52.581755   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:52.814422   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:52.987549   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:52.987829   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:53.080678   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:53.314523   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:53.488331   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:53.488764   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:53.580817   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:53.812217   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:53.986729   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:53.988945   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:54.080778   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:54.312205   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:54.486448   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:54.487803   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:54.580761   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:54.811520   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:54.986634   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:54.988978   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:55.080800   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:55.311991   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:55.490944   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:55.493634   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:55.580263   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:55.812139   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:55.987177   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:55.987367   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:56.081310   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:56.311167   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:56.488842   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:56.488988   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:56.581030   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:56.812978   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:57.543832   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:57.543896   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:57.544370   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:57.544723   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:57.550190   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:57.550636   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:57.581484   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:57.811591   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:57.988174   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:57.988206   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:58.081874   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:58.312600   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:58.486504   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:58.487586   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:58.580249   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:58.811581   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:58.986774   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:58.987922   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:59.080834   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:59.311658   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:59.487196   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:59.488229   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:59.580181   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:59.812375   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:59.988448   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:59.988687   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:00.080252   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:00.311409   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:00.487009   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:00.488155   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:00.581280   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:00.811845   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:00.987325   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:00.989570   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:01.080515   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:01.311993   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:01.487850   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:01.489334   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:01.580814   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:01.811806   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:01.986995   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:01.988430   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:02.080254   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:02.311725   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:02.487667   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:02.488220   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:02.580912   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:03.090517   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:03.090639   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:03.091263   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:03.091653   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:03.311887   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:03.487140   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:03.488145   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:03.581320   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:03.811596   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:03.987251   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:03.989014   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:04.081778   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:04.312130   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:04.487412   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:04.488309   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:04.580589   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:04.811892   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:04.987356   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:04.987417   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:05.081474   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:05.311978   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:05.487432   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:05.487863   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:05.580682   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:05.812085   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:05.988000   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:05.988066   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:06.080989   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:06.311398   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:06.486561   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:06.488291   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:06.580935   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:06.813281   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:06.986571   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:06.988032   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:07.080913   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:07.314207   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:07.486814   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:07.488906   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:07.580735   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:07.812650   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:07.986719   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:07.987173   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:08.081186   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:08.311716   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:08.486681   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:08.487853   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:08.580832   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:08.812363   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:08.986729   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:08.988493   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:09.081403   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:09.312278   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:09.485989   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:09.487569   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:09.580021   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:09.810913   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:09.987126   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:09.987866   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:10.080956   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:10.312137   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:10.487288   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:10.488658   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:10.580334   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:10.811041   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:10.987011   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:10.987681   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:11.080105   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:11.311345   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:11.486779   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:11.487979   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:11.581412   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:11.811943   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:11.987698   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:11.988990   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:12.080887   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:12.311909   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:12.489631   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:12.489995   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:12.588488   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:12.811700   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:12.987600   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:12.988206   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:13.081015   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:13.311938   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:13.494362   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:13.494760   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:13.580352   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:13.812378   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:13.986892   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:13.988433   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:14.080520   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:14.312162   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:14.489857   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:14.494879   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:14.581191   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:14.811835   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:14.987031   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:14.988412   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:15.080463   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:15.312254   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:15.492564   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:15.492913   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:15.580514   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:15.811411   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:15.986710   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:15.988183   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:16.082151   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:16.311207   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:16.488013   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:16.488851   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:16.580681   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:16.811685   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:16.987749   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:16.988504   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:17.080470   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:17.311695   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:17.486783   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:17.487109   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:17.581377   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:17.811534   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:17.986726   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:17.987427   22923 kapi.go:107] duration metric: took 40.003435933s to wait for kubernetes.io/minikube-addons=registry ...
	I0927 00:17:18.081888   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:18.312758   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:18.487322   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:18.581069   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:18.811131   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:18.987552   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:19.081741   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:19.312438   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:19.486923   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:19.580490   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:19.811952   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:19.987035   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:20.081683   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:20.311815   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:20.487115   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:20.580786   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:20.812516   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:20.986767   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:21.081624   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:21.499313   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:21.500317   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:21.580769   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:21.812245   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:21.988673   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:22.080678   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:22.312325   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:22.486578   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:22.582419   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:22.811470   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:22.986785   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:23.080233   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:23.311183   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:23.486602   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:23.580948   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:23.812622   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:23.987481   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:24.081064   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:24.310966   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:24.486849   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:24.580734   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:24.811250   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:24.986458   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:25.083062   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:25.312905   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:25.488190   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:25.586419   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:25.812210   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:25.987787   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:26.081106   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:26.310603   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:26.503116   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:26.580733   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:26.812493   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:26.987376   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:27.080712   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:27.312863   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:27.486929   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:27.581037   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:27.811603   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:27.987405   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:28.080637   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:28.311085   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:28.486056   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:28.580113   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:28.811368   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:28.986515   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:29.081058   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:29.311442   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:29.486947   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:29.580754   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:29.811655   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:29.987571   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:30.080977   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:30.312032   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:30.486723   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:30.581611   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:30.811778   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:30.987653   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:31.084236   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:31.311594   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:31.486542   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:31.581512   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:31.826040   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:31.987096   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:32.080580   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:32.312000   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:32.487673   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:32.581375   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:32.812041   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:32.988980   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:33.090694   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:33.312326   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:33.488231   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:33.580777   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:33.811345   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:33.986236   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:34.081390   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:34.312086   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:34.487244   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:34.581175   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:34.813913   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:34.991040   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:35.090876   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:35.313501   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:35.486433   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:35.583246   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:35.811699   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:35.987680   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:36.080748   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:36.328503   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:36.488009   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:36.581253   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:36.810998   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:36.987755   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:37.080636   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:37.311688   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:37.486973   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:37.580599   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:37.812272   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:37.986591   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:38.081184   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:38.311337   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:38.487175   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:38.581016   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:38.813136   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:38.987107   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:39.080496   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:39.312041   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:39.486941   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:39.587727   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:39.811898   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:39.988300   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:40.081007   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:40.312655   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:40.486841   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:40.583017   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:40.814862   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:40.991378   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:41.084949   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:41.312488   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:41.486705   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:41.583208   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:41.812185   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:41.987474   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:42.081648   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:42.320540   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:42.487828   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:42.588281   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:42.811937   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:42.987008   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:43.081062   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:43.312344   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:43.489462   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:43.580778   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:43.812433   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:43.987514   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:44.087429   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:44.315287   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:44.487711   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:44.580200   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:44.811873   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:45.000196   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:45.080558   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:45.314997   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:45.492610   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:45.581681   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:45.815128   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:45.987137   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:46.080783   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:46.312557   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:46.487720   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:46.583038   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:46.812051   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:46.986544   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:47.081350   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:47.311599   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:47.487110   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:47.580700   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:47.812997   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:47.986922   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:48.080420   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:48.311397   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:48.486365   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:48.581127   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:48.815408   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:48.987143   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:49.080998   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:49.312595   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:49.486745   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:49.581175   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:49.812100   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:49.986765   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:50.080703   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:50.312173   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:50.487469   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:50.580789   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:50.813167   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:51.004072   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:51.082921   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:51.315081   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:51.486907   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:51.582951   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:51.812667   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:51.986763   22923 kapi.go:107] duration metric: took 1m14.004357399s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0927 00:17:52.081726   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:52.312108   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:52.581247   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:52.811383   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:53.081164   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:53.311077   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:53.580614   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:53.811860   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:54.085731   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:54.311903   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:54.581015   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:54.812698   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:55.080114   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:55.312140   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:55.580929   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:55.812076   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:56.080795   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:56.315916   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:56.580324   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:56.813652   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:57.081490   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:57.318121   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:57.580543   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:57.813190   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:58.081274   22923 kapi.go:107] duration metric: took 1m17.004168732s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0927 00:17:58.083013   22923 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-364775 cluster.
	I0927 00:17:58.084321   22923 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0927 00:17:58.085650   22923 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0927 00:17:58.311273   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:58.813554   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:59.314920   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:59.811122   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:18:00.312742   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:18:00.813283   22923 kapi.go:107] duration metric: took 1m21.006383462s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0927 00:18:00.814917   22923 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner-rancher, storage-provisioner, ingress-dns, nvidia-device-plugin, metrics-server, yakd, default-storageclass, inspektor-gadget, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0927 00:18:00.816192   22923 addons.go:510] duration metric: took 1m30.557986461s for enable addons: enabled=[cloud-spanner storage-provisioner-rancher storage-provisioner ingress-dns nvidia-device-plugin metrics-server yakd default-storageclass inspektor-gadget volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0927 00:18:00.816230   22923 start.go:246] waiting for cluster config update ...
	I0927 00:18:00.816255   22923 start.go:255] writing updated cluster config ...
	I0927 00:18:00.816798   22923 ssh_runner.go:195] Run: rm -f paused
	I0927 00:18:00.876391   22923 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0927 00:18:00.878075   22923 out.go:177] * Done! kubectl is now configured to use "addons-364775" cluster and "default" namespace by default
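	The gcp-auth messages near the end of the log above describe two follow-up actions. As a minimal sketch (assuming the profile name addons-364775 from this run and a hypothetical pod named demo), a pod can opt out of credential mounting via the gcp-auth-skip-secret label mentioned in the log, and already-running pods can be refreshed by re-running the addon with --refresh:
	
	  # Hypothetical pod "demo"; the gcp-auth-skip-secret label key is the one named in the log output above.
	  kubectl --context addons-364775 run demo --image=busybox --labels=gcp-auth-skip-secret=true --restart=Never -- sleep 3600
	
	  # Re-enable the addon with --refresh (as the log suggests) so existing pods pick up the credential mount.
	  minikube -p addons-364775 addons enable gcp-auth --refresh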
	
	
	==> CRI-O <==
	Sep 27 00:27:16 addons-364775 crio[667]: time="2024-09-27 00:27:16.572324700Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727396836572295957,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:555086,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ea265afa-7055-4696-ba94-be211796829b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:27:16 addons-364775 crio[667]: time="2024-09-27 00:27:16.573162939Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=82558f77-0373-4574-bbee-93b654b1a7cc name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:27:16 addons-364775 crio[667]: time="2024-09-27 00:27:16.573224104Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=82558f77-0373-4574-bbee-93b654b1a7cc name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:27:16 addons-364775 crio[667]: time="2024-09-27 00:27:16.573616711Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2f56d28266df1f5faa6eb2355e55623fc62980f15fa25d3642e685605e783225,PodSandboxId:66d36199ff7c6e8bebcda3486789272f40a6b5883ae51bb6576ff20cec980828,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:9186e638ccc30c5d1a2efd5a2cd632f49bb5013f164f6f85c48ed6fce90fe38f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6fd955f66c231c1a946653170d096a28ac2b2052a02080c0b84ec082a07f7d12,State:CONTAINER_EXITED,CreatedAt:1727396836151429044,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 19934d44-6957-4e22-a4ed-554922813c1b,},Annotations:map[string]string{io.kubernetes.container.hash: dd3595ac,io.kubernetes.container.restartCount: 0,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:041c7a7b89fdcdb5542ebcf95f92fabb57e4d7a583736007cf45a26e63e31c18,PodSandboxId:32d63f67722c094f97be96b7d6d654e020830df267fa24c7dba73013961757b0,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1727396829916612141,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-create-pvc-eaf13455-05db-4681-afdd-103662b6f350,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: db7e9f4c-1017-493b-9e63-01ff377e7cfb,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34468cf471df6b4d1719cac0509d0ac2e68794dbbb2e0bd0454bed19262aac76,PodSandboxId:d1dd36f55b9f4df75602b762e9d7c54990b8b804646bcd7232366294a7a8a44d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727396817750350677,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 79a9bd72-f93d-4276-b274-754e05f94f32,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"contai
nerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44f5c0760c47e0ae8b4f8bae5ad90bd953ca8d8938486256754d700af225e8fe,PodSandboxId:3f91389aebb948a4455c2f88073d3e783525caebdf4a263e7236841b5bb1afd5,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1727396277275483601,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-xndcj,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 8f6a3c0b-7425-4b56-b74c-882bc39a365a,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f91cdf813e0544e2bbce54611b768c856090e4e2aa0ee638eb6ab3280293928,PodSandboxId:28d29f12c92040f5869c55adf339347ac519011a291021cbd08b8dd0b8d71f9e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1727396270648135405,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-lwpdj,io.kub
ernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 05ceb4d4-fce0-42a2-955e-20ca7157e61d,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:e0b6435d45d86ba1d6dc39dd8ceacf1c2a8cbac00201479cc3af3beb3a8bc465,PodSandboxId:5c06cf398461a183eaff98db964c09924a4d3e7240409efe4821afef6a8ab082,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e
1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727396253074028946,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-ljq5t,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2f172857-2e68-4af5-8e3b-01d68b6db792,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66ac2c2cec7c0d94b29d54fffae2e0817f1b1e23b57db1ef3954fdf0b7868f97,PodSandboxId:136447c84e896287a7df88ae9c408a61002b340122e57e84c8953edac27c8d14,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kub
e-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727396252934896617,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-s9h7h,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 186ee242-70b2-44fa-97c3-6e02dbe6c6db,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c28e478dcc66f75644898b30424c4f564d60b956f8b6bc5d0ca45a0361694fc,PodSandboxId:b4b4f9b1ecb2ba3f5de3e5a756687b3e554d5f68359b0642413afb471b3704e0,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Im
age:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1727396245062702054,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-97vtb,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: e021d3d3-622e-43b3-9858-36708a4962c1,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d99658b6248cb6d7adbe30adea145daee8d51d928679ddfe00ea21df214c6a9b,PodSandboxId:fb65094aed9fd19f176dd4424f6d4af9c7480e6aaa7b5b39abbd2a2e70ec20b0,M
etadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:808e53444d3e17ec94b5a0998f50e49632645aecb24b76a14447446319c7de4d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95f5e1a9b5a014c2bc7ffab89c7d11ed35734dd718de3307b0aa56e4114e7035,State:CONTAINER_EXITED,CreatedAt:1727396237188008960,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-2rlvs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5080c804-a6a8-4239-bd3f-a89d8f114f0c,},Annotations:map[string]string{io.kubernetes.container.hash: 91df2399,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0
f657323aa9c1bbb9aa24e608038de3f45bcbca9c513c22ad6d9bb0b5e6a9e5,PodSandboxId:01c685071267ae98d6510fe3d27218d712e34336bbdf7d86620b3f4db8e227c3,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:be105fc4b12849783aa20d987a35b86ed5296669595f8a7b2d79ad0cd8e193bf,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2ebbaeeba1bd01a80097b8a834ff2a86498d89f3ea11470c0f0ba298931b7cb,State:CONTAINER_RUNNING,CreatedAt:1727396232205071298,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-5b584cc74-wtrdk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 89e689ae-58ff-4ed7-98ad-e9bc0f622024,},Annotations:map[string]string{io.kubernetes.container.hash: fda6bb5,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f275dd687cff30b8740e6f69dc6187675d046d74150405c9f82479ca5df3e9ed,PodSandboxId:c1e30cef3f033cafd465f272de5eba81715c40ca8436ee00d1145f2d3204512b,Metadata:&ContainerMetadata{Name:registry,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/registry@sha256:5e8c7f954d64eb89a98a3f84b6dd1e1f4a9cf3d25e41575dd0a96d3e3363cba7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:75ef5b734af47dc41ff2fb442f287ee08c7da31dddb3759616a8f693f0f346a0,State:CONTAINER_EXITED,CreatedAt:1727396224064997551,Labels:map[string]string{io.kubernetes.container.name: registry,io.kubernetes.pod.name: registry-66c9cd494c-kdt5f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 652ee744-ff06-40fe-a66f-aabff5476e31,},Annotations:map[string]string{io.kubernetes.container.hash: 49fa49ac,io.kubern
etes.container.ports: [{\"containerPort\":5000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:783b25dfa3713591d703f7a84bb3d46e56d7b503605979b66d3e1446574c485d,PodSandboxId:aa4c5a90b10dfb776204e750d8bdb4bac6952bf40535eb60c1359852d6016ce4,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1727396220568625504,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 8bb056cc-4ad8-48da-bad9-aec78168a573,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77e2cbcfd0c9c671e3819d532fbc1eb140f08a91746f385066cfa7816bb23f31,PodSandboxId:e55373ee380963dbf7c0993260242c8962e6b10c6ce9d89e167afcab86ae1828,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1727396201136847565,Labels:map[string]string{io.kubernetes.container
.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-h74zz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ee23e82-6d41-48b5-a303-16f6ebd60172,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2392c10311ecba4ad854e936976dfeca45567492e61de8604f1324981400707e,PodSandboxId:c88fbf538e03933e6e355ca88933702b9d752071bbf75429d386ee325a9ded3b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CON
TAINER_RUNNING,CreatedAt:1727396197101745329,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2787e80-d152-46a1-9672-af83ebbb8e9d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb092a183ee879a4948c4ef6efe4289548da1f2948fe91a1b2ef6ac8db5a62a2,PodSandboxId:9c525627d0e811d0f823065b6bbe1f17c4cfb5fbc4689f3775ccb5749a360d32,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt
:1727396195134163553,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gd2h2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a9f1c5a-89df-497e-a9fa-4a5d427542c0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa7e6a02565d07c2042b8e4832d33799151a9b767813a6f56f5ad935f6f92586,PodSandboxId:24f13f826689a603fa3389d546afc6e1932efac63d260ce80320c7c00e451ff7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063e
aaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727396190965652931,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vj2cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2579736-b094-4822-82ce-2ce53d815d92,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee201c0719a52c59263614ccb1b06b1ed92df1c3e374d2bec21766eef5129754,PodSandboxId:27e25445505608eef7b597a702838b106cc52f7032b8da7078df79fcaa090c65,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[st
ring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727396180016263956,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-364775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 189875bacab913074c40f02258ce917c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:941f64fde84f05119ee38d1a5464cd871c06b706b54fa1fe284535e8214009c8,PodSandboxId:4602faee6ddead3caef7fcd709a94705f68ab971149a7cb0ff5949d9d9af4260,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,Run
timeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727396179965876700,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-364775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e6f174888739dcf82da53be270fcf0b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d21d052488b358d50d3915ffdf2b08eee589a26c15c59d3f1480ede3811db54,PodSandboxId:6bb1edfce2faf865f3ed5b681c2fcb8082f56cd827dd4c23ed98c03d31ab4dfa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Im
ageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727396179941558858,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-364775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8210072b33b53cf82c21ea71cd377f9f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02d48ea4cc0d31074e83240e2912b935fa3a7e4030e676e56a97fdf651652bee,PodSandboxId:81dc5c65d7d85ee3fc141806c54fe9d5547728bad51f9d951bd05c464b6ee1f3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1
75ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727396179935671742,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-364775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25c3dce61f3e473bca9c62fbb58b9036,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=82558f77-0373-4574-bbee-93b654b1a7cc name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:27:16 addons-364775 crio[667]: time="2024-09-27 00:27:16.607906214Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=397d60a0-272a-470d-a8e4-93c228585994 name=/runtime.v1.RuntimeService/Version
	Sep 27 00:27:16 addons-364775 crio[667]: time="2024-09-27 00:27:16.608123753Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=397d60a0-272a-470d-a8e4-93c228585994 name=/runtime.v1.RuntimeService/Version
	Sep 27 00:27:16 addons-364775 crio[667]: time="2024-09-27 00:27:16.609313216Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8365e0b0-cb78-47d9-9a12-539db1993c06 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:27:16 addons-364775 crio[667]: time="2024-09-27 00:27:16.610531953Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727396836610505375,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:555086,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8365e0b0-cb78-47d9-9a12-539db1993c06 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:27:16 addons-364775 crio[667]: time="2024-09-27 00:27:16.611193151Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=80c42dbb-172b-4793-8fbe-1875cbab2128 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:27:16 addons-364775 crio[667]: time="2024-09-27 00:27:16.611561467Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=80c42dbb-172b-4793-8fbe-1875cbab2128 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:27:16 addons-364775 crio[667]: time="2024-09-27 00:27:16.612012813Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2f56d28266df1f5faa6eb2355e55623fc62980f15fa25d3642e685605e783225,PodSandboxId:66d36199ff7c6e8bebcda3486789272f40a6b5883ae51bb6576ff20cec980828,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:9186e638ccc30c5d1a2efd5a2cd632f49bb5013f164f6f85c48ed6fce90fe38f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6fd955f66c231c1a946653170d096a28ac2b2052a02080c0b84ec082a07f7d12,State:CONTAINER_EXITED,CreatedAt:1727396836151429044,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 19934d44-6957-4e22-a4ed-554922813c1b,},Annotations:map[string]string{io.kubernetes.container.hash: dd3595ac,io.kubernetes.container.restartCount: 0,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:041c7a7b89fdcdb5542ebcf95f92fabb57e4d7a583736007cf45a26e63e31c18,PodSandboxId:32d63f67722c094f97be96b7d6d654e020830df267fa24c7dba73013961757b0,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1727396829916612141,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-create-pvc-eaf13455-05db-4681-afdd-103662b6f350,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: db7e9f4c-1017-493b-9e63-01ff377e7cfb,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34468cf471df6b4d1719cac0509d0ac2e68794dbbb2e0bd0454bed19262aac76,PodSandboxId:d1dd36f55b9f4df75602b762e9d7c54990b8b804646bcd7232366294a7a8a44d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727396817750350677,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 79a9bd72-f93d-4276-b274-754e05f94f32,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"contai
nerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44f5c0760c47e0ae8b4f8bae5ad90bd953ca8d8938486256754d700af225e8fe,PodSandboxId:3f91389aebb948a4455c2f88073d3e783525caebdf4a263e7236841b5bb1afd5,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1727396277275483601,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-xndcj,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 8f6a3c0b-7425-4b56-b74c-882bc39a365a,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f91cdf813e0544e2bbce54611b768c856090e4e2aa0ee638eb6ab3280293928,PodSandboxId:28d29f12c92040f5869c55adf339347ac519011a291021cbd08b8dd0b8d71f9e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1727396270648135405,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-lwpdj,io.kub
ernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 05ceb4d4-fce0-42a2-955e-20ca7157e61d,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:e0b6435d45d86ba1d6dc39dd8ceacf1c2a8cbac00201479cc3af3beb3a8bc465,PodSandboxId:5c06cf398461a183eaff98db964c09924a4d3e7240409efe4821afef6a8ab082,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e
1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727396253074028946,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-ljq5t,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2f172857-2e68-4af5-8e3b-01d68b6db792,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66ac2c2cec7c0d94b29d54fffae2e0817f1b1e23b57db1ef3954fdf0b7868f97,PodSandboxId:136447c84e896287a7df88ae9c408a61002b340122e57e84c8953edac27c8d14,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kub
e-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727396252934896617,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-s9h7h,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 186ee242-70b2-44fa-97c3-6e02dbe6c6db,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c28e478dcc66f75644898b30424c4f564d60b956f8b6bc5d0ca45a0361694fc,PodSandboxId:b4b4f9b1ecb2ba3f5de3e5a756687b3e554d5f68359b0642413afb471b3704e0,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Im
age:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1727396245062702054,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-97vtb,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: e021d3d3-622e-43b3-9858-36708a4962c1,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d99658b6248cb6d7adbe30adea145daee8d51d928679ddfe00ea21df214c6a9b,PodSandboxId:fb65094aed9fd19f176dd4424f6d4af9c7480e6aaa7b5b39abbd2a2e70ec20b0,M
etadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:808e53444d3e17ec94b5a0998f50e49632645aecb24b76a14447446319c7de4d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95f5e1a9b5a014c2bc7ffab89c7d11ed35734dd718de3307b0aa56e4114e7035,State:CONTAINER_EXITED,CreatedAt:1727396237188008960,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-2rlvs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5080c804-a6a8-4239-bd3f-a89d8f114f0c,},Annotations:map[string]string{io.kubernetes.container.hash: 91df2399,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0
f657323aa9c1bbb9aa24e608038de3f45bcbca9c513c22ad6d9bb0b5e6a9e5,PodSandboxId:01c685071267ae98d6510fe3d27218d712e34336bbdf7d86620b3f4db8e227c3,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:be105fc4b12849783aa20d987a35b86ed5296669595f8a7b2d79ad0cd8e193bf,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2ebbaeeba1bd01a80097b8a834ff2a86498d89f3ea11470c0f0ba298931b7cb,State:CONTAINER_RUNNING,CreatedAt:1727396232205071298,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-5b584cc74-wtrdk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 89e689ae-58ff-4ed7-98ad-e9bc0f622024,},Annotations:map[string]string{io.kubernetes.container.hash: fda6bb5,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f275dd687cff30b8740e6f69dc6187675d046d74150405c9f82479ca5df3e9ed,PodSandboxId:c1e30cef3f033cafd465f272de5eba81715c40ca8436ee00d1145f2d3204512b,Metadata:&ContainerMetadata{Name:registry,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/registry@sha256:5e8c7f954d64eb89a98a3f84b6dd1e1f4a9cf3d25e41575dd0a96d3e3363cba7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:75ef5b734af47dc41ff2fb442f287ee08c7da31dddb3759616a8f693f0f346a0,State:CONTAINER_EXITED,CreatedAt:1727396224064997551,Labels:map[string]string{io.kubernetes.container.name: registry,io.kubernetes.pod.name: registry-66c9cd494c-kdt5f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 652ee744-ff06-40fe-a66f-aabff5476e31,},Annotations:map[string]string{io.kubernetes.container.hash: 49fa49ac,io.kubern
etes.container.ports: [{\"containerPort\":5000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:783b25dfa3713591d703f7a84bb3d46e56d7b503605979b66d3e1446574c485d,PodSandboxId:aa4c5a90b10dfb776204e750d8bdb4bac6952bf40535eb60c1359852d6016ce4,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1727396220568625504,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 8bb056cc-4ad8-48da-bad9-aec78168a573,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77e2cbcfd0c9c671e3819d532fbc1eb140f08a91746f385066cfa7816bb23f31,PodSandboxId:e55373ee380963dbf7c0993260242c8962e6b10c6ce9d89e167afcab86ae1828,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1727396201136847565,Labels:map[string]string{io.kubernetes.container
.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-h74zz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ee23e82-6d41-48b5-a303-16f6ebd60172,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2392c10311ecba4ad854e936976dfeca45567492e61de8604f1324981400707e,PodSandboxId:c88fbf538e03933e6e355ca88933702b9d752071bbf75429d386ee325a9ded3b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CON
TAINER_RUNNING,CreatedAt:1727396197101745329,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2787e80-d152-46a1-9672-af83ebbb8e9d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb092a183ee879a4948c4ef6efe4289548da1f2948fe91a1b2ef6ac8db5a62a2,PodSandboxId:9c525627d0e811d0f823065b6bbe1f17c4cfb5fbc4689f3775ccb5749a360d32,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt
:1727396195134163553,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gd2h2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a9f1c5a-89df-497e-a9fa-4a5d427542c0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa7e6a02565d07c2042b8e4832d33799151a9b767813a6f56f5ad935f6f92586,PodSandboxId:24f13f826689a603fa3389d546afc6e1932efac63d260ce80320c7c00e451ff7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063e
aaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727396190965652931,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vj2cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2579736-b094-4822-82ce-2ce53d815d92,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee201c0719a52c59263614ccb1b06b1ed92df1c3e374d2bec21766eef5129754,PodSandboxId:27e25445505608eef7b597a702838b106cc52f7032b8da7078df79fcaa090c65,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[st
ring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727396180016263956,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-364775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 189875bacab913074c40f02258ce917c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:941f64fde84f05119ee38d1a5464cd871c06b706b54fa1fe284535e8214009c8,PodSandboxId:4602faee6ddead3caef7fcd709a94705f68ab971149a7cb0ff5949d9d9af4260,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,Run
timeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727396179965876700,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-364775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e6f174888739dcf82da53be270fcf0b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d21d052488b358d50d3915ffdf2b08eee589a26c15c59d3f1480ede3811db54,PodSandboxId:6bb1edfce2faf865f3ed5b681c2fcb8082f56cd827dd4c23ed98c03d31ab4dfa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Im
ageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727396179941558858,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-364775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8210072b33b53cf82c21ea71cd377f9f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02d48ea4cc0d31074e83240e2912b935fa3a7e4030e676e56a97fdf651652bee,PodSandboxId:81dc5c65d7d85ee3fc141806c54fe9d5547728bad51f9d951bd05c464b6ee1f3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1
75ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727396179935671742,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-364775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25c3dce61f3e473bca9c62fbb58b9036,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=80c42dbb-172b-4793-8fbe-1875cbab2128 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:27:16 addons-364775 crio[667]: time="2024-09-27 00:27:16.648568515Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5887a10c-b50b-4f75-9f10-b9aa4912d5aa name=/runtime.v1.RuntimeService/Version
	Sep 27 00:27:16 addons-364775 crio[667]: time="2024-09-27 00:27:16.648641286Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5887a10c-b50b-4f75-9f10-b9aa4912d5aa name=/runtime.v1.RuntimeService/Version
	Sep 27 00:27:16 addons-364775 crio[667]: time="2024-09-27 00:27:16.649753966Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fd3ee487-8f30-46e3-bbde-d8999d660602 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:27:16 addons-364775 crio[667]: time="2024-09-27 00:27:16.651005262Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727396836650924448,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:555086,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fd3ee487-8f30-46e3-bbde-d8999d660602 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:27:16 addons-364775 crio[667]: time="2024-09-27 00:27:16.651542578Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6c300ded-2e30-4249-b30c-d1f2e3794f9e name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:27:16 addons-364775 crio[667]: time="2024-09-27 00:27:16.651593395Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6c300ded-2e30-4249-b30c-d1f2e3794f9e name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:27:16 addons-364775 crio[667]: time="2024-09-27 00:27:16.652037145Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2f56d28266df1f5faa6eb2355e55623fc62980f15fa25d3642e685605e783225,PodSandboxId:66d36199ff7c6e8bebcda3486789272f40a6b5883ae51bb6576ff20cec980828,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:9186e638ccc30c5d1a2efd5a2cd632f49bb5013f164f6f85c48ed6fce90fe38f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6fd955f66c231c1a946653170d096a28ac2b2052a02080c0b84ec082a07f7d12,State:CONTAINER_EXITED,CreatedAt:1727396836151429044,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 19934d44-6957-4e22-a4ed-554922813c1b,},Annotations:map[string]string{io.kubernetes.container.hash: dd3595ac,io.kubernetes.container.restartCount: 0,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:041c7a7b89fdcdb5542ebcf95f92fabb57e4d7a583736007cf45a26e63e31c18,PodSandboxId:32d63f67722c094f97be96b7d6d654e020830df267fa24c7dba73013961757b0,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1727396829916612141,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-create-pvc-eaf13455-05db-4681-afdd-103662b6f350,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: db7e9f4c-1017-493b-9e63-01ff377e7cfb,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34468cf471df6b4d1719cac0509d0ac2e68794dbbb2e0bd0454bed19262aac76,PodSandboxId:d1dd36f55b9f4df75602b762e9d7c54990b8b804646bcd7232366294a7a8a44d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727396817750350677,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 79a9bd72-f93d-4276-b274-754e05f94f32,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"contai
nerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44f5c0760c47e0ae8b4f8bae5ad90bd953ca8d8938486256754d700af225e8fe,PodSandboxId:3f91389aebb948a4455c2f88073d3e783525caebdf4a263e7236841b5bb1afd5,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1727396277275483601,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-xndcj,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 8f6a3c0b-7425-4b56-b74c-882bc39a365a,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f91cdf813e0544e2bbce54611b768c856090e4e2aa0ee638eb6ab3280293928,PodSandboxId:28d29f12c92040f5869c55adf339347ac519011a291021cbd08b8dd0b8d71f9e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1727396270648135405,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-lwpdj,io.kub
ernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 05ceb4d4-fce0-42a2-955e-20ca7157e61d,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:e0b6435d45d86ba1d6dc39dd8ceacf1c2a8cbac00201479cc3af3beb3a8bc465,PodSandboxId:5c06cf398461a183eaff98db964c09924a4d3e7240409efe4821afef6a8ab082,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e
1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727396253074028946,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-ljq5t,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2f172857-2e68-4af5-8e3b-01d68b6db792,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66ac2c2cec7c0d94b29d54fffae2e0817f1b1e23b57db1ef3954fdf0b7868f97,PodSandboxId:136447c84e896287a7df88ae9c408a61002b340122e57e84c8953edac27c8d14,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kub
e-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727396252934896617,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-s9h7h,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 186ee242-70b2-44fa-97c3-6e02dbe6c6db,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c28e478dcc66f75644898b30424c4f564d60b956f8b6bc5d0ca45a0361694fc,PodSandboxId:b4b4f9b1ecb2ba3f5de3e5a756687b3e554d5f68359b0642413afb471b3704e0,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Im
age:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1727396245062702054,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-97vtb,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: e021d3d3-622e-43b3-9858-36708a4962c1,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d99658b6248cb6d7adbe30adea145daee8d51d928679ddfe00ea21df214c6a9b,PodSandboxId:fb65094aed9fd19f176dd4424f6d4af9c7480e6aaa7b5b39abbd2a2e70ec20b0,M
etadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:808e53444d3e17ec94b5a0998f50e49632645aecb24b76a14447446319c7de4d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95f5e1a9b5a014c2bc7ffab89c7d11ed35734dd718de3307b0aa56e4114e7035,State:CONTAINER_EXITED,CreatedAt:1727396237188008960,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-2rlvs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5080c804-a6a8-4239-bd3f-a89d8f114f0c,},Annotations:map[string]string{io.kubernetes.container.hash: 91df2399,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0
f657323aa9c1bbb9aa24e608038de3f45bcbca9c513c22ad6d9bb0b5e6a9e5,PodSandboxId:01c685071267ae98d6510fe3d27218d712e34336bbdf7d86620b3f4db8e227c3,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:be105fc4b12849783aa20d987a35b86ed5296669595f8a7b2d79ad0cd8e193bf,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2ebbaeeba1bd01a80097b8a834ff2a86498d89f3ea11470c0f0ba298931b7cb,State:CONTAINER_RUNNING,CreatedAt:1727396232205071298,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-5b584cc74-wtrdk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 89e689ae-58ff-4ed7-98ad-e9bc0f622024,},Annotations:map[string]string{io.kubernetes.container.hash: fda6bb5,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f275dd687cff30b8740e6f69dc6187675d046d74150405c9f82479ca5df3e9ed,PodSandboxId:c1e30cef3f033cafd465f272de5eba81715c40ca8436ee00d1145f2d3204512b,Metadata:&ContainerMetadata{Name:registry,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/registry@sha256:5e8c7f954d64eb89a98a3f84b6dd1e1f4a9cf3d25e41575dd0a96d3e3363cba7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:75ef5b734af47dc41ff2fb442f287ee08c7da31dddb3759616a8f693f0f346a0,State:CONTAINER_EXITED,CreatedAt:1727396224064997551,Labels:map[string]string{io.kubernetes.container.name: registry,io.kubernetes.pod.name: registry-66c9cd494c-kdt5f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 652ee744-ff06-40fe-a66f-aabff5476e31,},Annotations:map[string]string{io.kubernetes.container.hash: 49fa49ac,io.kubern
etes.container.ports: [{\"containerPort\":5000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:783b25dfa3713591d703f7a84bb3d46e56d7b503605979b66d3e1446574c485d,PodSandboxId:aa4c5a90b10dfb776204e750d8bdb4bac6952bf40535eb60c1359852d6016ce4,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1727396220568625504,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 8bb056cc-4ad8-48da-bad9-aec78168a573,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77e2cbcfd0c9c671e3819d532fbc1eb140f08a91746f385066cfa7816bb23f31,PodSandboxId:e55373ee380963dbf7c0993260242c8962e6b10c6ce9d89e167afcab86ae1828,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1727396201136847565,Labels:map[string]string{io.kubernetes.container
.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-h74zz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ee23e82-6d41-48b5-a303-16f6ebd60172,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2392c10311ecba4ad854e936976dfeca45567492e61de8604f1324981400707e,PodSandboxId:c88fbf538e03933e6e355ca88933702b9d752071bbf75429d386ee325a9ded3b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CON
TAINER_RUNNING,CreatedAt:1727396197101745329,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2787e80-d152-46a1-9672-af83ebbb8e9d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb092a183ee879a4948c4ef6efe4289548da1f2948fe91a1b2ef6ac8db5a62a2,PodSandboxId:9c525627d0e811d0f823065b6bbe1f17c4cfb5fbc4689f3775ccb5749a360d32,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt
:1727396195134163553,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gd2h2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a9f1c5a-89df-497e-a9fa-4a5d427542c0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa7e6a02565d07c2042b8e4832d33799151a9b767813a6f56f5ad935f6f92586,PodSandboxId:24f13f826689a603fa3389d546afc6e1932efac63d260ce80320c7c00e451ff7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063e
aaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727396190965652931,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vj2cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2579736-b094-4822-82ce-2ce53d815d92,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee201c0719a52c59263614ccb1b06b1ed92df1c3e374d2bec21766eef5129754,PodSandboxId:27e25445505608eef7b597a702838b106cc52f7032b8da7078df79fcaa090c65,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[st
ring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727396180016263956,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-364775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 189875bacab913074c40f02258ce917c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:941f64fde84f05119ee38d1a5464cd871c06b706b54fa1fe284535e8214009c8,PodSandboxId:4602faee6ddead3caef7fcd709a94705f68ab971149a7cb0ff5949d9d9af4260,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,Run
timeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727396179965876700,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-364775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e6f174888739dcf82da53be270fcf0b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d21d052488b358d50d3915ffdf2b08eee589a26c15c59d3f1480ede3811db54,PodSandboxId:6bb1edfce2faf865f3ed5b681c2fcb8082f56cd827dd4c23ed98c03d31ab4dfa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Im
ageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727396179941558858,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-364775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8210072b33b53cf82c21ea71cd377f9f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02d48ea4cc0d31074e83240e2912b935fa3a7e4030e676e56a97fdf651652bee,PodSandboxId:81dc5c65d7d85ee3fc141806c54fe9d5547728bad51f9d951bd05c464b6ee1f3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1
75ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727396179935671742,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-364775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25c3dce61f3e473bca9c62fbb58b9036,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6c300ded-2e30-4249-b30c-d1f2e3794f9e name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:27:16 addons-364775 crio[667]: time="2024-09-27 00:27:16.692771957Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=14cfe65b-b969-4cef-bfb6-bf87c771ce05 name=/runtime.v1.RuntimeService/Version
	Sep 27 00:27:16 addons-364775 crio[667]: time="2024-09-27 00:27:16.692864845Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=14cfe65b-b969-4cef-bfb6-bf87c771ce05 name=/runtime.v1.RuntimeService/Version
	Sep 27 00:27:16 addons-364775 crio[667]: time="2024-09-27 00:27:16.694321179Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=76dd3ed5-8345-4db3-b7e3-43229d36d6e1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:27:16 addons-364775 crio[667]: time="2024-09-27 00:27:16.695546999Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727396836695521567,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:555086,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=76dd3ed5-8345-4db3-b7e3-43229d36d6e1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:27:16 addons-364775 crio[667]: time="2024-09-27 00:27:16.696312973Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b4ff9752-d97c-4555-8c74-7889f2b7287d name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:27:16 addons-364775 crio[667]: time="2024-09-27 00:27:16.696389003Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b4ff9752-d97c-4555-8c74-7889f2b7287d name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:27:16 addons-364775 crio[667]: time="2024-09-27 00:27:16.696892926Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2f56d28266df1f5faa6eb2355e55623fc62980f15fa25d3642e685605e783225,PodSandboxId:66d36199ff7c6e8bebcda3486789272f40a6b5883ae51bb6576ff20cec980828,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:9186e638ccc30c5d1a2efd5a2cd632f49bb5013f164f6f85c48ed6fce90fe38f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6fd955f66c231c1a946653170d096a28ac2b2052a02080c0b84ec082a07f7d12,State:CONTAINER_EXITED,CreatedAt:1727396836151429044,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 19934d44-6957-4e22-a4ed-554922813c1b,},Annotations:map[string]string{io.kubernetes.container.hash: dd3595ac,io.kubernetes.container.restartCount: 0,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:041c7a7b89fdcdb5542ebcf95f92fabb57e4d7a583736007cf45a26e63e31c18,PodSandboxId:32d63f67722c094f97be96b7d6d654e020830df267fa24c7dba73013961757b0,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1727396829916612141,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-create-pvc-eaf13455-05db-4681-afdd-103662b6f350,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: db7e9f4c-1017-493b-9e63-01ff377e7cfb,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34468cf471df6b4d1719cac0509d0ac2e68794dbbb2e0bd0454bed19262aac76,PodSandboxId:d1dd36f55b9f4df75602b762e9d7c54990b8b804646bcd7232366294a7a8a44d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727396817750350677,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 79a9bd72-f93d-4276-b274-754e05f94f32,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"contai
nerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44f5c0760c47e0ae8b4f8bae5ad90bd953ca8d8938486256754d700af225e8fe,PodSandboxId:3f91389aebb948a4455c2f88073d3e783525caebdf4a263e7236841b5bb1afd5,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1727396277275483601,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-xndcj,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 8f6a3c0b-7425-4b56-b74c-882bc39a365a,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f91cdf813e0544e2bbce54611b768c856090e4e2aa0ee638eb6ab3280293928,PodSandboxId:28d29f12c92040f5869c55adf339347ac519011a291021cbd08b8dd0b8d71f9e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1727396270648135405,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-lwpdj,io.kub
ernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 05ceb4d4-fce0-42a2-955e-20ca7157e61d,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:e0b6435d45d86ba1d6dc39dd8ceacf1c2a8cbac00201479cc3af3beb3a8bc465,PodSandboxId:5c06cf398461a183eaff98db964c09924a4d3e7240409efe4821afef6a8ab082,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e
1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727396253074028946,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-ljq5t,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2f172857-2e68-4af5-8e3b-01d68b6db792,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66ac2c2cec7c0d94b29d54fffae2e0817f1b1e23b57db1ef3954fdf0b7868f97,PodSandboxId:136447c84e896287a7df88ae9c408a61002b340122e57e84c8953edac27c8d14,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kub
e-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727396252934896617,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-s9h7h,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 186ee242-70b2-44fa-97c3-6e02dbe6c6db,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c28e478dcc66f75644898b30424c4f564d60b956f8b6bc5d0ca45a0361694fc,PodSandboxId:b4b4f9b1ecb2ba3f5de3e5a756687b3e554d5f68359b0642413afb471b3704e0,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Im
age:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1727396245062702054,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-97vtb,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: e021d3d3-622e-43b3-9858-36708a4962c1,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d99658b6248cb6d7adbe30adea145daee8d51d928679ddfe00ea21df214c6a9b,PodSandboxId:fb65094aed9fd19f176dd4424f6d4af9c7480e6aaa7b5b39abbd2a2e70ec20b0,M
etadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:808e53444d3e17ec94b5a0998f50e49632645aecb24b76a14447446319c7de4d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95f5e1a9b5a014c2bc7ffab89c7d11ed35734dd718de3307b0aa56e4114e7035,State:CONTAINER_EXITED,CreatedAt:1727396237188008960,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-2rlvs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5080c804-a6a8-4239-bd3f-a89d8f114f0c,},Annotations:map[string]string{io.kubernetes.container.hash: 91df2399,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0
f657323aa9c1bbb9aa24e608038de3f45bcbca9c513c22ad6d9bb0b5e6a9e5,PodSandboxId:01c685071267ae98d6510fe3d27218d712e34336bbdf7d86620b3f4db8e227c3,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:be105fc4b12849783aa20d987a35b86ed5296669595f8a7b2d79ad0cd8e193bf,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2ebbaeeba1bd01a80097b8a834ff2a86498d89f3ea11470c0f0ba298931b7cb,State:CONTAINER_RUNNING,CreatedAt:1727396232205071298,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-5b584cc74-wtrdk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 89e689ae-58ff-4ed7-98ad-e9bc0f622024,},Annotations:map[string]string{io.kubernetes.container.hash: fda6bb5,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f275dd687cff30b8740e6f69dc6187675d046d74150405c9f82479ca5df3e9ed,PodSandboxId:c1e30cef3f033cafd465f272de5eba81715c40ca8436ee00d1145f2d3204512b,Metadata:&ContainerMetadata{Name:registry,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/registry@sha256:5e8c7f954d64eb89a98a3f84b6dd1e1f4a9cf3d25e41575dd0a96d3e3363cba7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:75ef5b734af47dc41ff2fb442f287ee08c7da31dddb3759616a8f693f0f346a0,State:CONTAINER_EXITED,CreatedAt:1727396224064997551,Labels:map[string]string{io.kubernetes.container.name: registry,io.kubernetes.pod.name: registry-66c9cd494c-kdt5f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 652ee744-ff06-40fe-a66f-aabff5476e31,},Annotations:map[string]string{io.kubernetes.container.hash: 49fa49ac,io.kubern
etes.container.ports: [{\"containerPort\":5000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:783b25dfa3713591d703f7a84bb3d46e56d7b503605979b66d3e1446574c485d,PodSandboxId:aa4c5a90b10dfb776204e750d8bdb4bac6952bf40535eb60c1359852d6016ce4,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1727396220568625504,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 8bb056cc-4ad8-48da-bad9-aec78168a573,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77e2cbcfd0c9c671e3819d532fbc1eb140f08a91746f385066cfa7816bb23f31,PodSandboxId:e55373ee380963dbf7c0993260242c8962e6b10c6ce9d89e167afcab86ae1828,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1727396201136847565,Labels:map[string]string{io.kubernetes.container
.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-h74zz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ee23e82-6d41-48b5-a303-16f6ebd60172,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2392c10311ecba4ad854e936976dfeca45567492e61de8604f1324981400707e,PodSandboxId:c88fbf538e03933e6e355ca88933702b9d752071bbf75429d386ee325a9ded3b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CON
TAINER_RUNNING,CreatedAt:1727396197101745329,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2787e80-d152-46a1-9672-af83ebbb8e9d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb092a183ee879a4948c4ef6efe4289548da1f2948fe91a1b2ef6ac8db5a62a2,PodSandboxId:9c525627d0e811d0f823065b6bbe1f17c4cfb5fbc4689f3775ccb5749a360d32,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt
:1727396195134163553,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gd2h2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a9f1c5a-89df-497e-a9fa-4a5d427542c0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa7e6a02565d07c2042b8e4832d33799151a9b767813a6f56f5ad935f6f92586,PodSandboxId:24f13f826689a603fa3389d546afc6e1932efac63d260ce80320c7c00e451ff7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063e
aaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727396190965652931,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vj2cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2579736-b094-4822-82ce-2ce53d815d92,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee201c0719a52c59263614ccb1b06b1ed92df1c3e374d2bec21766eef5129754,PodSandboxId:27e25445505608eef7b597a702838b106cc52f7032b8da7078df79fcaa090c65,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[st
ring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727396180016263956,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-364775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 189875bacab913074c40f02258ce917c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:941f64fde84f05119ee38d1a5464cd871c06b706b54fa1fe284535e8214009c8,PodSandboxId:4602faee6ddead3caef7fcd709a94705f68ab971149a7cb0ff5949d9d9af4260,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,Run
timeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727396179965876700,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-364775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e6f174888739dcf82da53be270fcf0b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d21d052488b358d50d3915ffdf2b08eee589a26c15c59d3f1480ede3811db54,PodSandboxId:6bb1edfce2faf865f3ed5b681c2fcb8082f56cd827dd4c23ed98c03d31ab4dfa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Im
ageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727396179941558858,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-364775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8210072b33b53cf82c21ea71cd377f9f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02d48ea4cc0d31074e83240e2912b935fa3a7e4030e676e56a97fdf651652bee,PodSandboxId:81dc5c65d7d85ee3fc141806c54fe9d5547728bad51f9d951bd05c464b6ee1f3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1
75ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727396179935671742,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-364775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25c3dce61f3e473bca9c62fbb58b9036,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b4ff9752-d97c-4555-8c74-7889f2b7287d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	2f56d28266df1       docker.io/library/busybox@sha256:9186e638ccc30c5d1a2efd5a2cd632f49bb5013f164f6f85c48ed6fce90fe38f                            Less than a second ago   Exited              busybox                   0                   66d36199ff7c6       test-local-path
	041c7a7b89fdc       docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee                            6 seconds ago            Exited              helper-pod                0                   32d63f67722c0       helper-pod-create-pvc-eaf13455-05db-4681-afdd-103662b6f350
	34468cf471df6       docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56                              19 seconds ago           Running             nginx                     0                   d1dd36f55b9f4       nginx
	44f5c0760c47e       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 9 minutes ago            Running             gcp-auth                  0                   3f91389aebb94       gcp-auth-89d5ffd79-xndcj
	4f91cdf813e05       registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6             9 minutes ago            Running             controller                0                   28d29f12c9204       ingress-nginx-controller-bc57996ff-lwpdj
	e0b6435d45d86       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   9 minutes ago            Exited              patch                     0                   5c06cf398461a       ingress-nginx-admission-patch-ljq5t
	66ac2c2cec7c0       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   9 minutes ago            Exited              create                    0                   136447c84e896       ingress-nginx-admission-create-s9h7h
	8c28e478dcc66       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             9 minutes ago            Running             local-path-provisioner    0                   b4b4f9b1ecb2b       local-path-provisioner-86d989889c-97vtb
	d99658b6248cb       gcr.io/k8s-minikube/kube-registry-proxy@sha256:808e53444d3e17ec94b5a0998f50e49632645aecb24b76a14447446319c7de4d              9 minutes ago            Exited              registry-proxy            0                   fb65094aed9fd       registry-proxy-2rlvs
	a0f657323aa9c       gcr.io/cloud-spanner-emulator/emulator@sha256:be105fc4b12849783aa20d987a35b86ed5296669595f8a7b2d79ad0cd8e193bf               10 minutes ago           Running             cloud-spanner-emulator    0                   01c685071267a       cloud-spanner-emulator-5b584cc74-wtrdk
	f275dd687cff3       docker.io/library/registry@sha256:5e8c7f954d64eb89a98a3f84b6dd1e1f4a9cf3d25e41575dd0a96d3e3363cba7                           10 minutes ago           Exited              registry                  0                   c1e30cef3f033       registry-66c9cd494c-kdt5f
	783b25dfa3713       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             10 minutes ago           Running             minikube-ingress-dns      0                   aa4c5a90b10df       kube-ingress-dns-minikube
	77e2cbcfd0c9c       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        10 minutes ago           Running             metrics-server            0                   e55373ee38096       metrics-server-84c5f94fbc-h74zz
	2392c10311ecb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             10 minutes ago           Running             storage-provisioner       0                   c88fbf538e039       storage-provisioner
	eb092a183ee87       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             10 minutes ago           Running             coredns                   0                   9c525627d0e81       coredns-7c65d6cfc9-gd2h2
	fa7e6a02565d0       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                             10 minutes ago           Running             kube-proxy                0                   24f13f826689a       kube-proxy-vj2cl
	ee201c0719a52       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             10 minutes ago           Running             etcd                      0                   27e2544550560       etcd-addons-364775
	941f64fde84f0       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                             10 minutes ago           Running             kube-apiserver            0                   4602faee6ddea       kube-apiserver-addons-364775
	7d21d052488b3       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                             10 minutes ago           Running             kube-scheduler            0                   6bb1edfce2faf       kube-scheduler-addons-364775
	02d48ea4cc0d3       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                             10 minutes ago           Running             kube-controller-manager   0                   81dc5c65d7d85       kube-controller-manager-addons-364775
	
	
	==> coredns [eb092a183ee879a4948c4ef6efe4289548da1f2948fe91a1b2ef6ac8db5a62a2] <==
	[INFO] 127.0.0.1:50766 - 8775 "HINFO IN 3569014972345960485.1862048380583480753. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014022704s
	[INFO] 10.244.0.7:39054 - 16199 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 97 false 1232" NXDOMAIN qr,aa,rd 179 0.000318748s
	[INFO] 10.244.0.7:39054 - 31015 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 97 false 1232" NXDOMAIN qr,aa,rd 179 0.000093499s
	[INFO] 10.244.0.7:39054 - 24769 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000150069s
	[INFO] 10.244.0.7:39054 - 3407 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000172928s
	[INFO] 10.244.0.7:39054 - 53162 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000097552s
	[INFO] 10.244.0.7:39054 - 32704 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.00006962s
	[INFO] 10.244.0.7:39054 - 46163 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000114352s
	[INFO] 10.244.0.7:39054 - 45726 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000079808s
	[INFO] 10.244.0.7:55575 - 58922 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000122896s
	[INFO] 10.244.0.7:55575 - 58635 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000056553s
	[INFO] 10.244.0.7:34701 - 2635 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000052467s
	[INFO] 10.244.0.7:34701 - 2443 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000088571s
	[INFO] 10.244.0.7:53770 - 29791 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000083808s
	[INFO] 10.244.0.7:53770 - 29618 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000043278s
	[INFO] 10.244.0.7:51278 - 32481 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000061908s
	[INFO] 10.244.0.7:51278 - 32630 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00010053s
	[INFO] 10.244.0.21:39399 - 32421 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000626795s
	[INFO] 10.244.0.21:51047 - 35722 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000173759s
	[INFO] 10.244.0.21:59883 - 41503 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000105903s
	[INFO] 10.244.0.21:43597 - 17694 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000060022s
	[INFO] 10.244.0.21:58239 - 38522 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000106047s
	[INFO] 10.244.0.21:38772 - 6309 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000376339s
	[INFO] 10.244.0.21:41727 - 3859 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001416366s
	[INFO] 10.244.0.21:49529 - 27922 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.001747962s
	
	
	==> describe nodes <==
	Name:               addons-364775
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-364775
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=addons-364775
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_27T00_16_25_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-364775
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 00:16:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-364775
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 00:27:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 00:26:28 +0000   Fri, 27 Sep 2024 00:16:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 00:26:28 +0000   Fri, 27 Sep 2024 00:16:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 00:26:28 +0000   Fri, 27 Sep 2024 00:16:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 00:26:28 +0000   Fri, 27 Sep 2024 00:16:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.169
	  Hostname:    addons-364775
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 9c20e89c92c64839b60418c495bf40ff
	  System UUID:                9c20e89c-92c6-4839-b604-18c495bf40ff
	  Boot ID:                    de047c3a-8269-46a9-afd9-1cfad2a2ee3d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (16 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m16s
	  default                     cloud-spanner-emulator-5b584cc74-wtrdk      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  default                     test-local-path                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  gcp-auth                    gcp-auth-89d5ffd79-xndcj                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-lwpdj    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         10m
	  kube-system                 coredns-7c65d6cfc9-gd2h2                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     10m
	  kube-system                 etcd-addons-364775                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         10m
	  kube-system                 kube-apiserver-addons-364775                250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-addons-364775       200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-vj2cl                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-addons-364775                100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 metrics-server-84c5f94fbc-h74zz             100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  local-path-storage          local-path-provisioner-86d989889c-97vtb     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node addons-364775 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node addons-364775 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node addons-364775 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                kubelet          Node addons-364775 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                kubelet          Node addons-364775 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                kubelet          Node addons-364775 status is now: NodeHasSufficientPID
	  Normal  NodeReady                10m                kubelet          Node addons-364775 status is now: NodeReady
	  Normal  RegisteredNode           10m                node-controller  Node addons-364775 event: Registered Node addons-364775 in Controller
	
	
	==> dmesg <==
	[  +5.493475] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.264050] systemd-fstab-generator[1336]: Ignoring "noauto" option for root device
	[  +4.760640] kauditd_printk_skb: 104 callbacks suppressed
	[  +5.471676] kauditd_printk_skb: 137 callbacks suppressed
	[ +11.036796] kauditd_printk_skb: 79 callbacks suppressed
	[Sep27 00:17] kauditd_printk_skb: 2 callbacks suppressed
	[  +9.888391] kauditd_printk_skb: 2 callbacks suppressed
	[  +8.910967] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.507302] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.437195] kauditd_printk_skb: 55 callbacks suppressed
	[  +5.152093] kauditd_printk_skb: 43 callbacks suppressed
	[ +10.173097] kauditd_printk_skb: 6 callbacks suppressed
	[Sep27 00:18] kauditd_printk_skb: 55 callbacks suppressed
	[Sep27 00:19] kauditd_printk_skb: 28 callbacks suppressed
	[Sep27 00:20] kauditd_printk_skb: 28 callbacks suppressed
	[Sep27 00:23] kauditd_printk_skb: 28 callbacks suppressed
	[Sep27 00:26] kauditd_printk_skb: 28 callbacks suppressed
	[ +10.244894] kauditd_printk_skb: 6 callbacks suppressed
	[  +7.025310] kauditd_printk_skb: 23 callbacks suppressed
	[  +8.494292] kauditd_printk_skb: 7 callbacks suppressed
	[ +24.636506] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.016348] kauditd_printk_skb: 38 callbacks suppressed
	[Sep27 00:27] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.266598] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.065698] kauditd_printk_skb: 39 callbacks suppressed
	
	
	==> etcd [ee201c0719a52c59263614ccb1b06b1ed92df1c3e374d2bec21766eef5129754] <==
	{"level":"warn","ts":"2024-09-27T00:26:14.632664Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-27T00:26:14.223889Z","time spent":"408.764445ms","remote":"127.0.0.1:41964","response type":"/etcdserverpb.KV/Range","request count":0,"request size":47,"response count":1,"response size":1458,"request content":"key:\"/registry/persistentvolumeclaims/default/hpvc\" "}
	{"level":"warn","ts":"2024-09-27T00:26:14.632888Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"376.794288ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1114"}
	{"level":"info","ts":"2024-09-27T00:26:14.632907Z","caller":"traceutil/trace.go:171","msg":"trace[446996669] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1984; }","duration":"376.812569ms","start":"2024-09-27T00:26:14.256088Z","end":"2024-09-27T00:26:14.632900Z","steps":["trace[446996669] 'range keys from in-memory index tree'  (duration: 376.72393ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T00:26:14.632926Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-27T00:26:14.256037Z","time spent":"376.885356ms","remote":"127.0.0.1:41978","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1138,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"warn","ts":"2024-09-27T00:26:14.633120Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"307.527313ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" ","response":"range_response_count:1 size:183"}
	{"level":"info","ts":"2024-09-27T00:26:14.633138Z","caller":"traceutil/trace.go:171","msg":"trace[1605129029] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:1; response_revision:1984; }","duration":"307.54545ms","start":"2024-09-27T00:26:14.325586Z","end":"2024-09-27T00:26:14.633132Z","steps":["trace[1605129029] 'range keys from in-memory index tree'  (duration: 307.476662ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T00:26:14.633154Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-27T00:26:14.325554Z","time spent":"307.597008ms","remote":"127.0.0.1:42020","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":1,"response size":207,"request content":"key:\"/registry/serviceaccounts/default/default\" "}
	{"level":"warn","ts":"2024-09-27T00:26:14.633233Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"272.753441ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-27T00:26:14.633249Z","caller":"traceutil/trace.go:171","msg":"trace[617859409] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1984; }","duration":"272.780609ms","start":"2024-09-27T00:26:14.360462Z","end":"2024-09-27T00:26:14.633243Z","steps":["trace[617859409] 'range keys from in-memory index tree'  (duration: 272.748633ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T00:26:14.633315Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"236.17705ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiregistration.k8s.io/apiservices/\" range_end:\"/registry/apiregistration.k8s.io/apiservices0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-09-27T00:26:14.633328Z","caller":"traceutil/trace.go:171","msg":"trace[942278298] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices/; range_end:/registry/apiregistration.k8s.io/apiservices0; response_count:0; response_revision:1984; }","duration":"236.191523ms","start":"2024-09-27T00:26:14.397131Z","end":"2024-09-27T00:26:14.633323Z","steps":["trace[942278298] 'count revisions from in-memory index tree'  (duration: 236.13798ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T00:26:20.180521Z","caller":"traceutil/trace.go:171","msg":"trace[1946292725] linearizableReadLoop","detail":"{readStateIndex:2155; appliedIndex:2154; }","duration":"169.940839ms","start":"2024-09-27T00:26:20.010565Z","end":"2024-09-27T00:26:20.180506Z","steps":["trace[1946292725] 'read index received'  (duration: 168.170478ms)","trace[1946292725] 'applied index is now lower than readState.Index'  (duration: 1.769835ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-27T00:26:20.180632Z","caller":"traceutil/trace.go:171","msg":"trace[119175638] transaction","detail":"{read_only:false; response_revision:2010; number_of_response:1; }","duration":"185.041203ms","start":"2024-09-27T00:26:19.995581Z","end":"2024-09-27T00:26:20.180622Z","steps":["trace[119175638] 'process raft request'  (duration: 183.199927ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T00:26:20.180763Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"170.179973ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/\" range_end:\"/registry/serviceaccounts0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-09-27T00:26:20.180783Z","caller":"traceutil/trace.go:171","msg":"trace[929737590] range","detail":"{range_begin:/registry/serviceaccounts/; range_end:/registry/serviceaccounts0; response_count:0; response_revision:2010; }","duration":"170.214606ms","start":"2024-09-27T00:26:20.010561Z","end":"2024-09-27T00:26:20.180775Z","steps":["trace[929737590] 'agreement among raft nodes before linearized reading'  (duration: 170.14061ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T00:26:20.180846Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.335773ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-27T00:26:20.180885Z","caller":"traceutil/trace.go:171","msg":"trace[1760975757] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:2010; }","duration":"102.380651ms","start":"2024-09-27T00:26:20.078497Z","end":"2024-09-27T00:26:20.180878Z","steps":["trace[1760975757] 'agreement among raft nodes before linearized reading'  (duration: 102.322144ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T00:26:20.844201Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1536}
	{"level":"info","ts":"2024-09-27T00:26:20.885577Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1536,"took":"40.935931ms","hash":3628088381,"current-db-size-bytes":6135808,"current-db-size":"6.1 MB","current-db-size-in-use-bytes":3530752,"current-db-size-in-use":"3.5 MB"}
	{"level":"info","ts":"2024-09-27T00:26:20.885633Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3628088381,"revision":1536,"compact-revision":-1}
	{"level":"info","ts":"2024-09-27T00:26:47.157301Z","caller":"traceutil/trace.go:171","msg":"trace[683330143] linearizableReadLoop","detail":"{readStateIndex:2316; appliedIndex:2315; }","duration":"248.104512ms","start":"2024-09-27T00:26:46.909171Z","end":"2024-09-27T00:26:47.157276Z","steps":["trace[683330143] 'read index received'  (duration: 247.914744ms)","trace[683330143] 'applied index is now lower than readState.Index'  (duration: 188.919µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-27T00:26:47.157488Z","caller":"traceutil/trace.go:171","msg":"trace[1122576871] transaction","detail":"{read_only:false; response_revision:2162; number_of_response:1; }","duration":"349.484715ms","start":"2024-09-27T00:26:46.807988Z","end":"2024-09-27T00:26:47.157473Z","steps":["trace[1122576871] 'process raft request'  (duration: 349.152553ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T00:26:47.158481Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-27T00:26:46.807932Z","time spent":"350.369978ms","remote":"127.0.0.1:41978","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:2157 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-09-27T00:26:47.157668Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"248.429269ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-27T00:26:47.158706Z","caller":"traceutil/trace.go:171","msg":"trace[1301308464] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:2162; }","duration":"249.522193ms","start":"2024-09-27T00:26:46.909168Z","end":"2024-09-27T00:26:47.158690Z","steps":["trace[1301308464] 'agreement among raft nodes before linearized reading'  (duration: 248.407046ms)"],"step_count":1}
	
	
	==> gcp-auth [44f5c0760c47e0ae8b4f8bae5ad90bd953ca8d8938486256754d700af225e8fe] <==
	2024/09/27 00:17:57 GCP Auth Webhook started!
	2024/09/27 00:18:00 Ready to marshal response ...
	2024/09/27 00:18:00 Ready to write response ...
	2024/09/27 00:18:01 Ready to marshal response ...
	2024/09/27 00:18:01 Ready to write response ...
	2024/09/27 00:18:01 Ready to marshal response ...
	2024/09/27 00:18:01 Ready to write response ...
	2024/09/27 00:26:04 Ready to marshal response ...
	2024/09/27 00:26:04 Ready to write response ...
	2024/09/27 00:26:04 Ready to marshal response ...
	2024/09/27 00:26:04 Ready to write response ...
	2024/09/27 00:26:04 Ready to marshal response ...
	2024/09/27 00:26:04 Ready to write response ...
	2024/09/27 00:26:14 Ready to marshal response ...
	2024/09/27 00:26:14 Ready to write response ...
	2024/09/27 00:26:14 Ready to marshal response ...
	2024/09/27 00:26:14 Ready to write response ...
	2024/09/27 00:26:49 Ready to marshal response ...
	2024/09/27 00:26:49 Ready to write response ...
	2024/09/27 00:26:54 Ready to marshal response ...
	2024/09/27 00:26:54 Ready to write response ...
	2024/09/27 00:27:06 Ready to marshal response ...
	2024/09/27 00:27:06 Ready to write response ...
	2024/09/27 00:27:06 Ready to marshal response ...
	2024/09/27 00:27:06 Ready to write response ...
	
	
	==> kernel <==
	 00:27:17 up 11 min,  0 users,  load average: 0.82, 0.68, 0.47
	Linux addons-364775 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [941f64fde84f05119ee38d1a5464cd871c06b706b54fa1fe284535e8214009c8] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0927 00:17:45.543687       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.124.183:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.124.183:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.124.183:443: connect: connection refused" logger="UnhandledError"
	E0927 00:17:45.547748       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.124.183:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.124.183:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.124.183:443: connect: connection refused" logger="UnhandledError"
	E0927 00:17:45.559080       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.124.183:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.124.183:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.124.183:443: connect: connection refused" logger="UnhandledError"
	I0927 00:17:45.702853       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0927 00:26:04.624102       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.101.136.26"}
	I0927 00:26:29.141449       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0927 00:26:33.135917       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0927 00:26:34.161474       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0927 00:26:54.854249       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0927 00:26:55.039695       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.233.173"}
	I0927 00:27:06.182818       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0927 00:27:06.185454       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0927 00:27:06.203612       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0927 00:27:06.203649       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0927 00:27:06.226306       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0927 00:27:06.226388       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0927 00:27:06.238166       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0927 00:27:06.238291       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0927 00:27:06.268140       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0927 00:27:06.268284       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0927 00:27:07.236715       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0927 00:27:07.269274       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0927 00:27:07.372522       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [02d48ea4cc0d31074e83240e2912b935fa3a7e4030e676e56a97fdf651652bee] <==
	I0927 00:27:06.299414       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/snapshot-controller-56fcc65765" duration="6.573µs"
	E0927 00:27:07.238549       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0927 00:27:07.271216       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0927 00:27:07.374402       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:27:08.313446       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:27:08.313572       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:27:08.577675       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:27:08.577722       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:27:08.796884       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:27:08.797092       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:27:09.614452       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:27:09.614571       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:27:10.332722       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:27:10.332793       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:27:10.812158       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:27:10.812264       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:27:10.931908       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:27:10.932147       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:27:14.991214       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:27:14.991256       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:27:15.256065       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:27:15.256132       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:27:15.290714       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:27:15.290769       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0927 00:27:15.490672       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="3.539µs"
	
	
	==> kube-proxy [fa7e6a02565d07c2042b8e4832d33799151a9b767813a6f56f5ad935f6f92586] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0927 00:16:31.768151       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0927 00:16:31.776690       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.169"]
	E0927 00:16:31.776745       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0927 00:16:31.867724       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0927 00:16:31.867754       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0927 00:16:31.867779       1 server_linux.go:169] "Using iptables Proxier"
	I0927 00:16:31.872020       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0927 00:16:31.872322       1 server.go:483] "Version info" version="v1.31.1"
	I0927 00:16:31.872352       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 00:16:31.876064       1 config.go:328] "Starting node config controller"
	I0927 00:16:31.876094       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0927 00:16:31.876473       1 config.go:199] "Starting service config controller"
	I0927 00:16:31.876483       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0927 00:16:31.876500       1 config.go:105] "Starting endpoint slice config controller"
	I0927 00:16:31.876504       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0927 00:16:31.977065       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0927 00:16:31.977110       1 shared_informer.go:320] Caches are synced for service config
	I0927 00:16:31.977424       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [7d21d052488b358d50d3915ffdf2b08eee589a26c15c59d3f1480ede3811db54] <==
	W0927 00:16:22.386330       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0927 00:16:22.386360       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 00:16:22.386430       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0927 00:16:22.386640       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0927 00:16:22.386867       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0927 00:16:22.388785       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:16:22.389761       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0927 00:16:22.394000       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:16:23.238556       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0927 00:16:23.238927       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0927 00:16:23.244304       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0927 00:16:23.244370       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0927 00:16:23.281738       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0927 00:16:23.282013       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:16:23.416794       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0927 00:16:23.417002       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:16:23.467991       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0927 00:16:23.468110       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:16:23.603228       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0927 00:16:23.603279       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0927 00:16:23.603337       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0927 00:16:23.603364       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 00:16:23.619906       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0927 00:16:23.619937       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0927 00:16:26.272381       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 27 00:27:12 addons-364775 kubelet[1215]: I0927 00:27:12.745740    1215 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32d63f67722c094f97be96b7d6d654e020830df267fa24c7dba73013961757b0"
	Sep 27 00:27:12 addons-364775 kubelet[1215]: E0927 00:27:12.823753    1215 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="7b7dbf55-2e42-4482-a77e-05baf4945f79"
	Sep 27 00:27:12 addons-364775 kubelet[1215]: I0927 00:27:12.826881    1215 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db7e9f4c-1017-493b-9e63-01ff377e7cfb" path="/var/lib/kubelet/pods/db7e9f4c-1017-493b-9e63-01ff377e7cfb/volumes"
	Sep 27 00:27:12 addons-364775 kubelet[1215]: E0927 00:27:12.936579    1215 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="db7e9f4c-1017-493b-9e63-01ff377e7cfb" containerName="helper-pod"
	Sep 27 00:27:12 addons-364775 kubelet[1215]: I0927 00:27:12.937126    1215 memory_manager.go:354] "RemoveStaleState removing state" podUID="db7e9f4c-1017-493b-9e63-01ff377e7cfb" containerName="helper-pod"
	Sep 27 00:27:13 addons-364775 kubelet[1215]: I0927 00:27:13.019876    1215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/19934d44-6957-4e22-a4ed-554922813c1b-gcp-creds\") pod \"test-local-path\" (UID: \"19934d44-6957-4e22-a4ed-554922813c1b\") " pod="default/test-local-path"
	Sep 27 00:27:13 addons-364775 kubelet[1215]: I0927 00:27:13.019938    1215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-eaf13455-05db-4681-afdd-103662b6f350\" (UniqueName: \"kubernetes.io/host-path/19934d44-6957-4e22-a4ed-554922813c1b-pvc-eaf13455-05db-4681-afdd-103662b6f350\") pod \"test-local-path\" (UID: \"19934d44-6957-4e22-a4ed-554922813c1b\") " pod="default/test-local-path"
	Sep 27 00:27:13 addons-364775 kubelet[1215]: I0927 00:27:13.020008    1215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xlkv\" (UniqueName: \"kubernetes.io/projected/19934d44-6957-4e22-a4ed-554922813c1b-kube-api-access-7xlkv\") pod \"test-local-path\" (UID: \"19934d44-6957-4e22-a4ed-554922813c1b\") " pod="default/test-local-path"
	Sep 27 00:27:15 addons-364775 kubelet[1215]: E0927 00:27:15.077508    1215 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727396835076831113,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:537693,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:27:15 addons-364775 kubelet[1215]: E0927 00:27:15.077538    1215 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727396835076831113,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:537693,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:27:15 addons-364775 kubelet[1215]: I0927 00:27:15.138638    1215 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzqcz\" (UniqueName: \"kubernetes.io/projected/01d30ed2-4f27-4b9c-8c0d-e9a2b627a7ea-kube-api-access-rzqcz\") pod \"01d30ed2-4f27-4b9c-8c0d-e9a2b627a7ea\" (UID: \"01d30ed2-4f27-4b9c-8c0d-e9a2b627a7ea\") "
	Sep 27 00:27:15 addons-364775 kubelet[1215]: I0927 00:27:15.138697    1215 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/01d30ed2-4f27-4b9c-8c0d-e9a2b627a7ea-gcp-creds\") pod \"01d30ed2-4f27-4b9c-8c0d-e9a2b627a7ea\" (UID: \"01d30ed2-4f27-4b9c-8c0d-e9a2b627a7ea\") "
	Sep 27 00:27:15 addons-364775 kubelet[1215]: I0927 00:27:15.138776    1215 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01d30ed2-4f27-4b9c-8c0d-e9a2b627a7ea-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "01d30ed2-4f27-4b9c-8c0d-e9a2b627a7ea" (UID: "01d30ed2-4f27-4b9c-8c0d-e9a2b627a7ea"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 27 00:27:15 addons-364775 kubelet[1215]: I0927 00:27:15.143084    1215 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01d30ed2-4f27-4b9c-8c0d-e9a2b627a7ea-kube-api-access-rzqcz" (OuterVolumeSpecName: "kube-api-access-rzqcz") pod "01d30ed2-4f27-4b9c-8c0d-e9a2b627a7ea" (UID: "01d30ed2-4f27-4b9c-8c0d-e9a2b627a7ea"). InnerVolumeSpecName "kube-api-access-rzqcz". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 27 00:27:15 addons-364775 kubelet[1215]: I0927 00:27:15.239907    1215 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-rzqcz\" (UniqueName: \"kubernetes.io/projected/01d30ed2-4f27-4b9c-8c0d-e9a2b627a7ea-kube-api-access-rzqcz\") on node \"addons-364775\" DevicePath \"\""
	Sep 27 00:27:15 addons-364775 kubelet[1215]: I0927 00:27:15.239936    1215 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/01d30ed2-4f27-4b9c-8c0d-e9a2b627a7ea-gcp-creds\") on node \"addons-364775\" DevicePath \"\""
	Sep 27 00:27:16 addons-364775 kubelet[1215]: I0927 00:27:16.149189    1215 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsxwb\" (UniqueName: \"kubernetes.io/projected/5080c804-a6a8-4239-bd3f-a89d8f114f0c-kube-api-access-hsxwb\") pod \"5080c804-a6a8-4239-bd3f-a89d8f114f0c\" (UID: \"5080c804-a6a8-4239-bd3f-a89d8f114f0c\") "
	Sep 27 00:27:16 addons-364775 kubelet[1215]: I0927 00:27:16.149507    1215 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nb4tx\" (UniqueName: \"kubernetes.io/projected/652ee744-ff06-40fe-a66f-aabff5476e31-kube-api-access-nb4tx\") pod \"652ee744-ff06-40fe-a66f-aabff5476e31\" (UID: \"652ee744-ff06-40fe-a66f-aabff5476e31\") "
	Sep 27 00:27:16 addons-364775 kubelet[1215]: I0927 00:27:16.157903    1215 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5080c804-a6a8-4239-bd3f-a89d8f114f0c-kube-api-access-hsxwb" (OuterVolumeSpecName: "kube-api-access-hsxwb") pod "5080c804-a6a8-4239-bd3f-a89d8f114f0c" (UID: "5080c804-a6a8-4239-bd3f-a89d8f114f0c"). InnerVolumeSpecName "kube-api-access-hsxwb". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 27 00:27:16 addons-364775 kubelet[1215]: I0927 00:27:16.166204    1215 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/652ee744-ff06-40fe-a66f-aabff5476e31-kube-api-access-nb4tx" (OuterVolumeSpecName: "kube-api-access-nb4tx") pod "652ee744-ff06-40fe-a66f-aabff5476e31" (UID: "652ee744-ff06-40fe-a66f-aabff5476e31"). InnerVolumeSpecName "kube-api-access-nb4tx". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 27 00:27:16 addons-364775 kubelet[1215]: I0927 00:27:16.250178    1215 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-hsxwb\" (UniqueName: \"kubernetes.io/projected/5080c804-a6a8-4239-bd3f-a89d8f114f0c-kube-api-access-hsxwb\") on node \"addons-364775\" DevicePath \"\""
	Sep 27 00:27:16 addons-364775 kubelet[1215]: I0927 00:27:16.250222    1215 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-nb4tx\" (UniqueName: \"kubernetes.io/projected/652ee744-ff06-40fe-a66f-aabff5476e31-kube-api-access-nb4tx\") on node \"addons-364775\" DevicePath \"\""
	Sep 27 00:27:16 addons-364775 kubelet[1215]: I0927 00:27:16.796672    1215 scope.go:117] "RemoveContainer" containerID="d99658b6248cb6d7adbe30adea145daee8d51d928679ddfe00ea21df214c6a9b"
	Sep 27 00:27:16 addons-364775 kubelet[1215]: I0927 00:27:16.842303    1215 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01d30ed2-4f27-4b9c-8c0d-e9a2b627a7ea" path="/var/lib/kubelet/pods/01d30ed2-4f27-4b9c-8c0d-e9a2b627a7ea/volumes"
	Sep 27 00:27:16 addons-364775 kubelet[1215]: I0927 00:27:16.872871    1215 scope.go:117] "RemoveContainer" containerID="f275dd687cff30b8740e6f69dc6187675d046d74150405c9f82479ca5df3e9ed"
	
	
	==> storage-provisioner [2392c10311ecba4ad854e936976dfeca45567492e61de8604f1324981400707e] <==
	I0927 00:16:37.916328       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0927 00:16:38.076551       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0927 00:16:38.076614       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0927 00:16:38.159162       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0927 00:16:38.159377       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-364775_daea0619-9535-4149-a165-9a8f7ab27789!
	I0927 00:16:38.160542       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"88a6a7b1-44d1-4b8a-9c87-da3ce2ecdc13", APIVersion:"v1", ResourceVersion:"707", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-364775_daea0619-9535-4149-a165-9a8f7ab27789 became leader
	I0927 00:16:38.760305       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-364775_daea0619-9535-4149-a165-9a8f7ab27789!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-364775 -n addons-364775
helpers_test.go:261: (dbg) Run:  kubectl --context addons-364775 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox test-local-path ingress-nginx-admission-create-s9h7h ingress-nginx-admission-patch-ljq5t
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-364775 describe pod busybox test-local-path ingress-nginx-admission-create-s9h7h ingress-nginx-admission-patch-ljq5t
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-364775 describe pod busybox test-local-path ingress-nginx-admission-create-s9h7h ingress-nginx-admission-patch-ljq5t: exit status 1 (107.900462ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-364775/192.168.39.169
	Start Time:       Fri, 27 Sep 2024 00:18:01 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.22
	IPs:
	  IP:  10.244.0.22
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wxclv (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-wxclv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m17s                  default-scheduler  Successfully assigned default/busybox to addons-364775
	  Normal   Pulling    7m47s (x4 over 9m16s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m47s (x4 over 9m16s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     7m47s (x4 over 9m16s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m36s (x6 over 9m16s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m6s (x21 over 9m16s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-364775/192.168.39.169
	Start Time:       Fri, 27 Sep 2024 00:27:12 +0000
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.29
	IPs:
	  IP:  10.244.0.29
	Containers:
	  busybox:
	    Container ID:  cri-o://2f56d28266df1f5faa6eb2355e55623fc62980f15fa25d3642e685605e783225
	    Image:         busybox:stable
	    Image ID:      6fd955f66c231c1a946653170d096a28ac2b2052a02080c0b84ec082a07f7d12
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Fri, 27 Sep 2024 00:27:16 +0000
	      Finished:     Fri, 27 Sep 2024 00:27:16 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7xlkv (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-7xlkv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  6s    default-scheduler  Successfully assigned default/test-local-path to addons-364775
	  Normal  Pulling    5s    kubelet            Pulling image "busybox:stable"
	  Normal  Pulled     2s    kubelet            Successfully pulled image "busybox:stable" in 2.669s (2.669s including waiting). Image size: 4507152 bytes.
	  Normal  Created    2s    kubelet            Created container busybox
	  Normal  Started    2s    kubelet            Started container busybox

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-s9h7h" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-ljq5t" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-364775 describe pod busybox test-local-path ingress-nginx-admission-create-s9h7h ingress-nginx-admission-patch-ljq5t: exit status 1
--- FAIL: TestAddons/parallel/Registry (74.23s)

x
+
TestAddons/parallel/Ingress (149.54s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-364775 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-364775 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-364775 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [79a9bd72-f93d-4276-b274-754e05f94f32] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [79a9bd72-f93d-4276-b274-754e05f94f32] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004194677s
I0927 00:27:04.081666   22138 kapi.go:150] Service nginx in namespace default found.
addons_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p addons-364775 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:260: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-364775 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m8.744826312s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:276: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:284: (dbg) Run:  kubectl --context addons-364775 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p addons-364775 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.39.169
addons_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p addons-364775 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:304: (dbg) Done: out/minikube-linux-amd64 -p addons-364775 addons disable ingress-dns --alsologtostderr -v=1: (1.036265285s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-amd64 -p addons-364775 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-linux-amd64 -p addons-364775 addons disable ingress --alsologtostderr -v=1: (7.704564121s)
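Note: the step that failed above is the ssh'd curl (exit status 28 from curl means the request timed out; ssh surfaces it as "Process exited with status 28"). The snippet below is a minimal, self-contained Go sketch for re-running that exact check outside the test harness; it assumes the out/minikube-linux-amd64 binary and the addons-364775 profile from this run are still available (both names are copied from the log above, and nothing else is implied about the test code itself).

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// Bound the whole check; in the report the step gave up after ~2m8s.
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
		defer cancel()

		// Same command the test ran: curl the ingress controller on the node's
		// loopback, routing on the Host header from the test Ingress rule.
		cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64", "-p", "addons-364775",
			"ssh", "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s\n", out)
		if err != nil {
			// curl exits 28 on timeout; ssh propagates it as "Process exited with status 28".
			fmt.Printf("check failed: %v\n", err)
		}
	}

If the Ingress were routing correctly, the output would be the default index page of the nginx pod behind the Ingress rather than a timeout.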
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-364775 -n addons-364775
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-364775 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-364775 logs -n 25: (1.249837481s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 27 Sep 24 00:15 UTC | 27 Sep 24 00:15 UTC |
	| delete  | -p download-only-528649                                                                     | download-only-528649 | jenkins | v1.34.0 | 27 Sep 24 00:15 UTC | 27 Sep 24 00:15 UTC |
	| delete  | -p download-only-603097                                                                     | download-only-603097 | jenkins | v1.34.0 | 27 Sep 24 00:15 UTC | 27 Sep 24 00:15 UTC |
	| delete  | -p download-only-528649                                                                     | download-only-528649 | jenkins | v1.34.0 | 27 Sep 24 00:15 UTC | 27 Sep 24 00:15 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-381196 | jenkins | v1.34.0 | 27 Sep 24 00:15 UTC |                     |
	|         | binary-mirror-381196                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:32921                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-381196                                                                     | binary-mirror-381196 | jenkins | v1.34.0 | 27 Sep 24 00:15 UTC | 27 Sep 24 00:15 UTC |
	| addons  | enable dashboard -p                                                                         | addons-364775        | jenkins | v1.34.0 | 27 Sep 24 00:15 UTC |                     |
	|         | addons-364775                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-364775        | jenkins | v1.34.0 | 27 Sep 24 00:15 UTC |                     |
	|         | addons-364775                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-364775 --wait=true                                                                | addons-364775        | jenkins | v1.34.0 | 27 Sep 24 00:15 UTC | 27 Sep 24 00:18 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-364775        | jenkins | v1.34.0 | 27 Sep 24 00:26 UTC | 27 Sep 24 00:26 UTC |
	|         | -p addons-364775                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-364775 addons disable                                                                | addons-364775        | jenkins | v1.34.0 | 27 Sep 24 00:26 UTC | 27 Sep 24 00:26 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-364775        | jenkins | v1.34.0 | 27 Sep 24 00:26 UTC | 27 Sep 24 00:26 UTC |
	|         | addons-364775                                                                               |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-364775        | jenkins | v1.34.0 | 27 Sep 24 00:26 UTC | 27 Sep 24 00:26 UTC |
	|         | -p addons-364775                                                                            |                      |         |         |                     |                     |
	| addons  | addons-364775 addons disable                                                                | addons-364775        | jenkins | v1.34.0 | 27 Sep 24 00:26 UTC | 27 Sep 24 00:26 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-364775 addons                                                                        | addons-364775        | jenkins | v1.34.0 | 27 Sep 24 00:26 UTC | 27 Sep 24 00:27 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-364775 ssh curl -s                                                                   | addons-364775        | jenkins | v1.34.0 | 27 Sep 24 00:27 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-364775 addons                                                                        | addons-364775        | jenkins | v1.34.0 | 27 Sep 24 00:27 UTC | 27 Sep 24 00:27 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-364775 ip                                                                            | addons-364775        | jenkins | v1.34.0 | 27 Sep 24 00:27 UTC | 27 Sep 24 00:27 UTC |
	| addons  | addons-364775 addons disable                                                                | addons-364775        | jenkins | v1.34.0 | 27 Sep 24 00:27 UTC | 27 Sep 24 00:27 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-364775 ssh cat                                                                       | addons-364775        | jenkins | v1.34.0 | 27 Sep 24 00:27 UTC | 27 Sep 24 00:27 UTC |
	|         | /opt/local-path-provisioner/pvc-eaf13455-05db-4681-afdd-103662b6f350_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-364775 addons disable                                                                | addons-364775        | jenkins | v1.34.0 | 27 Sep 24 00:27 UTC | 27 Sep 24 00:28 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-364775        | jenkins | v1.34.0 | 27 Sep 24 00:27 UTC | 27 Sep 24 00:27 UTC |
	|         | addons-364775                                                                               |                      |         |         |                     |                     |
	| ip      | addons-364775 ip                                                                            | addons-364775        | jenkins | v1.34.0 | 27 Sep 24 00:29 UTC | 27 Sep 24 00:29 UTC |
	| addons  | addons-364775 addons disable                                                                | addons-364775        | jenkins | v1.34.0 | 27 Sep 24 00:29 UTC | 27 Sep 24 00:29 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-364775 addons disable                                                                | addons-364775        | jenkins | v1.34.0 | 27 Sep 24 00:29 UTC | 27 Sep 24 00:29 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 00:15:44
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 00:15:44.537636   22923 out.go:345] Setting OutFile to fd 1 ...
	I0927 00:15:44.537740   22923 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:15:44.537749   22923 out.go:358] Setting ErrFile to fd 2...
	I0927 00:15:44.537753   22923 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:15:44.537907   22923 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-14935/.minikube/bin
	I0927 00:15:44.538451   22923 out.go:352] Setting JSON to false
	I0927 00:15:44.539227   22923 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3490,"bootTime":1727392655,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 00:15:44.539333   22923 start.go:139] virtualization: kvm guest
	I0927 00:15:44.541421   22923 out.go:177] * [addons-364775] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0927 00:15:44.542612   22923 out.go:177]   - MINIKUBE_LOCATION=19711
	I0927 00:15:44.542608   22923 notify.go:220] Checking for updates...
	I0927 00:15:44.544937   22923 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 00:15:44.546076   22923 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 00:15:44.547130   22923 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-14935/.minikube
	I0927 00:15:44.548170   22923 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0927 00:15:44.549152   22923 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 00:15:44.550537   22923 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 00:15:44.580671   22923 out.go:177] * Using the kvm2 driver based on user configuration
	I0927 00:15:44.581804   22923 start.go:297] selected driver: kvm2
	I0927 00:15:44.581814   22923 start.go:901] validating driver "kvm2" against <nil>
	I0927 00:15:44.581825   22923 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 00:15:44.582527   22923 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 00:15:44.582595   22923 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19711-14935/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0927 00:15:44.596734   22923 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0927 00:15:44.596791   22923 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 00:15:44.597022   22923 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 00:15:44.597049   22923 cni.go:84] Creating CNI manager for ""
	I0927 00:15:44.597085   22923 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 00:15:44.597092   22923 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0927 00:15:44.597139   22923 start.go:340] cluster config:
	{Name:addons-364775 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-364775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 00:15:44.597233   22923 iso.go:125] acquiring lock: {Name:mkc202a14fbe20838e31e7efc444c4f65351f9ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 00:15:44.598769   22923 out.go:177] * Starting "addons-364775" primary control-plane node in "addons-364775" cluster
	I0927 00:15:44.599805   22923 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 00:15:44.599844   22923 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0927 00:15:44.599854   22923 cache.go:56] Caching tarball of preloaded images
	I0927 00:15:44.599915   22923 preload.go:172] Found /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0927 00:15:44.599926   22923 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0927 00:15:44.600208   22923 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/config.json ...
	I0927 00:15:44.600224   22923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/config.json: {Name:mk7d83f0775700fae5c444ee1119498cda71b7ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:15:44.600357   22923 start.go:360] acquireMachinesLock for addons-364775: {Name:mkef866a3f2eb57063758ef06140cca3a2e40b18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 00:15:44.600399   22923 start.go:364] duration metric: took 29.224µs to acquireMachinesLock for "addons-364775"
	I0927 00:15:44.600416   22923 start.go:93] Provisioning new machine with config: &{Name:addons-364775 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-364775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 00:15:44.600461   22923 start.go:125] createHost starting for "" (driver="kvm2")
	I0927 00:15:44.602317   22923 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0927 00:15:44.602440   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:15:44.602479   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:15:44.616122   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33711
	I0927 00:15:44.616559   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:15:44.617071   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:15:44.617091   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:15:44.617371   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:15:44.617525   22923 main.go:141] libmachine: (addons-364775) Calling .GetMachineName
	I0927 00:15:44.617640   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:15:44.617745   22923 start.go:159] libmachine.API.Create for "addons-364775" (driver="kvm2")
	I0927 00:15:44.617772   22923 client.go:168] LocalClient.Create starting
	I0927 00:15:44.617816   22923 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem
	I0927 00:15:44.773115   22923 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem
	I0927 00:15:45.021396   22923 main.go:141] libmachine: Running pre-create checks...
	I0927 00:15:45.021422   22923 main.go:141] libmachine: (addons-364775) Calling .PreCreateCheck
	I0927 00:15:45.021848   22923 main.go:141] libmachine: (addons-364775) Calling .GetConfigRaw
	I0927 00:15:45.022228   22923 main.go:141] libmachine: Creating machine...
	I0927 00:15:45.022241   22923 main.go:141] libmachine: (addons-364775) Calling .Create
	I0927 00:15:45.022354   22923 main.go:141] libmachine: (addons-364775) Creating KVM machine...
	I0927 00:15:45.023487   22923 main.go:141] libmachine: (addons-364775) DBG | found existing default KVM network
	I0927 00:15:45.024131   22923 main.go:141] libmachine: (addons-364775) DBG | I0927 00:15:45.024009   22945 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002111f0}
	I0927 00:15:45.024171   22923 main.go:141] libmachine: (addons-364775) DBG | created network xml: 
	I0927 00:15:45.024195   22923 main.go:141] libmachine: (addons-364775) DBG | <network>
	I0927 00:15:45.024208   22923 main.go:141] libmachine: (addons-364775) DBG |   <name>mk-addons-364775</name>
	I0927 00:15:45.024226   22923 main.go:141] libmachine: (addons-364775) DBG |   <dns enable='no'/>
	I0927 00:15:45.024270   22923 main.go:141] libmachine: (addons-364775) DBG |   
	I0927 00:15:45.024294   22923 main.go:141] libmachine: (addons-364775) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0927 00:15:45.024303   22923 main.go:141] libmachine: (addons-364775) DBG |     <dhcp>
	I0927 00:15:45.024311   22923 main.go:141] libmachine: (addons-364775) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0927 00:15:45.024318   22923 main.go:141] libmachine: (addons-364775) DBG |     </dhcp>
	I0927 00:15:45.024325   22923 main.go:141] libmachine: (addons-364775) DBG |   </ip>
	I0927 00:15:45.024331   22923 main.go:141] libmachine: (addons-364775) DBG |   
	I0927 00:15:45.024337   22923 main.go:141] libmachine: (addons-364775) DBG | </network>
	I0927 00:15:45.024345   22923 main.go:141] libmachine: (addons-364775) DBG | 
	I0927 00:15:45.029333   22923 main.go:141] libmachine: (addons-364775) DBG | trying to create private KVM network mk-addons-364775 192.168.39.0/24...
	I0927 00:15:45.091813   22923 main.go:141] libmachine: (addons-364775) DBG | private KVM network mk-addons-364775 192.168.39.0/24 created
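The two lines above show the kvm2 driver creating the private libvirt network "mk-addons-364775" from the XML it just printed. As a rough sketch only (the driver talks to libvirt directly rather than shelling out), the equivalent step with the virsh CLI wrapped in Go would look like this, assuming the XML above was saved to mk-addons-364775.xml:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Define, start and autostart the private network from the XML dumped in the log.
	for _, args := range [][]string{
		{"net-define", "mk-addons-364775.xml"},
		{"net-start", "mk-addons-364775"},
		{"net-autostart", "mk-addons-364775"},
	} {
		cmd := exec.Command("virsh", append([]string{"--connect", "qemu:///system"}, args...)...)
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("virsh %v failed: %v\n%s\n", args, err, out)
			return
		}
	}
	fmt.Println("private network mk-addons-364775 defined and started")
}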
	I0927 00:15:45.091853   22923 main.go:141] libmachine: (addons-364775) Setting up store path in /home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775 ...
	I0927 00:15:45.091879   22923 main.go:141] libmachine: (addons-364775) DBG | I0927 00:15:45.091772   22945 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19711-14935/.minikube
	I0927 00:15:45.091922   22923 main.go:141] libmachine: (addons-364775) Building disk image from file:///home/jenkins/minikube-integration/19711-14935/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0927 00:15:45.091959   22923 main.go:141] libmachine: (addons-364775) Downloading /home/jenkins/minikube-integration/19711-14935/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19711-14935/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0927 00:15:45.348792   22923 main.go:141] libmachine: (addons-364775) DBG | I0927 00:15:45.348685   22945 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa...
	I0927 00:15:45.574205   22923 main.go:141] libmachine: (addons-364775) DBG | I0927 00:15:45.574081   22945 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/addons-364775.rawdisk...
	I0927 00:15:45.574239   22923 main.go:141] libmachine: (addons-364775) DBG | Writing magic tar header
	I0927 00:15:45.574255   22923 main.go:141] libmachine: (addons-364775) DBG | Writing SSH key tar header
	I0927 00:15:45.574273   22923 main.go:141] libmachine: (addons-364775) DBG | I0927 00:15:45.574195   22945 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775 ...
	I0927 00:15:45.574290   22923 main.go:141] libmachine: (addons-364775) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775
	I0927 00:15:45.574318   22923 main.go:141] libmachine: (addons-364775) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935/.minikube/machines
	I0927 00:15:45.574327   22923 main.go:141] libmachine: (addons-364775) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935/.minikube
	I0927 00:15:45.574338   22923 main.go:141] libmachine: (addons-364775) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935
	I0927 00:15:45.574351   22923 main.go:141] libmachine: (addons-364775) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0927 00:15:45.574364   22923 main.go:141] libmachine: (addons-364775) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775 (perms=drwx------)
	I0927 00:15:45.574372   22923 main.go:141] libmachine: (addons-364775) DBG | Checking permissions on dir: /home/jenkins
	I0927 00:15:45.574384   22923 main.go:141] libmachine: (addons-364775) DBG | Checking permissions on dir: /home
	I0927 00:15:45.574390   22923 main.go:141] libmachine: (addons-364775) DBG | Skipping /home - not owner
	I0927 00:15:45.574400   22923 main.go:141] libmachine: (addons-364775) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935/.minikube/machines (perms=drwxr-xr-x)
	I0927 00:15:45.574428   22923 main.go:141] libmachine: (addons-364775) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935/.minikube (perms=drwxr-xr-x)
	I0927 00:15:45.574447   22923 main.go:141] libmachine: (addons-364775) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935 (perms=drwxrwxr-x)
	I0927 00:15:45.574477   22923 main.go:141] libmachine: (addons-364775) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0927 00:15:45.574496   22923 main.go:141] libmachine: (addons-364775) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0927 00:15:45.574506   22923 main.go:141] libmachine: (addons-364775) Creating domain...
	I0927 00:15:45.575497   22923 main.go:141] libmachine: (addons-364775) define libvirt domain using xml: 
	I0927 00:15:45.575515   22923 main.go:141] libmachine: (addons-364775) <domain type='kvm'>
	I0927 00:15:45.575525   22923 main.go:141] libmachine: (addons-364775)   <name>addons-364775</name>
	I0927 00:15:45.575532   22923 main.go:141] libmachine: (addons-364775)   <memory unit='MiB'>4000</memory>
	I0927 00:15:45.575541   22923 main.go:141] libmachine: (addons-364775)   <vcpu>2</vcpu>
	I0927 00:15:45.575545   22923 main.go:141] libmachine: (addons-364775)   <features>
	I0927 00:15:45.575552   22923 main.go:141] libmachine: (addons-364775)     <acpi/>
	I0927 00:15:45.575556   22923 main.go:141] libmachine: (addons-364775)     <apic/>
	I0927 00:15:45.575560   22923 main.go:141] libmachine: (addons-364775)     <pae/>
	I0927 00:15:45.575566   22923 main.go:141] libmachine: (addons-364775)     
	I0927 00:15:45.575571   22923 main.go:141] libmachine: (addons-364775)   </features>
	I0927 00:15:45.575576   22923 main.go:141] libmachine: (addons-364775)   <cpu mode='host-passthrough'>
	I0927 00:15:45.575582   22923 main.go:141] libmachine: (addons-364775)   
	I0927 00:15:45.575591   22923 main.go:141] libmachine: (addons-364775)   </cpu>
	I0927 00:15:45.575601   22923 main.go:141] libmachine: (addons-364775)   <os>
	I0927 00:15:45.575614   22923 main.go:141] libmachine: (addons-364775)     <type>hvm</type>
	I0927 00:15:45.575634   22923 main.go:141] libmachine: (addons-364775)     <boot dev='cdrom'/>
	I0927 00:15:45.575652   22923 main.go:141] libmachine: (addons-364775)     <boot dev='hd'/>
	I0927 00:15:45.575681   22923 main.go:141] libmachine: (addons-364775)     <bootmenu enable='no'/>
	I0927 00:15:45.575702   22923 main.go:141] libmachine: (addons-364775)   </os>
	I0927 00:15:45.575714   22923 main.go:141] libmachine: (addons-364775)   <devices>
	I0927 00:15:45.575723   22923 main.go:141] libmachine: (addons-364775)     <disk type='file' device='cdrom'>
	I0927 00:15:45.575750   22923 main.go:141] libmachine: (addons-364775)       <source file='/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/boot2docker.iso'/>
	I0927 00:15:45.575762   22923 main.go:141] libmachine: (addons-364775)       <target dev='hdc' bus='scsi'/>
	I0927 00:15:45.575772   22923 main.go:141] libmachine: (addons-364775)       <readonly/>
	I0927 00:15:45.575786   22923 main.go:141] libmachine: (addons-364775)     </disk>
	I0927 00:15:45.575799   22923 main.go:141] libmachine: (addons-364775)     <disk type='file' device='disk'>
	I0927 00:15:45.575811   22923 main.go:141] libmachine: (addons-364775)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0927 00:15:45.575825   22923 main.go:141] libmachine: (addons-364775)       <source file='/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/addons-364775.rawdisk'/>
	I0927 00:15:45.575836   22923 main.go:141] libmachine: (addons-364775)       <target dev='hda' bus='virtio'/>
	I0927 00:15:45.575845   22923 main.go:141] libmachine: (addons-364775)     </disk>
	I0927 00:15:45.575855   22923 main.go:141] libmachine: (addons-364775)     <interface type='network'>
	I0927 00:15:45.575866   22923 main.go:141] libmachine: (addons-364775)       <source network='mk-addons-364775'/>
	I0927 00:15:45.575877   22923 main.go:141] libmachine: (addons-364775)       <model type='virtio'/>
	I0927 00:15:45.575888   22923 main.go:141] libmachine: (addons-364775)     </interface>
	I0927 00:15:45.575896   22923 main.go:141] libmachine: (addons-364775)     <interface type='network'>
	I0927 00:15:45.575909   22923 main.go:141] libmachine: (addons-364775)       <source network='default'/>
	I0927 00:15:45.575924   22923 main.go:141] libmachine: (addons-364775)       <model type='virtio'/>
	I0927 00:15:45.575936   22923 main.go:141] libmachine: (addons-364775)     </interface>
	I0927 00:15:45.575946   22923 main.go:141] libmachine: (addons-364775)     <serial type='pty'>
	I0927 00:15:45.575957   22923 main.go:141] libmachine: (addons-364775)       <target port='0'/>
	I0927 00:15:45.575966   22923 main.go:141] libmachine: (addons-364775)     </serial>
	I0927 00:15:45.575977   22923 main.go:141] libmachine: (addons-364775)     <console type='pty'>
	I0927 00:15:45.575996   22923 main.go:141] libmachine: (addons-364775)       <target type='serial' port='0'/>
	I0927 00:15:45.576007   22923 main.go:141] libmachine: (addons-364775)     </console>
	I0927 00:15:45.576016   22923 main.go:141] libmachine: (addons-364775)     <rng model='virtio'>
	I0927 00:15:45.576028   22923 main.go:141] libmachine: (addons-364775)       <backend model='random'>/dev/random</backend>
	I0927 00:15:45.576035   22923 main.go:141] libmachine: (addons-364775)     </rng>
	I0927 00:15:45.576045   22923 main.go:141] libmachine: (addons-364775)     
	I0927 00:15:45.576056   22923 main.go:141] libmachine: (addons-364775)     
	I0927 00:15:45.576064   22923 main.go:141] libmachine: (addons-364775)   </devices>
	I0927 00:15:45.576075   22923 main.go:141] libmachine: (addons-364775) </domain>
	I0927 00:15:45.576084   22923 main.go:141] libmachine: (addons-364775) 
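The domain XML above is rendered from the machine config seen earlier (name, 4000 MiB of memory, 2 vCPUs, boot2docker ISO, raw disk, two virtio NICs). Below is a minimal sketch of templating such a definition with text/template; the struct fields are illustrative placeholders, not the driver's actual types:

package main

import (
	"os"
	"text/template"
)

// Trimmed-down version of the domain definition printed in the log.
const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
</domain>
`

type machineConfig struct {
	Name      string
	MemoryMiB int
	CPUs      int
}

func main() {
	t := template.Must(template.New("domain").Parse(domainTmpl))
	// Values taken from the "Creating kvm2 VM (CPUs=2, Memory=4000MB, ...)" line above.
	_ = t.Execute(os.Stdout, machineConfig{Name: "addons-364775", MemoryMiB: 4000, CPUs: 2})
}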
	I0927 00:15:45.581822   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:be:33:ab in network default
	I0927 00:15:45.582377   22923 main.go:141] libmachine: (addons-364775) Ensuring networks are active...
	I0927 00:15:45.582391   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:15:45.583142   22923 main.go:141] libmachine: (addons-364775) Ensuring network default is active
	I0927 00:15:45.583582   22923 main.go:141] libmachine: (addons-364775) Ensuring network mk-addons-364775 is active
	I0927 00:15:45.584264   22923 main.go:141] libmachine: (addons-364775) Getting domain xml...
	I0927 00:15:45.585015   22923 main.go:141] libmachine: (addons-364775) Creating domain...
	I0927 00:15:46.949358   22923 main.go:141] libmachine: (addons-364775) Waiting to get IP...
	I0927 00:15:46.950076   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:15:46.950580   22923 main.go:141] libmachine: (addons-364775) DBG | unable to find current IP address of domain addons-364775 in network mk-addons-364775
	I0927 00:15:46.950607   22923 main.go:141] libmachine: (addons-364775) DBG | I0927 00:15:46.950544   22945 retry.go:31] will retry after 202.642864ms: waiting for machine to come up
	I0927 00:15:47.155069   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:15:47.155563   22923 main.go:141] libmachine: (addons-364775) DBG | unable to find current IP address of domain addons-364775 in network mk-addons-364775
	I0927 00:15:47.155584   22923 main.go:141] libmachine: (addons-364775) DBG | I0927 00:15:47.155427   22945 retry.go:31] will retry after 370.186358ms: waiting for machine to come up
	I0927 00:15:47.526779   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:15:47.527165   22923 main.go:141] libmachine: (addons-364775) DBG | unable to find current IP address of domain addons-364775 in network mk-addons-364775
	I0927 00:15:47.527193   22923 main.go:141] libmachine: (addons-364775) DBG | I0927 00:15:47.527118   22945 retry.go:31] will retry after 435.004567ms: waiting for machine to come up
	I0927 00:15:47.963669   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:15:47.964030   22923 main.go:141] libmachine: (addons-364775) DBG | unable to find current IP address of domain addons-364775 in network mk-addons-364775
	I0927 00:15:47.964059   22923 main.go:141] libmachine: (addons-364775) DBG | I0927 00:15:47.963977   22945 retry.go:31] will retry after 546.011839ms: waiting for machine to come up
	I0927 00:15:48.511601   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:15:48.512026   22923 main.go:141] libmachine: (addons-364775) DBG | unable to find current IP address of domain addons-364775 in network mk-addons-364775
	I0927 00:15:48.512071   22923 main.go:141] libmachine: (addons-364775) DBG | I0927 00:15:48.511990   22945 retry.go:31] will retry after 469.054965ms: waiting for machine to come up
	I0927 00:15:48.982621   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:15:48.982989   22923 main.go:141] libmachine: (addons-364775) DBG | unable to find current IP address of domain addons-364775 in network mk-addons-364775
	I0927 00:15:48.983018   22923 main.go:141] libmachine: (addons-364775) DBG | I0927 00:15:48.982935   22945 retry.go:31] will retry after 651.072969ms: waiting for machine to come up
	I0927 00:15:49.635407   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:15:49.635833   22923 main.go:141] libmachine: (addons-364775) DBG | unable to find current IP address of domain addons-364775 in network mk-addons-364775
	I0927 00:15:49.635868   22923 main.go:141] libmachine: (addons-364775) DBG | I0927 00:15:49.635780   22945 retry.go:31] will retry after 787.572834ms: waiting for machine to come up
	I0927 00:15:50.425318   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:15:50.425646   22923 main.go:141] libmachine: (addons-364775) DBG | unable to find current IP address of domain addons-364775 in network mk-addons-364775
	I0927 00:15:50.425674   22923 main.go:141] libmachine: (addons-364775) DBG | I0927 00:15:50.425607   22945 retry.go:31] will retry after 1.14927096s: waiting for machine to come up
	I0927 00:15:51.576285   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:15:51.576584   22923 main.go:141] libmachine: (addons-364775) DBG | unable to find current IP address of domain addons-364775 in network mk-addons-364775
	I0927 00:15:51.576610   22923 main.go:141] libmachine: (addons-364775) DBG | I0927 00:15:51.576552   22945 retry.go:31] will retry after 1.476584274s: waiting for machine to come up
	I0927 00:15:53.055137   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:15:53.055575   22923 main.go:141] libmachine: (addons-364775) DBG | unable to find current IP address of domain addons-364775 in network mk-addons-364775
	I0927 00:15:53.055599   22923 main.go:141] libmachine: (addons-364775) DBG | I0927 00:15:53.055538   22945 retry.go:31] will retry after 1.729538445s: waiting for machine to come up
	I0927 00:15:54.786058   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:15:54.786491   22923 main.go:141] libmachine: (addons-364775) DBG | unable to find current IP address of domain addons-364775 in network mk-addons-364775
	I0927 00:15:54.786519   22923 main.go:141] libmachine: (addons-364775) DBG | I0927 00:15:54.786450   22945 retry.go:31] will retry after 2.631307121s: waiting for machine to come up
	I0927 00:15:57.421088   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:15:57.421427   22923 main.go:141] libmachine: (addons-364775) DBG | unable to find current IP address of domain addons-364775 in network mk-addons-364775
	I0927 00:15:57.421454   22923 main.go:141] libmachine: (addons-364775) DBG | I0927 00:15:57.421379   22945 retry.go:31] will retry after 2.652911492s: waiting for machine to come up
	I0927 00:16:00.075506   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:00.075951   22923 main.go:141] libmachine: (addons-364775) DBG | unable to find current IP address of domain addons-364775 in network mk-addons-364775
	I0927 00:16:00.075981   22923 main.go:141] libmachine: (addons-364775) DBG | I0927 00:16:00.075893   22945 retry.go:31] will retry after 3.30922874s: waiting for machine to come up
	I0927 00:16:03.388283   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:03.388607   22923 main.go:141] libmachine: (addons-364775) DBG | unable to find current IP address of domain addons-364775 in network mk-addons-364775
	I0927 00:16:03.388628   22923 main.go:141] libmachine: (addons-364775) DBG | I0927 00:16:03.388576   22945 retry.go:31] will retry after 3.510064019s: waiting for machine to come up
	I0927 00:16:06.901968   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:06.902384   22923 main.go:141] libmachine: (addons-364775) Found IP for machine: 192.168.39.169
	I0927 00:16:06.902410   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has current primary IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:06.902418   22923 main.go:141] libmachine: (addons-364775) Reserving static IP address...
	I0927 00:16:06.902791   22923 main.go:141] libmachine: (addons-364775) DBG | unable to find host DHCP lease matching {name: "addons-364775", mac: "52:54:00:e5:bb:bf", ip: "192.168.39.169"} in network mk-addons-364775
	I0927 00:16:06.970142   22923 main.go:141] libmachine: (addons-364775) Reserved static IP address: 192.168.39.169
	I0927 00:16:06.970170   22923 main.go:141] libmachine: (addons-364775) Waiting for SSH to be available...
	I0927 00:16:06.970179   22923 main.go:141] libmachine: (addons-364775) DBG | Getting to WaitForSSH function...
	I0927 00:16:06.972291   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:06.972697   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:06.972723   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:06.972887   22923 main.go:141] libmachine: (addons-364775) DBG | Using SSH client type: external
	I0927 00:16:06.972906   22923 main.go:141] libmachine: (addons-364775) DBG | Using SSH private key: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa (-rw-------)
	I0927 00:16:06.972933   22923 main.go:141] libmachine: (addons-364775) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.169 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 00:16:06.972951   22923 main.go:141] libmachine: (addons-364775) DBG | About to run SSH command:
	I0927 00:16:06.972962   22923 main.go:141] libmachine: (addons-364775) DBG | exit 0
	I0927 00:16:07.103385   22923 main.go:141] libmachine: (addons-364775) DBG | SSH cmd err, output: <nil>: 
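Both the wait-for-IP loop and the SSH probe above poll with growing delays ("will retry after ...") until the machine answers. Here is a minimal sketch of that retry pattern, assuming a simple jittered backoff; the real retry.go policy may differ in its exact delays and caps:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryUntil keeps calling cond with a growing, jittered delay until it succeeds
// or the overall timeout is exceeded.
func retryUntil(timeout time.Duration, cond func() error) error {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for {
		err := cond()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: %w", err)
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		delay = delay * 3 / 2 // grow the base delay, roughly like the intervals in the log
	}
}

func main() {
	attempts := 0
	_ = retryUntil(30*time.Second, func() error {
		attempts++
		if attempts < 5 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	})
	fmt.Println("machine came up after", attempts, "attempts")
}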
	I0927 00:16:07.103681   22923 main.go:141] libmachine: (addons-364775) KVM machine creation complete!
	I0927 00:16:07.103911   22923 main.go:141] libmachine: (addons-364775) Calling .GetConfigRaw
	I0927 00:16:07.104438   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:07.104611   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:07.104753   22923 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0927 00:16:07.104765   22923 main.go:141] libmachine: (addons-364775) Calling .GetState
	I0927 00:16:07.105844   22923 main.go:141] libmachine: Detecting operating system of created instance...
	I0927 00:16:07.105857   22923 main.go:141] libmachine: Waiting for SSH to be available...
	I0927 00:16:07.105862   22923 main.go:141] libmachine: Getting to WaitForSSH function...
	I0927 00:16:07.105867   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:07.107901   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:07.108215   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:07.108246   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:07.108338   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:07.108493   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:07.108634   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:07.108761   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:07.108901   22923 main.go:141] libmachine: Using SSH client type: native
	I0927 00:16:07.109070   22923 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I0927 00:16:07.109080   22923 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0927 00:16:07.218435   22923 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 00:16:07.218469   22923 main.go:141] libmachine: Detecting the provisioner...
	I0927 00:16:07.218478   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:07.221204   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:07.221494   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:07.221517   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:07.221683   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:07.221860   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:07.222017   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:07.222134   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:07.222276   22923 main.go:141] libmachine: Using SSH client type: native
	I0927 00:16:07.222428   22923 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I0927 00:16:07.222439   22923 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0927 00:16:07.332074   22923 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0927 00:16:07.332151   22923 main.go:141] libmachine: found compatible host: buildroot
	I0927 00:16:07.332158   22923 main.go:141] libmachine: Provisioning with buildroot...
	I0927 00:16:07.332165   22923 main.go:141] libmachine: (addons-364775) Calling .GetMachineName
	I0927 00:16:07.332377   22923 buildroot.go:166] provisioning hostname "addons-364775"
	I0927 00:16:07.332406   22923 main.go:141] libmachine: (addons-364775) Calling .GetMachineName
	I0927 00:16:07.332594   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:07.334888   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:07.335193   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:07.335220   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:07.335325   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:07.335483   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:07.335621   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:07.335776   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:07.335956   22923 main.go:141] libmachine: Using SSH client type: native
	I0927 00:16:07.336121   22923 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I0927 00:16:07.336143   22923 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-364775 && echo "addons-364775" | sudo tee /etc/hostname
	I0927 00:16:07.457193   22923 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-364775
	
	I0927 00:16:07.457219   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:07.459657   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:07.459964   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:07.459992   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:07.460170   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:07.460303   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:07.460415   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:07.460529   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:07.460689   22923 main.go:141] libmachine: Using SSH client type: native
	I0927 00:16:07.460874   22923 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I0927 00:16:07.460892   22923 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-364775' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-364775/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-364775' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 00:16:07.576205   22923 main.go:141] libmachine: SSH cmd err, output: <nil>: 
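The hostname step runs its shell snippet on the guest over SSH with the freshly generated id_rsa key. A hedged sketch of the same remote execution using golang.org/x/crypto/ssh, with the address and key path taken from this log (libmachine's own SSH client handles this internally and differs in detail):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no above
	}
	client, err := ssh.Dial("tcp", "192.168.39.169:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	// Same kind of command the provisioner runs when setting the hostname.
	out, err := session.CombinedOutput(`sudo hostname addons-364775 && echo "addons-364775" | sudo tee /etc/hostname`)
	fmt.Printf("output: %q, err: %v\n", out, err)
}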
	I0927 00:16:07.576252   22923 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19711-14935/.minikube CaCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19711-14935/.minikube}
	I0927 00:16:07.576312   22923 buildroot.go:174] setting up certificates
	I0927 00:16:07.576329   22923 provision.go:84] configureAuth start
	I0927 00:16:07.576347   22923 main.go:141] libmachine: (addons-364775) Calling .GetMachineName
	I0927 00:16:07.576623   22923 main.go:141] libmachine: (addons-364775) Calling .GetIP
	I0927 00:16:07.579617   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:07.579974   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:07.580000   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:07.580131   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:07.582401   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:07.582745   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:07.582770   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:07.582903   22923 provision.go:143] copyHostCerts
	I0927 00:16:07.582979   22923 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem (1078 bytes)
	I0927 00:16:07.583120   22923 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem (1123 bytes)
	I0927 00:16:07.583203   22923 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem (1675 bytes)
	I0927 00:16:07.583299   22923 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem org=jenkins.addons-364775 san=[127.0.0.1 192.168.39.169 addons-364775 localhost minikube]
	I0927 00:16:07.704457   22923 provision.go:177] copyRemoteCerts
	I0927 00:16:07.704522   22923 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 00:16:07.704551   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:07.707097   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:07.707455   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:07.707485   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:07.707628   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:07.707808   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:07.707921   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:07.708037   22923 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa Username:docker}
	I0927 00:16:07.793441   22923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0927 00:16:07.816635   22923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0927 00:16:07.839412   22923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0927 00:16:07.861848   22923 provision.go:87] duration metric: took 285.503545ms to configureAuth
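configureAuth above copies the host certs into the profile and mints a server certificate signed by the profile CA with the SANs listed in the log (127.0.0.1, 192.168.39.169, addons-364775, localhost, minikube). A rough sketch of that issuance with crypto/x509, assuming the CA key is an RSA PKCS#1 PEM as created earlier; error handling is elided for brevity:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Assumption: ca.pem / ca-key.pem are the files written at the "Creating CA" step.
	caCertPEM, _ := os.ReadFile("ca.pem")
	caKeyPEM, _ := os.ReadFile("ca-key.pem")
	caBlock, _ := pem.Decode(caCertPEM)
	caCert, _ := x509.ParseCertificate(caBlock.Bytes)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)

	serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-364775"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration:26280h0m0s from the config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"addons-364775", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.169")},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}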
	I0927 00:16:07.861873   22923 buildroot.go:189] setting minikube options for container-runtime
	I0927 00:16:07.862050   22923 config.go:182] Loaded profile config "addons-364775": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 00:16:07.862134   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:07.864754   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:07.865082   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:07.865107   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:07.865293   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:07.865475   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:07.865626   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:07.865739   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:07.865871   22923 main.go:141] libmachine: Using SSH client type: native
	I0927 00:16:07.866074   22923 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I0927 00:16:07.866090   22923 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 00:16:08.093802   22923 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 00:16:08.093837   22923 main.go:141] libmachine: Checking connection to Docker...
	I0927 00:16:08.093848   22923 main.go:141] libmachine: (addons-364775) Calling .GetURL
	I0927 00:16:08.095002   22923 main.go:141] libmachine: (addons-364775) DBG | Using libvirt version 6000000
	I0927 00:16:08.097051   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:08.097385   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:08.097422   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:08.097515   22923 main.go:141] libmachine: Docker is up and running!
	I0927 00:16:08.097527   22923 main.go:141] libmachine: Reticulating splines...
	I0927 00:16:08.097535   22923 client.go:171] duration metric: took 23.479752106s to LocalClient.Create
	I0927 00:16:08.097566   22923 start.go:167] duration metric: took 23.479821174s to libmachine.API.Create "addons-364775"
	I0927 00:16:08.097589   22923 start.go:293] postStartSetup for "addons-364775" (driver="kvm2")
	I0927 00:16:08.097606   22923 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 00:16:08.097627   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:08.097833   22923 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 00:16:08.097854   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:08.099703   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:08.099981   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:08.100006   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:08.100126   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:08.100298   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:08.100435   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:08.100561   22923 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa Username:docker}
	I0927 00:16:08.186017   22923 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 00:16:08.190011   22923 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 00:16:08.190031   22923 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/addons for local assets ...
	I0927 00:16:08.190101   22923 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/files for local assets ...
	I0927 00:16:08.190129   22923 start.go:296] duration metric: took 92.527439ms for postStartSetup
	I0927 00:16:08.190155   22923 main.go:141] libmachine: (addons-364775) Calling .GetConfigRaw
	I0927 00:16:08.190759   22923 main.go:141] libmachine: (addons-364775) Calling .GetIP
	I0927 00:16:08.193058   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:08.193355   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:08.193381   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:08.193557   22923 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/config.json ...
	I0927 00:16:08.193708   22923 start.go:128] duration metric: took 23.593238722s to createHost
	I0927 00:16:08.193728   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:08.195773   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:08.196120   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:08.196166   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:08.196300   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:08.196468   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:08.196582   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:08.196721   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:08.196856   22923 main.go:141] libmachine: Using SSH client type: native
	I0927 00:16:08.197036   22923 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I0927 00:16:08.197048   22923 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 00:16:08.303996   22923 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727396168.279190965
	
	I0927 00:16:08.304020   22923 fix.go:216] guest clock: 1727396168.279190965
	I0927 00:16:08.304027   22923 fix.go:229] Guest: 2024-09-27 00:16:08.279190965 +0000 UTC Remote: 2024-09-27 00:16:08.193719171 +0000 UTC m=+23.688310296 (delta=85.471794ms)
	I0927 00:16:08.304044   22923 fix.go:200] guest clock delta is within tolerance: 85.471794ms
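The guest-clock check parses the VM's `date +%s.%N` output and compares it with the host clock to get the delta reported above. A small sketch of that comparison using the value from this run:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	out := "1727396168.279190965" // guest `date +%s.%N` as captured in the log
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec)
	delta := time.Since(guest) // the run above reported ~85ms, well within tolerance
	fmt.Printf("guest clock delta: %v\n", delta)
}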
	I0927 00:16:08.304048   22923 start.go:83] releasing machines lock for "addons-364775", held for 23.703640756s
	I0927 00:16:08.304069   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:08.304317   22923 main.go:141] libmachine: (addons-364775) Calling .GetIP
	I0927 00:16:08.306988   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:08.307381   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:08.307407   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:08.307561   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:08.307997   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:08.308150   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:08.308237   22923 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 00:16:08.308288   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:08.308351   22923 ssh_runner.go:195] Run: cat /version.json
	I0927 00:16:08.308378   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:08.310668   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:08.310969   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:08.310997   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:08.311014   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:08.311153   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:08.311324   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:08.311389   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:08.311408   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:08.311461   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:08.311590   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:08.311614   22923 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa Username:docker}
	I0927 00:16:08.311722   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:08.311824   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:08.311953   22923 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa Username:docker}
	I0927 00:16:08.388567   22923 ssh_runner.go:195] Run: systemctl --version
	I0927 00:16:08.413004   22923 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 00:16:08.574576   22923 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 00:16:08.581322   22923 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 00:16:08.581391   22923 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 00:16:08.597487   22923 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0927 00:16:08.597509   22923 start.go:495] detecting cgroup driver to use...
	I0927 00:16:08.597566   22923 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 00:16:08.612247   22923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 00:16:08.625077   22923 docker.go:217] disabling cri-docker service (if available) ...
	I0927 00:16:08.625130   22923 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 00:16:08.637473   22923 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 00:16:08.650051   22923 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 00:16:08.758188   22923 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 00:16:08.913236   22923 docker.go:233] disabling docker service ...
	I0927 00:16:08.913320   22923 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 00:16:08.927426   22923 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 00:16:08.940272   22923 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 00:16:09.057168   22923 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 00:16:09.169370   22923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 00:16:09.184123   22923 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 00:16:09.202228   22923 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0927 00:16:09.202290   22923 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:16:09.212677   22923 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 00:16:09.212740   22923 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:16:09.223105   22923 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:16:09.233431   22923 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:16:09.243818   22923 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 00:16:09.254480   22923 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:16:09.265026   22923 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:16:09.282615   22923 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:16:09.293542   22923 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 00:16:09.303356   22923 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0927 00:16:09.303424   22923 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0927 00:16:09.315981   22923 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
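The failed sysctl above is expected when the br_netfilter module is not loaded yet; the provisioner falls back to modprobe and then enables IPv4 forwarding. A minimal sketch of that try-then-fallback sequence (run locally here for illustration; in the log these commands go through the SSH runner):

package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %v: %s", name, args, err, out)
	}
	return nil
}

func main() {
	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		// /proc/sys/net/bridge/* only exists once br_netfilter is loaded.
		_ = run("sudo", "modprobe", "br_netfilter")
	}
	_ = run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward")
}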
	I0927 00:16:09.325606   22923 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:16:09.439247   22923 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0927 00:16:09.527367   22923 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 00:16:09.527468   22923 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 00:16:09.532165   22923 start.go:563] Will wait 60s for crictl version
	I0927 00:16:09.532216   22923 ssh_runner.go:195] Run: which crictl
	I0927 00:16:09.535820   22923 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 00:16:09.572264   22923 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0927 00:16:09.572401   22923 ssh_runner.go:195] Run: crio --version
	I0927 00:16:09.599589   22923 ssh_runner.go:195] Run: crio --version
	I0927 00:16:09.627068   22923 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0927 00:16:09.628232   22923 main.go:141] libmachine: (addons-364775) Calling .GetIP
	I0927 00:16:09.630667   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:09.630995   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:09.631023   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:09.631180   22923 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0927 00:16:09.635187   22923 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 00:16:09.647618   22923 kubeadm.go:883] updating cluster {Name:addons-364775 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-364775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.169 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 00:16:09.647751   22923 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 00:16:09.647799   22923 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 00:16:09.680511   22923 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0927 00:16:09.680588   22923 ssh_runner.go:195] Run: which lz4
	I0927 00:16:09.684511   22923 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0927 00:16:09.688651   22923 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0927 00:16:09.688692   22923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0927 00:16:10.959682   22923 crio.go:462] duration metric: took 1.275200656s to copy over tarball
	I0927 00:16:10.959746   22923 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0927 00:16:13.025278   22923 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.065510814s)
	I0927 00:16:13.025311   22923 crio.go:469] duration metric: took 2.065601709s to extract the tarball
	I0927 00:16:13.025322   22923 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0927 00:16:13.061932   22923 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 00:16:13.107912   22923 crio.go:514] all images are preloaded for cri-o runtime.
	I0927 00:16:13.107939   22923 cache_images.go:84] Images are preloaded, skipping loading
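	The preload check and extraction above can be reproduced by hand; a rough sketch using the same commands the log shows (jq is used here only for readability and is not part of the logged commands):
	# Count images known to CRI-O; an empty list is what triggers the preload path above.
	sudo crictl images --output json | jq '.images | length'
	# Unpack the preloaded image tarball into /var, exactly as the log does.
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	# Confirm the control-plane images are now present.
	sudo crictl images --output json | jq -r '.images[].repoTags[]' | grep kube-apiserver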
	I0927 00:16:13.107947   22923 kubeadm.go:934] updating node { 192.168.39.169 8443 v1.31.1 crio true true} ...
	I0927 00:16:13.108033   22923 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-364775 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.169
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-364775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
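	The kubelet flags above end up in a systemd drop-in (the log later scps a 313-byte 10-kubeadm.conf); a hedged sketch of such a drop-in, containing only the unit text shown in this log rather than the exact file minikube writes:
	# Hedged sketch of /etc/systemd/system/kubelet.service.d/10-kubeadm.conf,
	# built from the [Unit]/[Service] text printed above.
	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<-'EOF'
	[Unit]
	Wants=crio.service
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-364775 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.169
	EOF
	sudo systemctl daemon-reload && sudo systemctl start kubelet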
	I0927 00:16:13.108095   22923 ssh_runner.go:195] Run: crio config
	I0927 00:16:13.153533   22923 cni.go:84] Creating CNI manager for ""
	I0927 00:16:13.153555   22923 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 00:16:13.153566   22923 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 00:16:13.153586   22923 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.169 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-364775 NodeName:addons-364775 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.169"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.169 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0927 00:16:13.153691   22923 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.169
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-364775"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.169
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.169"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0927 00:16:13.153746   22923 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 00:16:13.163635   22923 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 00:16:13.163702   22923 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0927 00:16:13.172959   22923 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0927 00:16:13.190510   22923 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 00:16:13.207214   22923 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0927 00:16:13.224712   22923 ssh_runner.go:195] Run: grep 192.168.39.169	control-plane.minikube.internal$ /etc/hosts
	I0927 00:16:13.228436   22923 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.169	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 00:16:13.241465   22923 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:16:13.367179   22923 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 00:16:13.383473   22923 certs.go:68] Setting up /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775 for IP: 192.168.39.169
	I0927 00:16:13.383499   22923 certs.go:194] generating shared ca certs ...
	I0927 00:16:13.383515   22923 certs.go:226] acquiring lock for ca certs: {Name:mkdfc5b7e93f77f5ae72cc653545624244421aa5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:16:13.383652   22923 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key
	I0927 00:16:13.575678   22923 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt ...
	I0927 00:16:13.575704   22923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt: {Name:mk3ad08ac2703aff467792f34abbf756e11c2872 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:16:13.575901   22923 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key ...
	I0927 00:16:13.575916   22923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key: {Name:mkab43d698e5658555844624b3079e901a8ded86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:16:13.576010   22923 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key
	I0927 00:16:13.751373   22923 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.crt ...
	I0927 00:16:13.751404   22923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.crt: {Name:mk8e225d38c1311b0e8a7348aa1fbee6e6fcbd70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:16:13.751579   22923 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key ...
	I0927 00:16:13.751594   22923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key: {Name:mk81ac2481482dece22299e0ff67c97675fb9f81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:16:13.751685   22923 certs.go:256] generating profile certs ...
	I0927 00:16:13.751745   22923 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/client.key
	I0927 00:16:13.751759   22923 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/client.crt with IP's: []
	I0927 00:16:13.996696   22923 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/client.crt ...
	I0927 00:16:13.996728   22923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/client.crt: {Name:mk4647826e81f09b562e4b6468be9da247fcab9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:16:13.996908   22923 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/client.key ...
	I0927 00:16:13.996922   22923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/client.key: {Name:mkdba807b5f103e151ba37e1747e2a749b1980c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:16:13.997015   22923 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/apiserver.key.9c90c6ee
	I0927 00:16:13.997035   22923 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/apiserver.crt.9c90c6ee with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.169]
	I0927 00:16:14.144098   22923 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/apiserver.crt.9c90c6ee ...
	I0927 00:16:14.144127   22923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/apiserver.crt.9c90c6ee: {Name:mkf743df3d4ae64c9bb8f8a6ebe4e814cf609961 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:16:14.144305   22923 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/apiserver.key.9c90c6ee ...
	I0927 00:16:14.144321   22923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/apiserver.key.9c90c6ee: {Name:mk43e7a262458556d97385e524b4828b4b015bf7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:16:14.144397   22923 certs.go:381] copying /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/apiserver.crt.9c90c6ee -> /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/apiserver.crt
	I0927 00:16:14.144467   22923 certs.go:385] copying /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/apiserver.key.9c90c6ee -> /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/apiserver.key
	I0927 00:16:14.144516   22923 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/proxy-client.key
	I0927 00:16:14.144533   22923 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/proxy-client.crt with IP's: []
	I0927 00:16:14.217209   22923 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/proxy-client.crt ...
	I0927 00:16:14.217236   22923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/proxy-client.crt: {Name:mk44b3f8e9e129ec5865925167df941ba0f63291 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:16:14.217379   22923 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/proxy-client.key ...
	I0927 00:16:14.217389   22923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/proxy-client.key: {Name:mkc2dd610a10002245981e0f1a9de7854a330937 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:16:14.217536   22923 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem (1671 bytes)
	I0927 00:16:14.217567   22923 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem (1078 bytes)
	I0927 00:16:14.217589   22923 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem (1123 bytes)
	I0927 00:16:14.217611   22923 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem (1675 bytes)
	I0927 00:16:14.218138   22923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 00:16:14.245205   22923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0927 00:16:14.273590   22923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 00:16:14.299930   22923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0927 00:16:14.322526   22923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0927 00:16:14.345010   22923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0927 00:16:14.368388   22923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 00:16:14.391414   22923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0927 00:16:14.413864   22923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 00:16:14.435858   22923 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 00:16:14.451548   22923 ssh_runner.go:195] Run: openssl version
	I0927 00:16:14.457242   22923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 00:16:14.467943   22923 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:16:14.472191   22923 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:16 /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:16:14.472238   22923 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:16:14.477640   22923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
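	The two steps above install minikubeCA.pem into the node's trust store; the hash-named symlink (b5213941.0 in this log) can be derived rather than hard-coded, roughly:
	# Compute the OpenSSL subject hash for the CA and create the matching symlink
	# under /etc/ssl/certs so TLS clients on the node trust minikubeCA.
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"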
	I0927 00:16:14.488010   22923 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 00:16:14.491811   22923 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0927 00:16:14.491855   22923 kubeadm.go:392] StartCluster: {Name:addons-364775 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-364775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.169 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 00:16:14.491924   22923 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0927 00:16:14.491960   22923 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 00:16:14.524680   22923 cri.go:89] found id: ""
	I0927 00:16:14.524743   22923 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0927 00:16:14.534145   22923 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 00:16:14.545428   22923 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 00:16:14.556318   22923 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 00:16:14.556338   22923 kubeadm.go:157] found existing configuration files:
	
	I0927 00:16:14.556375   22923 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 00:16:14.566224   22923 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 00:16:14.566269   22923 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 00:16:14.576303   22923 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 00:16:14.585129   22923 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 00:16:14.585171   22923 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 00:16:14.594747   22923 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 00:16:14.603457   22923 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 00:16:14.603496   22923 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 00:16:14.612663   22923 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 00:16:14.621624   22923 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 00:16:14.621668   22923 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 00:16:14.631182   22923 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0927 00:16:14.689680   22923 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0927 00:16:14.689907   22923 kubeadm.go:310] [preflight] Running pre-flight checks
	I0927 00:16:14.787642   22923 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0927 00:16:14.787844   22923 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0927 00:16:14.787981   22923 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0927 00:16:14.796210   22923 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0927 00:16:14.933571   22923 out.go:235]   - Generating certificates and keys ...
	I0927 00:16:14.933713   22923 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0927 00:16:14.933803   22923 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0927 00:16:14.933906   22923 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0927 00:16:15.129675   22923 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0927 00:16:15.193399   22923 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0927 00:16:15.313134   22923 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0927 00:16:15.654187   22923 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0927 00:16:15.654296   22923 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-364775 localhost] and IPs [192.168.39.169 127.0.0.1 ::1]
	I0927 00:16:15.765696   22923 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0927 00:16:15.765874   22923 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-364775 localhost] and IPs [192.168.39.169 127.0.0.1 ::1]
	I0927 00:16:16.013868   22923 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0927 00:16:16.165681   22923 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0927 00:16:16.447703   22923 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0927 00:16:16.447794   22923 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0927 00:16:16.592680   22923 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0927 00:16:16.720016   22923 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0927 00:16:16.929585   22923 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0927 00:16:17.262835   22923 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0927 00:16:17.402806   22923 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0927 00:16:17.403246   22923 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0927 00:16:17.407265   22923 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0927 00:16:17.409098   22923 out.go:235]   - Booting up control plane ...
	I0927 00:16:17.409215   22923 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0927 00:16:17.409290   22923 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0927 00:16:17.410016   22923 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0927 00:16:17.425105   22923 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0927 00:16:17.433605   22923 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0927 00:16:17.433674   22923 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0927 00:16:17.565381   22923 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0927 00:16:17.565569   22923 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0927 00:16:19.065179   22923 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501169114s
	I0927 00:16:19.065301   22923 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0927 00:16:24.064418   22923 kubeadm.go:310] [api-check] The API server is healthy after 5.001577374s
	I0927 00:16:24.076690   22923 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0927 00:16:24.099966   22923 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0927 00:16:24.127484   22923 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0927 00:16:24.127678   22923 kubeadm.go:310] [mark-control-plane] Marking the node addons-364775 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0927 00:16:24.140308   22923 kubeadm.go:310] [bootstrap-token] Using token: pa4b34.sdki52w2nqhs0c2a
	I0927 00:16:24.141673   22923 out.go:235]   - Configuring RBAC rules ...
	I0927 00:16:24.141825   22923 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0927 00:16:24.147166   22923 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0927 00:16:24.155898   22923 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0927 00:16:24.161743   22923 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0927 00:16:24.165824   22923 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0927 00:16:24.168837   22923 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0927 00:16:24.472788   22923 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0927 00:16:24.898245   22923 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0927 00:16:25.470513   22923 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0927 00:16:25.471447   22923 kubeadm.go:310] 
	I0927 00:16:25.471556   22923 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0927 00:16:25.471575   22923 kubeadm.go:310] 
	I0927 00:16:25.471666   22923 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0927 00:16:25.471676   22923 kubeadm.go:310] 
	I0927 00:16:25.471699   22923 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0927 00:16:25.471877   22923 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0927 00:16:25.471929   22923 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0927 00:16:25.471935   22923 kubeadm.go:310] 
	I0927 00:16:25.471976   22923 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0927 00:16:25.471982   22923 kubeadm.go:310] 
	I0927 00:16:25.472038   22923 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0927 00:16:25.472051   22923 kubeadm.go:310] 
	I0927 00:16:25.472141   22923 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0927 00:16:25.472326   22923 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0927 00:16:25.472450   22923 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0927 00:16:25.472464   22923 kubeadm.go:310] 
	I0927 00:16:25.472573   22923 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0927 00:16:25.472648   22923 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0927 00:16:25.472666   22923 kubeadm.go:310] 
	I0927 00:16:25.472805   22923 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token pa4b34.sdki52w2nqhs0c2a \
	I0927 00:16:25.472942   22923 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e8f2b64245b2533f2f6907f851e4fd61c8df67374e05cb5be25be73a61920f8e \
	I0927 00:16:25.472971   22923 kubeadm.go:310] 	--control-plane 
	I0927 00:16:25.472980   22923 kubeadm.go:310] 
	I0927 00:16:25.473098   22923 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0927 00:16:25.473107   22923 kubeadm.go:310] 
	I0927 00:16:25.473226   22923 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token pa4b34.sdki52w2nqhs0c2a \
	I0927 00:16:25.473365   22923 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e8f2b64245b2533f2f6907f851e4fd61c8df67374e05cb5be25be73a61920f8e 
	I0927 00:16:25.474005   22923 kubeadm.go:310] W0927 00:16:14.668581     820 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 00:16:25.474358   22923 kubeadm.go:310] W0927 00:16:14.670545     820 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 00:16:25.474505   22923 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
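	The two API-spec warnings above refer to the v1beta3 kubeadm config written earlier; a hedged sketch of the follow-ups those warnings suggest (the output path for the migrated config is illustrative, not taken from this log):
	# Rewrite the deprecated v1beta3 config with the newer API version, using the
	# command the warning itself recommends and the kubeadm binary path from this log.
	sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config migrate \
	  --old-config /var/tmp/minikube/kubeadm.yaml \
	  --new-config /var/tmp/minikube/kubeadm-migrated.yaml
	# And, per the third warning, enable the kubelet unit so it starts on boot.
	sudo systemctl enable kubelet.service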
	I0927 00:16:25.474538   22923 cni.go:84] Creating CNI manager for ""
	I0927 00:16:25.474550   22923 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 00:16:25.476900   22923 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0927 00:16:25.477915   22923 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0927 00:16:25.488407   22923 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
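	The 496-byte 1-k8s.conflist itself is not printed in this log; a hedged sketch of a typical bridge CNI config matching the pod CIDR used above (the actual file minikube copies may differ):
	# Illustrative bridge CNI config for podSubnet 10.244.0.0/16; an assumption for
	# illustration, not the exact conflist minikube writes.
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<-'EOF'
	{
	  "cniVersion": "0.4.0",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF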
	I0927 00:16:25.508648   22923 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0927 00:16:25.508704   22923 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:16:25.508750   22923 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-364775 minikube.k8s.io/updated_at=2024_09_27T00_16_25_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625 minikube.k8s.io/name=addons-364775 minikube.k8s.io/primary=true
	I0927 00:16:25.526229   22923 ops.go:34] apiserver oom_adj: -16
	I0927 00:16:25.629503   22923 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:16:26.130228   22923 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:16:26.629915   22923 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:16:27.130024   22923 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:16:27.630537   22923 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:16:28.130314   22923 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:16:28.630463   22923 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:16:29.130429   22923 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:16:29.630477   22923 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:16:30.129687   22923 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:16:30.257341   22923 kubeadm.go:1113] duration metric: took 4.748689071s to wait for elevateKubeSystemPrivileges
	I0927 00:16:30.257376   22923 kubeadm.go:394] duration metric: took 15.765523535s to StartCluster
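	The repeated "kubectl get sa default" calls above are a readiness poll before kube-system privileges are elevated; roughly equivalent to:
	# Poll until the "default" ServiceAccount exists, using the same kubectl binary
	# and kubeconfig paths shown in this log.
	until sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done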
	I0927 00:16:30.257393   22923 settings.go:142] acquiring lock: {Name:mk5dca3ab86dd3a71947d9d84c3d32131258c6f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:16:30.257497   22923 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 00:16:30.257927   22923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/kubeconfig: {Name:mke01ed683bdb96463571316956510763878395f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:16:30.258123   22923 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0927 00:16:30.258153   22923 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.169 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 00:16:30.258207   22923 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
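	The toEnable map above is applied programmatically by the test driver; from the CLI the same addons could be toggled per profile, for example:
	# Equivalent CLI toggles for a few of the addons listed above; the profile name
	# is taken from this log.
	minikube -p addons-364775 addons enable registry
	minikube -p addons-364775 addons enable ingress
	minikube -p addons-364775 addons enable metrics-server
	minikube -p addons-364775 addons list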
	I0927 00:16:30.258332   22923 addons.go:69] Setting yakd=true in profile "addons-364775"
	I0927 00:16:30.258343   22923 addons.go:69] Setting metrics-server=true in profile "addons-364775"
	I0927 00:16:30.258356   22923 addons.go:234] Setting addon yakd=true in "addons-364775"
	I0927 00:16:30.258357   22923 addons.go:69] Setting storage-provisioner=true in profile "addons-364775"
	I0927 00:16:30.258336   22923 addons.go:69] Setting cloud-spanner=true in profile "addons-364775"
	I0927 00:16:30.258373   22923 addons.go:234] Setting addon storage-provisioner=true in "addons-364775"
	I0927 00:16:30.258378   22923 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-364775"
	I0927 00:16:30.258389   22923 host.go:66] Checking if "addons-364775" exists ...
	I0927 00:16:30.258398   22923 addons.go:69] Setting ingress=true in profile "addons-364775"
	I0927 00:16:30.258398   22923 addons.go:69] Setting default-storageclass=true in profile "addons-364775"
	I0927 00:16:30.258418   22923 addons.go:69] Setting registry=true in profile "addons-364775"
	I0927 00:16:30.258421   22923 addons.go:69] Setting ingress-dns=true in profile "addons-364775"
	I0927 00:16:30.258424   22923 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-364775"
	I0927 00:16:30.258430   22923 addons.go:234] Setting addon registry=true in "addons-364775"
	I0927 00:16:30.258431   22923 addons.go:234] Setting addon ingress-dns=true in "addons-364775"
	I0927 00:16:30.258439   22923 addons.go:69] Setting volcano=true in profile "addons-364775"
	I0927 00:16:30.258444   22923 addons.go:69] Setting inspektor-gadget=true in profile "addons-364775"
	I0927 00:16:30.258449   22923 host.go:66] Checking if "addons-364775" exists ...
	I0927 00:16:30.258453   22923 host.go:66] Checking if "addons-364775" exists ...
	I0927 00:16:30.258461   22923 addons.go:234] Setting addon inspektor-gadget=true in "addons-364775"
	I0927 00:16:30.258460   22923 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-364775"
	I0927 00:16:30.258465   22923 host.go:66] Checking if "addons-364775" exists ...
	I0927 00:16:30.258475   22923 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-364775"
	I0927 00:16:30.258499   22923 host.go:66] Checking if "addons-364775" exists ...
	I0927 00:16:30.258390   22923 addons.go:234] Setting addon cloud-spanner=true in "addons-364775"
	I0927 00:16:30.258875   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.258880   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.258887   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.258897   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.258901   22923 addons.go:69] Setting volumesnapshots=true in profile "addons-364775"
	I0927 00:16:30.258904   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.258890   22923 host.go:66] Checking if "addons-364775" exists ...
	I0927 00:16:30.258911   22923 addons.go:234] Setting addon volumesnapshots=true in "addons-364775"
	I0927 00:16:30.258428   22923 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-364775"
	I0927 00:16:30.258921   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.258928   22923 host.go:66] Checking if "addons-364775" exists ...
	I0927 00:16:30.258400   22923 host.go:66] Checking if "addons-364775" exists ...
	I0927 00:16:30.259165   22923 config.go:182] Loaded profile config "addons-364775": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 00:16:30.259243   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.259268   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.258361   22923 addons.go:234] Setting addon metrics-server=true in "addons-364775"
	I0927 00:16:30.259320   22923 host.go:66] Checking if "addons-364775" exists ...
	I0927 00:16:30.258902   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.259250   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.259345   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.259362   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.258453   22923 addons.go:234] Setting addon volcano=true in "addons-364775"
	I0927 00:16:30.258410   22923 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-364775"
	I0927 00:16:30.258413   22923 addons.go:234] Setting addon ingress=true in "addons-364775"
	I0927 00:16:30.259414   22923 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-364775"
	I0927 00:16:30.259433   22923 host.go:66] Checking if "addons-364775" exists ...
	I0927 00:16:30.258910   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.259599   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.259320   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.259681   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.259711   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.259756   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.259785   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.258365   22923 addons.go:69] Setting gcp-auth=true in profile "addons-364775"
	I0927 00:16:30.259994   22923 mustload.go:65] Loading cluster: addons-364775
	I0927 00:16:30.258890   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.260064   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.259324   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.260148   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.260171   22923 config.go:182] Loaded profile config "addons-364775": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 00:16:30.260185   22923 host.go:66] Checking if "addons-364775" exists ...
	I0927 00:16:30.259686   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.260638   22923 host.go:66] Checking if "addons-364775" exists ...
	I0927 00:16:30.261042   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.261076   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.261496   22923 out.go:177] * Verifying Kubernetes components...
	I0927 00:16:30.263120   22923 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:16:30.279959   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33325
	I0927 00:16:30.280209   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33607
	I0927 00:16:30.280226   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37361
	I0927 00:16:30.280238   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42457
	I0927 00:16:30.280556   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.280907   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.281016   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.281058   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.281074   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.281083   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.281341   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.281358   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.281459   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.281511   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.281523   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.281582   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.281595   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.281715   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.281946   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.281986   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.282089   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.282113   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.282682   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.282737   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.295448   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35985
	I0927 00:16:30.295465   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34067
	I0927 00:16:30.295466   22923 main.go:141] libmachine: (addons-364775) Calling .GetState
	I0927 00:16:30.295577   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36439
	I0927 00:16:30.295763   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.295797   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.295961   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.295991   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.296110   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.296144   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.297516   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.297610   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.297662   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.298165   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.298183   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.298204   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.298220   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.298319   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.298333   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.298708   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.298770   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.299230   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.299374   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.299396   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.299799   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.299837   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.320873   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40339
	I0927 00:16:30.321467   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.322017   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.322035   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.322375   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.322557   22923 main.go:141] libmachine: (addons-364775) Calling .GetState
	I0927 00:16:30.324241   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:30.325648   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33733
	I0927 00:16:30.326669   22923 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0927 00:16:30.328052   22923 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0927 00:16:30.328068   22923 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0927 00:16:30.328087   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:30.330977   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.331478   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:30.331497   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.331615   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:30.331743   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34807
	I0927 00:16:30.331928   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:30.331988   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.332189   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:30.332466   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.332484   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.332544   22923 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa Username:docker}
	I0927 00:16:30.332815   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.333331   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.333369   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.333610   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33551
	I0927 00:16:30.334115   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.334676   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.334692   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.334922   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42815
	I0927 00:16:30.335061   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.335224   22923 main.go:141] libmachine: (addons-364775) Calling .GetState
	I0927 00:16:30.335329   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.335871   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.335915   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.337783   22923 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-364775"
	I0927 00:16:30.337824   22923 host.go:66] Checking if "addons-364775" exists ...
	I0927 00:16:30.338180   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.338211   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.341852   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38853
	I0927 00:16:30.341872   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.341955   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.341960   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39421
	I0927 00:16:30.341962   22923 host.go:66] Checking if "addons-364775" exists ...
	I0927 00:16:30.341971   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.342027   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45603
	I0927 00:16:30.342336   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.342379   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.342477   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.343236   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.343339   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34911
	I0927 00:16:30.343344   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.343360   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.343418   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.343490   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.343875   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.343889   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.344011   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.344032   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.344084   22923 main.go:141] libmachine: (addons-364775) Calling .GetState
	I0927 00:16:30.344875   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35565
	I0927 00:16:30.344918   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.344961   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.344877   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.345471   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.345494   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.345704   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.345804   22923 main.go:141] libmachine: (addons-364775) Calling .GetState
	I0927 00:16:30.345923   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.345934   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.346060   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.346070   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.346180   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.346193   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.346254   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.346296   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.346481   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.346533   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:30.346738   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.346786   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.346944   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.347106   22923 main.go:141] libmachine: (addons-364775) Calling .GetState
	I0927 00:16:30.347423   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.347470   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.347990   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.348013   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.348711   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:30.348979   22923 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	I0927 00:16:30.350262   22923 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0927 00:16:30.350711   22923 addons.go:234] Setting addon default-storageclass=true in "addons-364775"
	I0927 00:16:30.350752   22923 host.go:66] Checking if "addons-364775" exists ...
	I0927 00:16:30.351080   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.351116   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.351946   22923 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0927 00:16:30.351964   22923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0927 00:16:30.351981   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:30.352035   22923 out.go:177]   - Using image docker.io/registry:2.8.3
	I0927 00:16:30.353597   22923 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0927 00:16:30.353615   22923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0927 00:16:30.353635   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:30.354349   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44045
	I0927 00:16:30.354872   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.355446   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.355462   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.355832   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.356428   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.356465   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.356580   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.357770   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.357938   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:30.357955   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.358350   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:30.358661   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42581
	I0927 00:16:30.358801   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:30.358854   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:30.358868   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.359073   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.359151   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:30.359281   22923 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa Username:docker}
	I0927 00:16:30.359652   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.359671   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.359714   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:30.360052   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.360131   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:30.360290   22923 main.go:141] libmachine: (addons-364775) Calling .GetState
	I0927 00:16:30.360338   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:30.360850   22923 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa Username:docker}
	I0927 00:16:30.361885   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:30.364200   22923 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 00:16:30.365464   22923 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 00:16:30.365488   22923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0927 00:16:30.365507   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:30.366308   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33067
	I0927 00:16:30.366791   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.367379   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.367401   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.367750   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.367938   22923 main.go:141] libmachine: (addons-364775) Calling .GetState
	I0927 00:16:30.369060   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.369690   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:30.369710   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.370066   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:30.370129   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:30.370398   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:30.370694   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:30.370823   22923 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa Username:docker}
	I0927 00:16:30.371120   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34527
	I0927 00:16:30.371610   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.372218   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.372236   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.372530   22923 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0927 00:16:30.373808   22923 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0927 00:16:30.373825   22923 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0927 00:16:30.373842   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:30.373856   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43903
	I0927 00:16:30.374333   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.374903   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.374922   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.375279   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.375482   22923 main.go:141] libmachine: (addons-364775) Calling .GetState
	I0927 00:16:30.376723   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.377131   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:30.377149   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.377335   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:30.377382   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:30.377887   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:30.378054   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39987
	I0927 00:16:30.378172   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:30.378338   22923 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa Username:docker}
	I0927 00:16:30.378377   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.378649   22923 main.go:141] libmachine: (addons-364775) Calling .GetState
	I0927 00:16:30.378704   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.379547   22923 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0927 00:16:30.379740   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.379756   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.380077   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.380239   22923 main.go:141] libmachine: (addons-364775) Calling .GetState
	I0927 00:16:30.380301   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:30.380762   22923 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0927 00:16:30.380786   22923 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0927 00:16:30.380803   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:30.382146   22923 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0927 00:16:30.383631   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46045
	I0927 00:16:30.383639   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:30.383821   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:30.383832   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:30.383878   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.383963   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.384125   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:30.384142   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.384162   22923 main.go:141] libmachine: (addons-364775) DBG | Closing plugin on server side
	I0927 00:16:30.384182   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:30.384189   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:30.384196   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:30.384202   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:30.384331   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:30.384494   22923 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0927 00:16:30.384504   22923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0927 00:16:30.384518   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:30.384569   22923 main.go:141] libmachine: (addons-364775) DBG | Closing plugin on server side
	I0927 00:16:30.384585   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:30.384591   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	W0927 00:16:30.384653   22923 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0927 00:16:30.384914   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:30.385028   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:30.385170   22923 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa Username:docker}
	I0927 00:16:30.385526   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.385545   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.386176   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.386427   22923 main.go:141] libmachine: (addons-364775) Calling .GetState
	I0927 00:16:30.388475   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37683
	I0927 00:16:30.388774   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.389050   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.389164   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.389180   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.389505   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.389567   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:30.389583   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.390108   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.390148   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.390345   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:30.390534   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:30.390650   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:30.390712   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:30.390753   22923 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa Username:docker}
	I0927 00:16:30.392001   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38025
	I0927 00:16:30.392318   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.392824   22923 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0927 00:16:30.392877   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.392887   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.393218   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.393656   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.393690   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.395193   22923 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0927 00:16:30.395209   22923 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0927 00:16:30.395225   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:30.396435   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44437
	I0927 00:16:30.396951   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.397552   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.397567   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.397947   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.398120   22923 main.go:141] libmachine: (addons-364775) Calling .GetState
	I0927 00:16:30.398753   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.399064   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:30.399083   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.399238   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:30.399500   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:30.399555   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34999
	I0927 00:16:30.399676   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:30.399899   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:30.400083   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.400154   22923 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa Username:docker}
	I0927 00:16:30.400820   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.400837   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.401205   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.401221   22923 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0927 00:16:30.401414   22923 main.go:141] libmachine: (addons-364775) Calling .GetState
	I0927 00:16:30.403906   22923 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0927 00:16:30.404106   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34281
	I0927 00:16:30.404221   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:30.404635   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.404663   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38629
	I0927 00:16:30.405161   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.405182   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.405366   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.405583   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.405846   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:30.405996   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.406014   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.406064   22923 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0927 00:16:30.406314   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.406621   22923 main.go:141] libmachine: (addons-364775) Calling .GetState
	I0927 00:16:30.406763   22923 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0927 00:16:30.407772   22923 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0927 00:16:30.408030   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:30.408199   22923 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0927 00:16:30.408220   22923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0927 00:16:30.408236   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:30.409701   22923 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0927 00:16:30.409716   22923 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0927 00:16:30.411228   22923 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0927 00:16:30.411370   22923 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0927 00:16:30.411387   22923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0927 00:16:30.411406   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:30.411488   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:30.411504   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.411531   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:30.411552   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.411643   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:30.411769   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:30.411918   22923 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa Username:docker}
	I0927 00:16:30.413714   22923 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0927 00:16:30.414595   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.415012   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:30.415065   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.415352   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:30.415527   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:30.415645   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:30.415756   22923 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa Username:docker}
	I0927 00:16:30.416372   22923 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0927 00:16:30.417721   22923 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0927 00:16:30.418988   22923 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0927 00:16:30.420195   22923 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0927 00:16:30.420214   22923 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0927 00:16:30.420244   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:30.422864   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44579
	I0927 00:16:30.423200   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.423340   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.423691   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:30.423710   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.423879   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:30.424016   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.424026   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.424200   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:30.424330   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:30.424366   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.424489   22923 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa Username:docker}
	I0927 00:16:30.424704   22923 main.go:141] libmachine: (addons-364775) Calling .GetState
	I0927 00:16:30.424757   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39415
	I0927 00:16:30.425411   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.425899   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.425917   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.426087   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:30.426195   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.426431   22923 main.go:141] libmachine: (addons-364775) Calling .GetState
	I0927 00:16:30.427706   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:30.427748   22923 out.go:177]   - Using image docker.io/busybox:stable
	I0927 00:16:30.427917   22923 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0927 00:16:30.427928   22923 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0927 00:16:30.427942   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:30.430541   22923 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0927 00:16:30.431106   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.431591   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:30.431613   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.431738   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:30.431872   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:30.431985   22923 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0927 00:16:30.431995   22923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0927 00:16:30.432008   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:30.432009   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:30.432127   22923 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa Username:docker}
	W0927 00:16:30.434191   22923 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:47452->192.168.39.169:22: read: connection reset by peer
	I0927 00:16:30.434217   22923 retry.go:31] will retry after 235.279035ms: ssh: handshake failed: read tcp 192.168.39.1:47452->192.168.39.169:22: read: connection reset by peer
	I0927 00:16:30.434586   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.435008   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:30.435093   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.435225   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:30.435381   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:30.435528   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:30.435630   22923 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa Username:docker}
	I0927 00:16:30.687382   22923 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0927 00:16:30.687407   22923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0927 00:16:30.703808   22923 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 00:16:30.703964   22923 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0927 00:16:30.766082   22923 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0927 00:16:30.766106   22923 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0927 00:16:30.789375   22923 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0927 00:16:30.789397   22923 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0927 00:16:30.817986   22923 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0927 00:16:30.818010   22923 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0927 00:16:30.818453   22923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0927 00:16:30.818687   22923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0927 00:16:30.820723   22923 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0927 00:16:30.820738   22923 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0927 00:16:30.838202   22923 node_ready.go:35] waiting up to 6m0s for node "addons-364775" to be "Ready" ...
	I0927 00:16:30.841116   22923 node_ready.go:49] node "addons-364775" has status "Ready":"True"
	I0927 00:16:30.841135   22923 node_ready.go:38] duration metric: took 2.9055ms for node "addons-364775" to be "Ready" ...
	I0927 00:16:30.841142   22923 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 00:16:30.845387   22923 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0927 00:16:30.845426   22923 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0927 00:16:30.848404   22923 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-gd2h2" in "kube-system" namespace to be "Ready" ...
	I0927 00:16:30.890816   22923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0927 00:16:30.919824   22923 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0927 00:16:30.919846   22923 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0927 00:16:30.923045   22923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0927 00:16:30.930150   22923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0927 00:16:30.969174   22923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 00:16:30.986771   22923 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0927 00:16:30.986796   22923 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0927 00:16:31.024820   22923 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0927 00:16:31.024848   22923 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0927 00:16:31.048974   22923 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0927 00:16:31.048999   22923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0927 00:16:31.060405   22923 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0927 00:16:31.060436   22923 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0927 00:16:31.087170   22923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0927 00:16:31.097441   22923 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 00:16:31.097468   22923 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0927 00:16:31.123704   22923 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0927 00:16:31.123728   22923 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0927 00:16:31.127243   22923 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0927 00:16:31.127257   22923 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0927 00:16:31.181768   22923 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0927 00:16:31.181799   22923 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0927 00:16:31.198013   22923 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0927 00:16:31.198040   22923 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0927 00:16:31.230188   22923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0927 00:16:31.240969   22923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 00:16:31.337457   22923 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0927 00:16:31.337486   22923 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0927 00:16:31.340360   22923 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0927 00:16:31.340378   22923 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0927 00:16:31.357490   22923 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0927 00:16:31.357519   22923 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0927 00:16:31.438275   22923 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0927 00:16:31.438302   22923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0927 00:16:31.479034   22923 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0927 00:16:31.479054   22923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0927 00:16:31.506932   22923 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0927 00:16:31.506952   22923 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0927 00:16:31.551476   22923 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0927 00:16:31.551508   22923 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0927 00:16:31.628698   22923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0927 00:16:31.817687   22923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0927 00:16:31.844064   22923 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0927 00:16:31.844092   22923 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0927 00:16:32.141105   22923 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0927 00:16:32.141141   22923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0927 00:16:32.314746   22923 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0927 00:16:32.314778   22923 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0927 00:16:32.430650   22923 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0927 00:16:32.430679   22923 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0927 00:16:32.500643   22923 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0927 00:16:32.500669   22923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0927 00:16:32.618286   22923 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0927 00:16:32.618306   22923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0927 00:16:32.776416   22923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0927 00:16:32.854014   22923 pod_ready.go:103] pod "coredns-7c65d6cfc9-gd2h2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:16:32.980645   22923 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0927 00:16:32.980665   22923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0927 00:16:32.984476   22923 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.280478347s)
	I0927 00:16:32.984507   22923 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0927 00:16:33.214546   22923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.396058946s)
	I0927 00:16:33.214590   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:33.214603   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:33.214847   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:33.214864   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:33.214872   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:33.214879   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:33.215068   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:33.215082   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:33.399888   22923 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0927 00:16:33.399914   22923 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0927 00:16:33.488059   22923 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-364775" context rescaled to 1 replicas
	I0927 00:16:33.660690   22923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0927 00:16:35.195794   22923 pod_ready.go:103] pod "coredns-7c65d6cfc9-gd2h2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:16:36.275637   22923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.456917037s)
	I0927 00:16:36.275696   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:36.275710   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:36.275974   22923 main.go:141] libmachine: (addons-364775) DBG | Closing plugin on server side
	I0927 00:16:36.275983   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:36.275997   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:36.276006   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:36.276024   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:36.276207   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:36.276219   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:36.365139   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:36.365161   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:36.365407   22923 main.go:141] libmachine: (addons-364775) DBG | Closing plugin on server side
	I0927 00:16:36.365451   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:36.365468   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:37.395951   22923 pod_ready.go:103] pod "coredns-7c65d6cfc9-gd2h2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:16:37.431653   22923 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0927 00:16:37.431693   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:37.434730   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:37.435197   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:37.435228   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:37.435424   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:37.435670   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:37.435829   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:37.436039   22923 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa Username:docker}
	I0927 00:16:37.781071   22923 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0927 00:16:37.864137   22923 addons.go:234] Setting addon gcp-auth=true in "addons-364775"
	I0927 00:16:37.864191   22923 host.go:66] Checking if "addons-364775" exists ...
	I0927 00:16:37.864599   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:37.864634   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:37.880453   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44569
	I0927 00:16:37.881363   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:37.881837   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:37.881864   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:37.882238   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:37.882781   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:37.882817   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:37.897834   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43319
	I0927 00:16:37.898272   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:37.898755   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:37.898780   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:37.899107   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:37.899270   22923 main.go:141] libmachine: (addons-364775) Calling .GetState
	I0927 00:16:37.900885   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:37.901107   22923 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0927 00:16:37.901127   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:37.903699   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:37.904060   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:37.904077   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:37.904235   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:37.904402   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:37.904533   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:37.904663   22923 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa Username:docker}
	I0927 00:16:37.975730   22923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.084875146s)
	I0927 00:16:37.975779   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:37.975780   22923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.05270093s)
	I0927 00:16:37.975818   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:37.975836   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:37.975874   22923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.006677684s)
	I0927 00:16:37.975909   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:37.975920   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:37.975923   22923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.888722405s)
	I0927 00:16:37.975952   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:37.975969   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:37.975818   22923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.045636678s)
	I0927 00:16:37.975983   22923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.745766683s)
	I0927 00:16:37.975995   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:37.976002   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:37.976007   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:37.975792   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:37.976021   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:37.976074   22923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.735080002s)
	I0927 00:16:37.976097   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:37.976107   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:37.976192   22923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.347467613s)
	I0927 00:16:37.976207   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:37.976215   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:37.976527   22923 main.go:141] libmachine: (addons-364775) DBG | Closing plugin on server side
	I0927 00:16:37.976558   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:37.976566   22923 main.go:141] libmachine: (addons-364775) DBG | Closing plugin on server side
	I0927 00:16:37.976571   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:37.976580   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:37.976582   22923 main.go:141] libmachine: (addons-364775) DBG | Closing plugin on server side
	I0927 00:16:37.976587   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:37.976601   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:37.976608   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:37.976615   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:37.976613   22923 main.go:141] libmachine: (addons-364775) DBG | Closing plugin on server side
	I0927 00:16:37.976622   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:37.976647   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:37.976654   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:37.976663   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:37.976666   22923 main.go:141] libmachine: (addons-364775) DBG | Closing plugin on server side
	I0927 00:16:37.976672   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:37.976684   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:37.976691   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:37.976698   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:37.976704   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:37.976738   22923 main.go:141] libmachine: (addons-364775) DBG | Closing plugin on server side
	I0927 00:16:37.976754   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:37.976760   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:37.976797   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:37.976808   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:37.976838   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:37.976846   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:37.976854   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:37.976859   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:37.976875   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:37.976886   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:37.976894   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:37.976901   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:37.977215   22923 main.go:141] libmachine: (addons-364775) DBG | Closing plugin on server side
	I0927 00:16:37.977240   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:37.977246   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:37.977255   22923 addons.go:475] Verifying addon ingress=true in "addons-364775"
	I0927 00:16:37.977437   22923 main.go:141] libmachine: (addons-364775) DBG | Closing plugin on server side
	I0927 00:16:37.977460   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:37.977465   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:37.978150   22923 main.go:141] libmachine: (addons-364775) DBG | Closing plugin on server side
	I0927 00:16:37.978180   22923 main.go:141] libmachine: (addons-364775) DBG | Closing plugin on server side
	I0927 00:16:37.978194   22923 main.go:141] libmachine: (addons-364775) DBG | Closing plugin on server side
	I0927 00:16:37.978192   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:37.978204   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:37.978219   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:37.978225   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:37.978361   22923 main.go:141] libmachine: (addons-364775) DBG | Closing plugin on server side
	I0927 00:16:37.979355   22923 out.go:177] * Verifying ingress addon...
	I0927 00:16:37.979514   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:37.979525   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:37.979768   22923 main.go:141] libmachine: (addons-364775) DBG | Closing plugin on server side
	I0927 00:16:37.979826   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:37.979832   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:37.979841   22923 addons.go:475] Verifying addon metrics-server=true in "addons-364775"
	I0927 00:16:37.980269   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:37.980280   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:37.980288   22923 addons.go:475] Verifying addon registry=true in "addons-364775"
	I0927 00:16:37.980455   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:37.980746   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:37.980760   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:37.980768   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:37.980971   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:37.980987   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:37.981985   22923 out.go:177] * Verifying registry addon...
	I0927 00:16:37.981995   22923 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-364775 service yakd-dashboard -n yakd-dashboard
	
	I0927 00:16:37.982403   22923 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0927 00:16:37.983991   22923 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0927 00:16:38.027140   22923 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0927 00:16:38.027164   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:38.027861   22923 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0927 00:16:38.027884   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:38.131340   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:38.131369   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:38.131619   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:38.131639   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:38.551465   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:38.551901   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:38.905728   22923 pod_ready.go:93] pod "coredns-7c65d6cfc9-gd2h2" in "kube-system" namespace has status "Ready":"True"
	I0927 00:16:38.905752   22923 pod_ready.go:82] duration metric: took 8.057329101s for pod "coredns-7c65d6cfc9-gd2h2" in "kube-system" namespace to be "Ready" ...
	I0927 00:16:38.905762   22923 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-szrc9" in "kube-system" namespace to be "Ready" ...
	I0927 00:16:38.947750   22923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.130011838s)
	W0927 00:16:38.947809   22923 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0927 00:16:38.947833   22923 retry.go:31] will retry after 183.128394ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0927 00:16:38.947854   22923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.171400863s)
	I0927 00:16:38.947898   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:38.947923   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:38.948190   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:38.948207   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:38.948218   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:38.948225   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:38.948480   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:38.948512   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:39.000059   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:39.000476   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:39.132046   22923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0927 00:16:39.490374   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:39.492989   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:39.801849   22923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.14111498s)
	I0927 00:16:39.801914   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:39.801915   22923 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.900787405s)
	I0927 00:16:39.801927   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:39.802242   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:39.802285   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:39.802305   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:39.802318   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:39.802316   22923 main.go:141] libmachine: (addons-364775) DBG | Closing plugin on server side
	I0927 00:16:39.802555   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:39.802569   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:39.802579   22923 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-364775"
	I0927 00:16:39.803411   22923 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0927 00:16:39.804344   22923 out.go:177] * Verifying csi-hostpath-driver addon...
	I0927 00:16:39.806163   22923 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0927 00:16:39.806896   22923 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0927 00:16:39.807410   22923 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0927 00:16:39.807425   22923 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0927 00:16:39.870942   22923 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0927 00:16:39.870973   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:39.953858   22923 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0927 00:16:39.953888   22923 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0927 00:16:39.987421   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:39.990568   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:40.013239   22923 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0927 00:16:40.013265   22923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0927 00:16:40.054642   22923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0927 00:16:40.311779   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:40.487458   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:40.488947   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:40.708018   22923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.575916247s)
	I0927 00:16:40.708075   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:40.708093   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:40.708329   22923 main.go:141] libmachine: (addons-364775) DBG | Closing plugin on server side
	I0927 00:16:40.708410   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:40.708424   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:40.708437   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:40.708458   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:40.708681   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:40.708717   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:40.812167   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:40.918341   22923 pod_ready.go:103] pod "coredns-7c65d6cfc9-szrc9" in "kube-system" namespace has status "Ready":"False"
	I0927 00:16:41.015974   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:41.018484   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:41.070353   22923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.015656922s)
	I0927 00:16:41.070410   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:41.070421   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:41.070658   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:41.070675   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:41.070686   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:41.070694   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:41.070909   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:41.070942   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:41.072773   22923 addons.go:475] Verifying addon gcp-auth=true in "addons-364775"
	I0927 00:16:41.074260   22923 out.go:177] * Verifying gcp-auth addon...
	I0927 00:16:41.077101   22923 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0927 00:16:41.089006   22923 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0927 00:16:41.089060   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:41.319255   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:41.489602   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:41.493367   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:41.589417   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:41.824980   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:42.009117   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:42.009383   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:42.097507   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:42.313572   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:42.412928   22923 pod_ready.go:98] pod "coredns-7c65d6cfc9-szrc9" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 00:16:41 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 00:16:30 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 00:16:30 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 00:16:30 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 00:16:30 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.169 HostIPs:[{IP:192.168.39.169}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-27 00:16:30 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-27 00:16:35 +0000 UTC,FinishedAt:2024-09-27 00:16:41 +0000 UTC,ContainerID:cri-o://cc2d74218c9b7b20949fa941fc7ad8d676be5e7b5aede59713e2f2c6fc72cedf,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://cc2d74218c9b7b20949fa941fc7ad8d676be5e7b5aede59713e2f2c6fc72cedf Started:0xc0022776f0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc000a82370} {Name:kube-api-access-c6xps MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc000a82380}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0927 00:16:42.412956   22923 pod_ready.go:82] duration metric: took 3.507186728s for pod "coredns-7c65d6cfc9-szrc9" in "kube-system" namespace to be "Ready" ...
	E0927 00:16:42.412968   22923 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-szrc9" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 00:16:41 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 00:16:30 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 00:16:30 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 00:16:30 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 00:16:30 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.169 HostIPs:[{IP:192.168.39.169}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-27 00:16:30 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-27 00:16:35 +0000 UTC,FinishedAt:2024-09-27 00:16:41 +0000 UTC,ContainerID:cri-o://cc2d74218c9b7b20949fa941fc7ad8d676be5e7b5aede59713e2f2c6fc72cedf,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://cc2d74218c9b7b20949fa941fc7ad8d676be5e7b5aede59713e2f2c6fc72cedf Started:0xc0022776f0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc000a82370} {Name:kube-api-access-c6xps MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc000a82380}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0927 00:16:42.412977   22923 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-364775" in "kube-system" namespace to be "Ready" ...
	I0927 00:16:42.419963   22923 pod_ready.go:93] pod "etcd-addons-364775" in "kube-system" namespace has status "Ready":"True"
	I0927 00:16:42.419981   22923 pod_ready.go:82] duration metric: took 6.997345ms for pod "etcd-addons-364775" in "kube-system" namespace to be "Ready" ...
	I0927 00:16:42.419989   22923 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-364775" in "kube-system" namespace to be "Ready" ...
	I0927 00:16:42.437266   22923 pod_ready.go:93] pod "kube-apiserver-addons-364775" in "kube-system" namespace has status "Ready":"True"
	I0927 00:16:42.437286   22923 pod_ready.go:82] duration metric: took 17.290515ms for pod "kube-apiserver-addons-364775" in "kube-system" namespace to be "Ready" ...
	I0927 00:16:42.437295   22923 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-364775" in "kube-system" namespace to be "Ready" ...
	I0927 00:16:42.456989   22923 pod_ready.go:93] pod "kube-controller-manager-addons-364775" in "kube-system" namespace has status "Ready":"True"
	I0927 00:16:42.457011   22923 pod_ready.go:82] duration metric: took 19.710449ms for pod "kube-controller-manager-addons-364775" in "kube-system" namespace to be "Ready" ...
	I0927 00:16:42.457022   22923 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vj2cl" in "kube-system" namespace to be "Ready" ...
	I0927 00:16:42.463096   22923 pod_ready.go:93] pod "kube-proxy-vj2cl" in "kube-system" namespace has status "Ready":"True"
	I0927 00:16:42.463112   22923 pod_ready.go:82] duration metric: took 6.084237ms for pod "kube-proxy-vj2cl" in "kube-system" namespace to be "Ready" ...
	I0927 00:16:42.463120   22923 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-364775" in "kube-system" namespace to be "Ready" ...
	I0927 00:16:42.487973   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:42.488283   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:42.581218   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:42.810423   22923 pod_ready.go:93] pod "kube-scheduler-addons-364775" in "kube-system" namespace has status "Ready":"True"
	I0927 00:16:42.810447   22923 pod_ready.go:82] duration metric: took 347.321728ms for pod "kube-scheduler-addons-364775" in "kube-system" namespace to be "Ready" ...
	I0927 00:16:42.810454   22923 pod_ready.go:39] duration metric: took 11.969303463s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 00:16:42.810469   22923 api_server.go:52] waiting for apiserver process to appear ...
	I0927 00:16:42.810514   22923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 00:16:42.814099   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:42.827884   22923 api_server.go:72] duration metric: took 12.569706035s to wait for apiserver process to appear ...
	I0927 00:16:42.827902   22923 api_server.go:88] waiting for apiserver healthz status ...
	I0927 00:16:42.827918   22923 api_server.go:253] Checking apiserver healthz at https://192.168.39.169:8443/healthz ...
	I0927 00:16:42.835431   22923 api_server.go:279] https://192.168.39.169:8443/healthz returned 200:
	ok
	I0927 00:16:42.837096   22923 api_server.go:141] control plane version: v1.31.1
	I0927 00:16:42.837111   22923 api_server.go:131] duration metric: took 9.203783ms to wait for apiserver health ...
	I0927 00:16:42.837119   22923 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 00:16:42.988500   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:42.988911   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:43.087346   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:43.092767   22923 system_pods.go:59] 17 kube-system pods found
	I0927 00:16:43.092791   22923 system_pods.go:61] "coredns-7c65d6cfc9-gd2h2" [4a9f1c5a-89df-497e-a9fa-4a5d427542c0] Running
	I0927 00:16:43.092800   22923 system_pods.go:61] "csi-hostpath-attacher-0" [c4a5feee-cdbf-4a8f-9ab2-d1e28526dc7c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0927 00:16:43.092807   22923 system_pods.go:61] "csi-hostpath-resizer-0" [a9b843e4-fb3e-491a-90a1-05337ec1be6e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0927 00:16:43.092815   22923 system_pods.go:61] "csi-hostpathplugin-5jvjw" [86b14d99-6d05-417f-834c-06b97d3ff358] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0927 00:16:43.092819   22923 system_pods.go:61] "etcd-addons-364775" [c4a11540-824b-46eb-b5ff-16761d78090b] Running
	I0927 00:16:43.092823   22923 system_pods.go:61] "kube-apiserver-addons-364775" [a34af223-8b21-4d2e-acc8-f35f72a84d89] Running
	I0927 00:16:43.092827   22923 system_pods.go:61] "kube-controller-manager-addons-364775" [d41167fe-9862-4644-a4a2-5891b829c263] Running
	I0927 00:16:43.092833   22923 system_pods.go:61] "kube-ingress-dns-minikube" [8bb056cc-4ad8-48da-bad9-aec78168a573] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0927 00:16:43.092836   22923 system_pods.go:61] "kube-proxy-vj2cl" [f2579736-b094-4822-82ce-2ce53d815d92] Running
	I0927 00:16:43.092840   22923 system_pods.go:61] "kube-scheduler-addons-364775" [87532128-92ea-4e82-8f4b-e05bba39380d] Running
	I0927 00:16:43.092849   22923 system_pods.go:61] "metrics-server-84c5f94fbc-h74zz" [1ee23e82-6d41-48b5-a303-16f6ebd60172] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 00:16:43.092855   22923 system_pods.go:61] "nvidia-device-plugin-daemonset-gvjn8" [2de30fac-4d6c-4922-b784-e9801df8f16a] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0927 00:16:43.092862   22923 system_pods.go:61] "registry-66c9cd494c-kdt5f" [652ee744-ff06-40fe-a66f-aabff5476e31] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0927 00:16:43.092867   22923 system_pods.go:61] "registry-proxy-2rlvs" [5080c804-a6a8-4239-bd3f-a89d8f114f0c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0927 00:16:43.092875   22923 system_pods.go:61] "snapshot-controller-56fcc65765-b777z" [beb5ceb2-51fe-49bc-842c-800de73b7628] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0927 00:16:43.092880   22923 system_pods.go:61] "snapshot-controller-56fcc65765-s5z9r" [ba81ccfa-12e1-42cd-a9f0-d1cbff990eb6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0927 00:16:43.092888   22923 system_pods.go:61] "storage-provisioner" [b2787e80-d152-46a1-9672-af83ebbb8e9d] Running
	I0927 00:16:43.092895   22923 system_pods.go:74] duration metric: took 255.770173ms to wait for pod list to return data ...
	I0927 00:16:43.092901   22923 default_sa.go:34] waiting for default service account to be created ...
	I0927 00:16:43.209797   22923 default_sa.go:45] found service account: "default"
	I0927 00:16:43.209820   22923 default_sa.go:55] duration metric: took 116.910938ms for default service account to be created ...
	I0927 00:16:43.209828   22923 system_pods.go:116] waiting for k8s-apps to be running ...
	I0927 00:16:43.311723   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:43.415743   22923 system_pods.go:86] 17 kube-system pods found
	I0927 00:16:43.415771   22923 system_pods.go:89] "coredns-7c65d6cfc9-gd2h2" [4a9f1c5a-89df-497e-a9fa-4a5d427542c0] Running
	I0927 00:16:43.415779   22923 system_pods.go:89] "csi-hostpath-attacher-0" [c4a5feee-cdbf-4a8f-9ab2-d1e28526dc7c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0927 00:16:43.415785   22923 system_pods.go:89] "csi-hostpath-resizer-0" [a9b843e4-fb3e-491a-90a1-05337ec1be6e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0927 00:16:43.415793   22923 system_pods.go:89] "csi-hostpathplugin-5jvjw" [86b14d99-6d05-417f-834c-06b97d3ff358] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0927 00:16:43.415798   22923 system_pods.go:89] "etcd-addons-364775" [c4a11540-824b-46eb-b5ff-16761d78090b] Running
	I0927 00:16:43.415803   22923 system_pods.go:89] "kube-apiserver-addons-364775" [a34af223-8b21-4d2e-acc8-f35f72a84d89] Running
	I0927 00:16:43.415807   22923 system_pods.go:89] "kube-controller-manager-addons-364775" [d41167fe-9862-4644-a4a2-5891b829c263] Running
	I0927 00:16:43.415813   22923 system_pods.go:89] "kube-ingress-dns-minikube" [8bb056cc-4ad8-48da-bad9-aec78168a573] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0927 00:16:43.415817   22923 system_pods.go:89] "kube-proxy-vj2cl" [f2579736-b094-4822-82ce-2ce53d815d92] Running
	I0927 00:16:43.415824   22923 system_pods.go:89] "kube-scheduler-addons-364775" [87532128-92ea-4e82-8f4b-e05bba39380d] Running
	I0927 00:16:43.415829   22923 system_pods.go:89] "metrics-server-84c5f94fbc-h74zz" [1ee23e82-6d41-48b5-a303-16f6ebd60172] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 00:16:43.415837   22923 system_pods.go:89] "nvidia-device-plugin-daemonset-gvjn8" [2de30fac-4d6c-4922-b784-e9801df8f16a] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0927 00:16:43.415842   22923 system_pods.go:89] "registry-66c9cd494c-kdt5f" [652ee744-ff06-40fe-a66f-aabff5476e31] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0927 00:16:43.415848   22923 system_pods.go:89] "registry-proxy-2rlvs" [5080c804-a6a8-4239-bd3f-a89d8f114f0c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0927 00:16:43.415853   22923 system_pods.go:89] "snapshot-controller-56fcc65765-b777z" [beb5ceb2-51fe-49bc-842c-800de73b7628] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0927 00:16:43.415859   22923 system_pods.go:89] "snapshot-controller-56fcc65765-s5z9r" [ba81ccfa-12e1-42cd-a9f0-d1cbff990eb6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0927 00:16:43.415864   22923 system_pods.go:89] "storage-provisioner" [b2787e80-d152-46a1-9672-af83ebbb8e9d] Running
	I0927 00:16:43.415873   22923 system_pods.go:126] duration metric: took 206.040673ms to wait for k8s-apps to be running ...
	I0927 00:16:43.415880   22923 system_svc.go:44] waiting for kubelet service to be running ....
	I0927 00:16:43.415924   22923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 00:16:43.430904   22923 system_svc.go:56] duration metric: took 15.015476ms WaitForService to wait for kubelet
	I0927 00:16:43.430932   22923 kubeadm.go:582] duration metric: took 13.172753467s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 00:16:43.430948   22923 node_conditions.go:102] verifying NodePressure condition ...
	I0927 00:16:43.487452   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:43.487493   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:43.582042   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:43.610676   22923 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 00:16:43.610701   22923 node_conditions.go:123] node cpu capacity is 2
	I0927 00:16:43.610712   22923 node_conditions.go:105] duration metric: took 179.759493ms to run NodePressure ...
	I0927 00:16:43.610722   22923 start.go:241] waiting for startup goroutines ...
	I0927 00:16:43.812000   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:43.992855   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:43.993405   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:44.094833   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:44.312025   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:44.488378   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:44.488875   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:44.580616   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:44.812847   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:44.987339   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:44.987844   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:45.081111   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:45.311986   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:45.488838   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:45.494394   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:45.588405   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:45.812585   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:45.988224   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:45.989896   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:46.082148   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:46.311599   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:46.485928   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:46.488359   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:46.581225   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:46.811437   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:46.986958   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:46.988594   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:47.080381   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:47.311967   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:47.487137   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:47.487881   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:47.580513   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:47.812233   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:47.987205   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:47.988170   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:48.080591   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:48.312071   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:48.487224   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:48.488731   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:48.580104   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:48.811251   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:48.987100   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:48.987514   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:49.080480   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:49.311488   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:49.486957   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:49.488676   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:49.580612   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:49.811224   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:49.990265   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:49.991510   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:50.082172   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:50.313347   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:50.486985   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:50.488717   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:50.582659   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:50.812000   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:50.988005   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:50.988994   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:51.081167   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:51.312257   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:51.486854   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:51.489465   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:51.580795   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:51.812289   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:51.987066   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:51.988257   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:52.081108   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:52.312912   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:52.486985   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:52.488399   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:52.581755   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:52.814422   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:52.987549   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:52.987829   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:53.080678   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:53.314523   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:53.488331   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:53.488764   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:53.580817   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:53.812217   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:53.986729   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:53.988945   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:54.080778   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:54.312205   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:54.486448   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:54.487803   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:54.580761   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:54.811520   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:54.986634   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:54.988978   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:55.080800   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:55.311991   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:55.490944   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:55.493634   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:55.580263   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:55.812139   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:55.987177   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:55.987367   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:56.081310   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:56.311167   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:56.488842   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:56.488988   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:56.581030   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:56.812978   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:57.543832   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:57.543896   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:57.544370   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:57.544723   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:57.550190   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:57.550636   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:57.581484   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:57.811591   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:57.988174   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:57.988206   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:58.081874   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:58.312600   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:58.486504   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:58.487586   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:58.580249   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:58.811581   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:58.986774   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:58.987922   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:59.080834   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:59.311658   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:59.487196   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:59.488229   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:59.580181   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:59.812375   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:59.988448   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:59.988687   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:00.080252   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:00.311409   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:00.487009   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:00.488155   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:00.581280   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:00.811845   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:00.987325   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:00.989570   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:01.080515   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:01.311993   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:01.487850   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:01.489334   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:01.580814   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:01.811806   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:01.986995   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:01.988430   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:02.080254   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:02.311725   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:02.487667   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:02.488220   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:02.580912   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:03.090517   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:03.090639   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:03.091263   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:03.091653   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:03.311887   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:03.487140   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:03.488145   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:03.581320   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:03.811596   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:03.987251   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:03.989014   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:04.081778   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:04.312130   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:04.487412   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:04.488309   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:04.580589   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:04.811892   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:04.987356   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:04.987417   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:05.081474   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:05.311978   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:05.487432   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:05.487863   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:05.580682   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:05.812085   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:05.988000   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:05.988066   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:06.080989   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:06.311398   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:06.486561   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:06.488291   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:06.580935   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:06.813281   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:06.986571   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:06.988032   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:07.080913   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:07.314207   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:07.486814   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:07.488906   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:07.580735   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:07.812650   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:07.986719   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:07.987173   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:08.081186   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:08.311716   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:08.486681   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:08.487853   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:08.580832   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:08.812363   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:08.986729   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:08.988493   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:09.081403   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:09.312278   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:09.485989   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:09.487569   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:09.580021   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:09.810913   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:09.987126   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:09.987866   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:10.080956   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:10.312137   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:10.487288   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:10.488658   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:10.580334   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:10.811041   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:10.987011   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:10.987681   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:11.080105   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:11.311345   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:11.486779   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:11.487979   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:11.581412   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:11.811943   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:11.987698   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:11.988990   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:12.080887   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:12.311909   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:12.489631   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:12.489995   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:12.588488   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:12.811700   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:12.987600   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:12.988206   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:13.081015   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:13.311938   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:13.494362   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:13.494760   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:13.580352   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:13.812378   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:13.986892   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:13.988433   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:14.080520   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:14.312162   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:14.489857   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:14.494879   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:14.581191   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:14.811835   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:14.987031   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:14.988412   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:15.080463   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:15.312254   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:15.492564   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:15.492913   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:15.580514   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:15.811411   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:15.986710   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:15.988183   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:16.082151   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:16.311207   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:16.488013   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:16.488851   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:16.580681   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:16.811685   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:16.987749   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:16.988504   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:17.080470   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:17.311695   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:17.486783   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:17.487109   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:17.581377   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:17.811534   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:17.986726   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:17.987427   22923 kapi.go:107] duration metric: took 40.003435933s to wait for kubernetes.io/minikube-addons=registry ...
	I0927 00:17:18.081888   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:18.312758   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:18.487322   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:18.581069   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:18.811131   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:18.987552   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:19.081741   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:19.312438   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:19.486923   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:19.580490   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:19.811952   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:19.987035   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:20.081683   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:20.311815   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:20.487115   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:20.580786   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:20.812516   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:20.986767   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:21.081624   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:21.499313   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:21.500317   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:21.580769   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:21.812245   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:21.988673   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:22.080678   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:22.312325   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:22.486578   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:22.582419   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:22.811470   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:22.986785   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:23.080233   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:23.311183   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:23.486602   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:23.580948   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:23.812622   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:23.987481   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:24.081064   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:24.310966   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:24.486849   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:24.580734   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:24.811250   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:24.986458   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:25.083062   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:25.312905   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:25.488190   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:25.586419   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:25.812210   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:25.987787   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:26.081106   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:26.310603   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:26.503116   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:26.580733   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:26.812493   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:26.987376   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:27.080712   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:27.312863   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:27.486929   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:27.581037   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:27.811603   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:27.987405   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:28.080637   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:28.311085   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:28.486056   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:28.580113   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:28.811368   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:28.986515   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:29.081058   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:29.311442   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:29.486947   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:29.580754   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:29.811655   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:29.987571   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:30.080977   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:30.312032   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:30.486723   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:30.581611   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:30.811778   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:30.987653   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:31.084236   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:31.311594   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:31.486542   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:31.581512   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:31.826040   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:31.987096   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:32.080580   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:32.312000   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:32.487673   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:32.581375   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:32.812041   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:32.988980   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:33.090694   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:33.312326   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:33.488231   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:33.580777   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:33.811345   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:33.986236   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:34.081390   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:34.312086   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:34.487244   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:34.581175   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:34.813913   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:34.991040   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:35.090876   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:35.313501   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:35.486433   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:35.583246   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:35.811699   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:35.987680   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:36.080748   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:36.328503   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:36.488009   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:36.581253   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:36.810998   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:36.987755   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:37.080636   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:37.311688   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:37.486973   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:37.580599   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:37.812272   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:37.986591   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:38.081184   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:38.311337   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:38.487175   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:38.581016   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:38.813136   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:38.987107   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:39.080496   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:39.312041   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:39.486941   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:39.587727   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:39.811898   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:39.988300   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:40.081007   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:40.312655   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:40.486841   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:40.583017   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:40.814862   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:40.991378   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:41.084949   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:41.312488   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:41.486705   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:41.583208   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:41.812185   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:41.987474   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:42.081648   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:42.320540   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:42.487828   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:42.588281   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:42.811937   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:42.987008   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:43.081062   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:43.312344   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:43.489462   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:43.580778   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:43.812433   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:43.987514   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:44.087429   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:44.315287   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:44.487711   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:44.580200   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:44.811873   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:45.000196   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:45.080558   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:45.314997   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:45.492610   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:45.581681   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:45.815128   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:45.987137   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:46.080783   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:46.312557   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:46.487720   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:46.583038   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:46.812051   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:46.986544   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:47.081350   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:47.311599   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:47.487110   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:47.580700   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:47.812997   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:47.986922   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:48.080420   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:48.311397   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:48.486365   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:48.581127   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:48.815408   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:48.987143   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:49.080998   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:49.312595   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:49.486745   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:49.581175   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:49.812100   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:49.986765   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:50.080703   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:50.312173   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:50.487469   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:50.580789   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:50.813167   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:51.004072   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:51.082921   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:51.315081   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:51.486907   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:51.582951   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:51.812667   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:51.986763   22923 kapi.go:107] duration metric: took 1m14.004357399s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0927 00:17:52.081726   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:52.312108   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:52.581247   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:52.811383   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:53.081164   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:53.311077   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:53.580614   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:53.811860   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:54.085731   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:54.311903   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:54.581015   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:54.812698   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:55.080114   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:55.312140   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:55.580929   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:55.812076   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:56.080795   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:56.315916   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:56.580324   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:56.813652   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:57.081490   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:57.318121   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:57.580543   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:57.813190   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:58.081274   22923 kapi.go:107] duration metric: took 1m17.004168732s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0927 00:17:58.083013   22923 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-364775 cluster.
	I0927 00:17:58.084321   22923 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0927 00:17:58.085650   22923 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
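	A minimal sketch of the opt-out described in the messages above, assuming the label value is simply "true" (the log only names the key) and using a hypothetical pod name and image for illustration:

	  # pod the gcp-auth webhook should skip; label key from the message above, value assumed
	  kubectl --context addons-364775 run skip-demo --image=nginx --labels=gcp-auth-skip-secret=true
	  # per the note above, pods that already exist would need to be recreated
	  # (or the addon re-enabled with --refresh) before credential mounting changes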
	I0927 00:17:58.311273   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:58.813554   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:59.314920   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:59.811122   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:18:00.312742   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:18:00.813283   22923 kapi.go:107] duration metric: took 1m21.006383462s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0927 00:18:00.814917   22923 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner-rancher, storage-provisioner, ingress-dns, nvidia-device-plugin, metrics-server, yakd, default-storageclass, inspektor-gadget, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0927 00:18:00.816192   22923 addons.go:510] duration metric: took 1m30.557986461s for enable addons: enabled=[cloud-spanner storage-provisioner-rancher storage-provisioner ingress-dns nvidia-device-plugin metrics-server yakd default-storageclass inspektor-gadget volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0927 00:18:00.816230   22923 start.go:246] waiting for cluster config update ...
	I0927 00:18:00.816255   22923 start.go:255] writing updated cluster config ...
	I0927 00:18:00.816798   22923 ssh_runner.go:195] Run: rm -f paused
	I0927 00:18:00.876391   22923 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0927 00:18:00.878075   22923 out.go:177] * Done! kubectl is now configured to use "addons-364775" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 27 00:29:22 addons-364775 crio[667]: time="2024-09-27 00:29:22.864605298Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727396962864580124,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:563692,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=442ee9e0-c5ec-403f-9024-74a428210c3a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:29:22 addons-364775 crio[667]: time="2024-09-27 00:29:22.865437629Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b06857f1-d4f2-425e-b096-d74b8022306a name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:29:22 addons-364775 crio[667]: time="2024-09-27 00:29:22.865494235Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b06857f1-d4f2-425e-b096-d74b8022306a name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:29:22 addons-364775 crio[667]: time="2024-09-27 00:29:22.865796016Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9758e9a4411fe087bc8831762671c4f6b47d76e38e4273fca5dd22b8a7456278,PodSandboxId:648120743e719c8b7d3a098c00d3960cf85955cdc24c522fa67cade5840d070a,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1727396955996124844,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-x9hv6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 86a23b4f-e160-433b-b168-d9458fb8b1de,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34468cf471df6b4d1719cac0509d0ac2e68794dbbb2e0bd0454bed19262aac76,PodSandboxId:d1dd36f55b9f4df75602b762e9d7c54990b8b804646bcd7232366294a7a8a44d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727396817750350677,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 79a9bd72-f93d-4276-b274-754e05f94f32,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44f5c0760c47e0ae8b4f8bae5ad90bd953ca8d8938486256754d700af225e8fe,PodSandboxId:3f91389aebb948a4455c2f88073d3e783525caebdf4a263e7236841b5bb1afd5,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1727396277275483601,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-xndcj,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 8f6a3c0b-7425-4b56-b74c-882bc39a365a,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0b6435d45d86ba1d6dc39dd8ceacf1c2a8cbac00201479cc3af3beb3a8bc465,PodSandboxId:5c06cf398461a183eaff98db964c09924a4d3e7240409efe4821afef6a8ab082,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727396253074028946,Labels:map[string]string{io.kubernetes.container.name: patch,io.
kubernetes.pod.name: ingress-nginx-admission-patch-ljq5t,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2f172857-2e68-4af5-8e3b-01d68b6db792,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66ac2c2cec7c0d94b29d54fffae2e0817f1b1e23b57db1ef3954fdf0b7868f97,PodSandboxId:136447c84e896287a7df88ae9c408a61002b340122e57e84c8953edac27c8d14,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727396252934896617,Labels:map[string]string{io.
kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-s9h7h,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 186ee242-70b2-44fa-97c3-6e02dbe6c6db,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77e2cbcfd0c9c671e3819d532fbc1eb140f08a91746f385066cfa7816bb23f31,PodSandboxId:e55373ee380963dbf7c0993260242c8962e6b10c6ce9d89e167afcab86ae1828,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:172739
6201136847565,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-h74zz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ee23e82-6d41-48b5-a303-16f6ebd60172,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2392c10311ecba4ad854e936976dfeca45567492e61de8604f1324981400707e,PodSandboxId:c88fbf538e03933e6e355ca88933702b9d752071bbf75429d386ee325a9ded3b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628d
b3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727396197101745329,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2787e80-d152-46a1-9672-af83ebbb8e9d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb092a183ee879a4948c4ef6efe4289548da1f2948fe91a1b2ef6ac8db5a62a2,PodSandboxId:9c525627d0e811d0f823065b6bbe1f17c4cfb5fbc4689f3775ccb5749a360d32,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724
c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727396195134163553,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gd2h2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a9f1c5a-89df-497e-a9fa-4a5d427542c0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa7e6a02565d07c2042b8e4832d33799151a9b767813a6f56f5ad935f6f92586,PodSandboxId:24f13f826689a603fa3389d546afc6e1932efac63d260ce80320c7c00e451ff7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image
:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727396190965652931,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vj2cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2579736-b094-4822-82ce-2ce53d815d92,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee201c0719a52c59263614ccb1b06b1ed92df1c3e374d2bec21766eef5129754,PodSandboxId:27e25445505608eef7b597a702838b106cc52f7032b8da7078df79fcaa090c65,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915
af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727396180016263956,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-364775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 189875bacab913074c40f02258ce917c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:941f64fde84f05119ee38d1a5464cd871c06b706b54fa1fe284535e8214009c8,PodSandboxId:4602faee6ddead3caef7fcd709a94705f68ab971149a7cb0ff5949d9d9af4260,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea2
9d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727396179965876700,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-364775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e6f174888739dcf82da53be270fcf0b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d21d052488b358d50d3915ffdf2b08eee589a26c15c59d3f1480ede3811db54,PodSandboxId:6bb1edfce2faf865f3ed5b681c2fcb8082f56cd827dd4c23ed98c03d31ab4dfa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotation
s:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727396179941558858,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-364775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8210072b33b53cf82c21ea71cd377f9f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02d48ea4cc0d31074e83240e2912b935fa3a7e4030e676e56a97fdf651652bee,PodSandboxId:81dc5c65d7d85ee3fc141806c54fe9d5547728bad51f9d951bd05c464b6ee1f3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[st
ring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727396179935671742,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-364775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25c3dce61f3e473bca9c62fbb58b9036,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b06857f1-d4f2-425e-b096-d74b8022306a name=/runtime.v1.RuntimeService/ListContainers
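	The CRI RPCs traced in these entries (Version, ImageFsInfo, and an unfiltered ListContainers) map onto standard crictl subcommands; as a minimal sketch, assuming crictl is available on the addons-364775 node (for example via minikube ssh) and pointed at the CRI-O socket:

	  crictl version       # /runtime.v1.RuntimeService/Version
	  crictl imagefsinfo   # /runtime.v1.ImageService/ImageFsInfo
	  crictl ps -a         # /runtime.v1.RuntimeService/ListContainers, full (unfiltered) list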
	Sep 27 00:29:22 addons-364775 crio[667]: time="2024-09-27 00:29:22.906343124Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fc627422-5022-484b-b398-a72aff3bec0c name=/runtime.v1.RuntimeService/Version
	Sep 27 00:29:22 addons-364775 crio[667]: time="2024-09-27 00:29:22.906434899Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fc627422-5022-484b-b398-a72aff3bec0c name=/runtime.v1.RuntimeService/Version
	Sep 27 00:29:22 addons-364775 crio[667]: time="2024-09-27 00:29:22.908275682Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=788f2aa7-8b4e-4192-89f4-222fd984bec4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:29:22 addons-364775 crio[667]: time="2024-09-27 00:29:22.910371548Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727396962910344133,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:563692,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=788f2aa7-8b4e-4192-89f4-222fd984bec4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:29:22 addons-364775 crio[667]: time="2024-09-27 00:29:22.911101500Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1ebce14e-c65b-4494-9097-e60e6aab61ef name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:29:22 addons-364775 crio[667]: time="2024-09-27 00:29:22.911154678Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1ebce14e-c65b-4494-9097-e60e6aab61ef name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:29:22 addons-364775 crio[667]: time="2024-09-27 00:29:22.911576830Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9758e9a4411fe087bc8831762671c4f6b47d76e38e4273fca5dd22b8a7456278,PodSandboxId:648120743e719c8b7d3a098c00d3960cf85955cdc24c522fa67cade5840d070a,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1727396955996124844,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-x9hv6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 86a23b4f-e160-433b-b168-d9458fb8b1de,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34468cf471df6b4d1719cac0509d0ac2e68794dbbb2e0bd0454bed19262aac76,PodSandboxId:d1dd36f55b9f4df75602b762e9d7c54990b8b804646bcd7232366294a7a8a44d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727396817750350677,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 79a9bd72-f93d-4276-b274-754e05f94f32,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44f5c0760c47e0ae8b4f8bae5ad90bd953ca8d8938486256754d700af225e8fe,PodSandboxId:3f91389aebb948a4455c2f88073d3e783525caebdf4a263e7236841b5bb1afd5,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1727396277275483601,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-xndcj,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 8f6a3c0b-7425-4b56-b74c-882bc39a365a,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0b6435d45d86ba1d6dc39dd8ceacf1c2a8cbac00201479cc3af3beb3a8bc465,PodSandboxId:5c06cf398461a183eaff98db964c09924a4d3e7240409efe4821afef6a8ab082,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727396253074028946,Labels:map[string]string{io.kubernetes.container.name: patch,io.
kubernetes.pod.name: ingress-nginx-admission-patch-ljq5t,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2f172857-2e68-4af5-8e3b-01d68b6db792,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66ac2c2cec7c0d94b29d54fffae2e0817f1b1e23b57db1ef3954fdf0b7868f97,PodSandboxId:136447c84e896287a7df88ae9c408a61002b340122e57e84c8953edac27c8d14,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727396252934896617,Labels:map[string]string{io.
kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-s9h7h,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 186ee242-70b2-44fa-97c3-6e02dbe6c6db,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77e2cbcfd0c9c671e3819d532fbc1eb140f08a91746f385066cfa7816bb23f31,PodSandboxId:e55373ee380963dbf7c0993260242c8962e6b10c6ce9d89e167afcab86ae1828,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:172739
6201136847565,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-h74zz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ee23e82-6d41-48b5-a303-16f6ebd60172,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2392c10311ecba4ad854e936976dfeca45567492e61de8604f1324981400707e,PodSandboxId:c88fbf538e03933e6e355ca88933702b9d752071bbf75429d386ee325a9ded3b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628d
b3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727396197101745329,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2787e80-d152-46a1-9672-af83ebbb8e9d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb092a183ee879a4948c4ef6efe4289548da1f2948fe91a1b2ef6ac8db5a62a2,PodSandboxId:9c525627d0e811d0f823065b6bbe1f17c4cfb5fbc4689f3775ccb5749a360d32,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724
c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727396195134163553,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gd2h2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a9f1c5a-89df-497e-a9fa-4a5d427542c0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa7e6a02565d07c2042b8e4832d33799151a9b767813a6f56f5ad935f6f92586,PodSandboxId:24f13f826689a603fa3389d546afc6e1932efac63d260ce80320c7c00e451ff7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image
:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727396190965652931,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vj2cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2579736-b094-4822-82ce-2ce53d815d92,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee201c0719a52c59263614ccb1b06b1ed92df1c3e374d2bec21766eef5129754,PodSandboxId:27e25445505608eef7b597a702838b106cc52f7032b8da7078df79fcaa090c65,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915
af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727396180016263956,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-364775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 189875bacab913074c40f02258ce917c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:941f64fde84f05119ee38d1a5464cd871c06b706b54fa1fe284535e8214009c8,PodSandboxId:4602faee6ddead3caef7fcd709a94705f68ab971149a7cb0ff5949d9d9af4260,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea2
9d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727396179965876700,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-364775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e6f174888739dcf82da53be270fcf0b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d21d052488b358d50d3915ffdf2b08eee589a26c15c59d3f1480ede3811db54,PodSandboxId:6bb1edfce2faf865f3ed5b681c2fcb8082f56cd827dd4c23ed98c03d31ab4dfa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotation
s:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727396179941558858,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-364775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8210072b33b53cf82c21ea71cd377f9f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02d48ea4cc0d31074e83240e2912b935fa3a7e4030e676e56a97fdf651652bee,PodSandboxId:81dc5c65d7d85ee3fc141806c54fe9d5547728bad51f9d951bd05c464b6ee1f3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[st
ring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727396179935671742,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-364775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25c3dce61f3e473bca9c62fbb58b9036,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1ebce14e-c65b-4494-9097-e60e6aab61ef name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:29:22 addons-364775 crio[667]: time="2024-09-27 00:29:22.945243103Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=40f74cee-3c11-47c6-b052-421e8ee32d88 name=/runtime.v1.RuntimeService/Version
	Sep 27 00:29:22 addons-364775 crio[667]: time="2024-09-27 00:29:22.945314358Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=40f74cee-3c11-47c6-b052-421e8ee32d88 name=/runtime.v1.RuntimeService/Version
	Sep 27 00:29:22 addons-364775 crio[667]: time="2024-09-27 00:29:22.951132183Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=54b5fa0d-1fb2-4367-9e85-e45458fdebd9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:29:22 addons-364775 crio[667]: time="2024-09-27 00:29:22.952386026Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727396962952359087,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:563692,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=54b5fa0d-1fb2-4367-9e85-e45458fdebd9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:29:22 addons-364775 crio[667]: time="2024-09-27 00:29:22.952909582Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6a752566-57cf-4553-a5fc-4b4c0ce1a2e6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:29:22 addons-364775 crio[667]: time="2024-09-27 00:29:22.953033826Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6a752566-57cf-4553-a5fc-4b4c0ce1a2e6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:29:22 addons-364775 crio[667]: time="2024-09-27 00:29:22.953328314Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9758e9a4411fe087bc8831762671c4f6b47d76e38e4273fca5dd22b8a7456278,PodSandboxId:648120743e719c8b7d3a098c00d3960cf85955cdc24c522fa67cade5840d070a,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1727396955996124844,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-x9hv6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 86a23b4f-e160-433b-b168-d9458fb8b1de,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34468cf471df6b4d1719cac0509d0ac2e68794dbbb2e0bd0454bed19262aac76,PodSandboxId:d1dd36f55b9f4df75602b762e9d7c54990b8b804646bcd7232366294a7a8a44d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727396817750350677,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 79a9bd72-f93d-4276-b274-754e05f94f32,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44f5c0760c47e0ae8b4f8bae5ad90bd953ca8d8938486256754d700af225e8fe,PodSandboxId:3f91389aebb948a4455c2f88073d3e783525caebdf4a263e7236841b5bb1afd5,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1727396277275483601,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-xndcj,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 8f6a3c0b-7425-4b56-b74c-882bc39a365a,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0b6435d45d86ba1d6dc39dd8ceacf1c2a8cbac00201479cc3af3beb3a8bc465,PodSandboxId:5c06cf398461a183eaff98db964c09924a4d3e7240409efe4821afef6a8ab082,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727396253074028946,Labels:map[string]string{io.kubernetes.container.name: patch,io.
kubernetes.pod.name: ingress-nginx-admission-patch-ljq5t,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2f172857-2e68-4af5-8e3b-01d68b6db792,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66ac2c2cec7c0d94b29d54fffae2e0817f1b1e23b57db1ef3954fdf0b7868f97,PodSandboxId:136447c84e896287a7df88ae9c408a61002b340122e57e84c8953edac27c8d14,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727396252934896617,Labels:map[string]string{io.
kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-s9h7h,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 186ee242-70b2-44fa-97c3-6e02dbe6c6db,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77e2cbcfd0c9c671e3819d532fbc1eb140f08a91746f385066cfa7816bb23f31,PodSandboxId:e55373ee380963dbf7c0993260242c8962e6b10c6ce9d89e167afcab86ae1828,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:172739
6201136847565,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-h74zz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ee23e82-6d41-48b5-a303-16f6ebd60172,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2392c10311ecba4ad854e936976dfeca45567492e61de8604f1324981400707e,PodSandboxId:c88fbf538e03933e6e355ca88933702b9d752071bbf75429d386ee325a9ded3b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628d
b3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727396197101745329,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2787e80-d152-46a1-9672-af83ebbb8e9d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb092a183ee879a4948c4ef6efe4289548da1f2948fe91a1b2ef6ac8db5a62a2,PodSandboxId:9c525627d0e811d0f823065b6bbe1f17c4cfb5fbc4689f3775ccb5749a360d32,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724
c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727396195134163553,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gd2h2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a9f1c5a-89df-497e-a9fa-4a5d427542c0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa7e6a02565d07c2042b8e4832d33799151a9b767813a6f56f5ad935f6f92586,PodSandboxId:24f13f826689a603fa3389d546afc6e1932efac63d260ce80320c7c00e451ff7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image
:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727396190965652931,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vj2cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2579736-b094-4822-82ce-2ce53d815d92,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee201c0719a52c59263614ccb1b06b1ed92df1c3e374d2bec21766eef5129754,PodSandboxId:27e25445505608eef7b597a702838b106cc52f7032b8da7078df79fcaa090c65,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915
af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727396180016263956,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-364775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 189875bacab913074c40f02258ce917c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:941f64fde84f05119ee38d1a5464cd871c06b706b54fa1fe284535e8214009c8,PodSandboxId:4602faee6ddead3caef7fcd709a94705f68ab971149a7cb0ff5949d9d9af4260,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea2
9d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727396179965876700,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-364775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e6f174888739dcf82da53be270fcf0b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d21d052488b358d50d3915ffdf2b08eee589a26c15c59d3f1480ede3811db54,PodSandboxId:6bb1edfce2faf865f3ed5b681c2fcb8082f56cd827dd4c23ed98c03d31ab4dfa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotation
s:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727396179941558858,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-364775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8210072b33b53cf82c21ea71cd377f9f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02d48ea4cc0d31074e83240e2912b935fa3a7e4030e676e56a97fdf651652bee,PodSandboxId:81dc5c65d7d85ee3fc141806c54fe9d5547728bad51f9d951bd05c464b6ee1f3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[st
ring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727396179935671742,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-364775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25c3dce61f3e473bca9c62fbb58b9036,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6a752566-57cf-4553-a5fc-4b4c0ce1a2e6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:29:22 addons-364775 crio[667]: time="2024-09-27 00:29:22.990129432Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a54a2e42-543a-42f4-ac5b-bbdabcfbfe1b name=/runtime.v1.RuntimeService/Version
	Sep 27 00:29:22 addons-364775 crio[667]: time="2024-09-27 00:29:22.990201112Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a54a2e42-543a-42f4-ac5b-bbdabcfbfe1b name=/runtime.v1.RuntimeService/Version
	Sep 27 00:29:22 addons-364775 crio[667]: time="2024-09-27 00:29:22.991087320Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8ac996ef-14bd-4a2c-b760-382346492418 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:29:22 addons-364775 crio[667]: time="2024-09-27 00:29:22.992213243Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727396962992185558,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:563692,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8ac996ef-14bd-4a2c-b760-382346492418 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:29:22 addons-364775 crio[667]: time="2024-09-27 00:29:22.992734970Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1875f5ad-9211-4e14-9e4b-d66bcdce59a8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:29:22 addons-364775 crio[667]: time="2024-09-27 00:29:22.992816919Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1875f5ad-9211-4e14-9e4b-d66bcdce59a8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:29:22 addons-364775 crio[667]: time="2024-09-27 00:29:22.993206706Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9758e9a4411fe087bc8831762671c4f6b47d76e38e4273fca5dd22b8a7456278,PodSandboxId:648120743e719c8b7d3a098c00d3960cf85955cdc24c522fa67cade5840d070a,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1727396955996124844,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-x9hv6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 86a23b4f-e160-433b-b168-d9458fb8b1de,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34468cf471df6b4d1719cac0509d0ac2e68794dbbb2e0bd0454bed19262aac76,PodSandboxId:d1dd36f55b9f4df75602b762e9d7c54990b8b804646bcd7232366294a7a8a44d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727396817750350677,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 79a9bd72-f93d-4276-b274-754e05f94f32,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44f5c0760c47e0ae8b4f8bae5ad90bd953ca8d8938486256754d700af225e8fe,PodSandboxId:3f91389aebb948a4455c2f88073d3e783525caebdf4a263e7236841b5bb1afd5,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1727396277275483601,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-xndcj,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 8f6a3c0b-7425-4b56-b74c-882bc39a365a,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0b6435d45d86ba1d6dc39dd8ceacf1c2a8cbac00201479cc3af3beb3a8bc465,PodSandboxId:5c06cf398461a183eaff98db964c09924a4d3e7240409efe4821afef6a8ab082,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727396253074028946,Labels:map[string]string{io.kubernetes.container.name: patch,io.
kubernetes.pod.name: ingress-nginx-admission-patch-ljq5t,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2f172857-2e68-4af5-8e3b-01d68b6db792,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66ac2c2cec7c0d94b29d54fffae2e0817f1b1e23b57db1ef3954fdf0b7868f97,PodSandboxId:136447c84e896287a7df88ae9c408a61002b340122e57e84c8953edac27c8d14,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727396252934896617,Labels:map[string]string{io.
kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-s9h7h,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 186ee242-70b2-44fa-97c3-6e02dbe6c6db,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77e2cbcfd0c9c671e3819d532fbc1eb140f08a91746f385066cfa7816bb23f31,PodSandboxId:e55373ee380963dbf7c0993260242c8962e6b10c6ce9d89e167afcab86ae1828,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:172739
6201136847565,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-h74zz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ee23e82-6d41-48b5-a303-16f6ebd60172,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2392c10311ecba4ad854e936976dfeca45567492e61de8604f1324981400707e,PodSandboxId:c88fbf538e03933e6e355ca88933702b9d752071bbf75429d386ee325a9ded3b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628d
b3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727396197101745329,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2787e80-d152-46a1-9672-af83ebbb8e9d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb092a183ee879a4948c4ef6efe4289548da1f2948fe91a1b2ef6ac8db5a62a2,PodSandboxId:9c525627d0e811d0f823065b6bbe1f17c4cfb5fbc4689f3775ccb5749a360d32,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724
c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727396195134163553,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gd2h2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a9f1c5a-89df-497e-a9fa-4a5d427542c0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa7e6a02565d07c2042b8e4832d33799151a9b767813a6f56f5ad935f6f92586,PodSandboxId:24f13f826689a603fa3389d546afc6e1932efac63d260ce80320c7c00e451ff7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image
:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727396190965652931,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vj2cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2579736-b094-4822-82ce-2ce53d815d92,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee201c0719a52c59263614ccb1b06b1ed92df1c3e374d2bec21766eef5129754,PodSandboxId:27e25445505608eef7b597a702838b106cc52f7032b8da7078df79fcaa090c65,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915
af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727396180016263956,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-364775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 189875bacab913074c40f02258ce917c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:941f64fde84f05119ee38d1a5464cd871c06b706b54fa1fe284535e8214009c8,PodSandboxId:4602faee6ddead3caef7fcd709a94705f68ab971149a7cb0ff5949d9d9af4260,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea2
9d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727396179965876700,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-364775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e6f174888739dcf82da53be270fcf0b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d21d052488b358d50d3915ffdf2b08eee589a26c15c59d3f1480ede3811db54,PodSandboxId:6bb1edfce2faf865f3ed5b681c2fcb8082f56cd827dd4c23ed98c03d31ab4dfa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotation
s:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727396179941558858,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-364775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8210072b33b53cf82c21ea71cd377f9f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02d48ea4cc0d31074e83240e2912b935fa3a7e4030e676e56a97fdf651652bee,PodSandboxId:81dc5c65d7d85ee3fc141806c54fe9d5547728bad51f9d951bd05c464b6ee1f3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[st
ring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727396179935671742,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-364775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25c3dce61f3e473bca9c62fbb58b9036,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1875f5ad-9211-4e14-9e4b-d66bcdce59a8 name=/runtime.v1.RuntimeService/ListContainers
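
The exchange above is the CRI gRPC surface that the kubelet and crictl drive against crio. A minimal Go sketch of the same ListContainers call, assuming the crio socket path from the node's cri-socket annotation; module setup and error handling are illustrative, not taken from the test run:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial the same unix socket crio serves the runtime.v1 services on.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// An empty filter is what produces the "No filters were applied,
	// returning full container list" debug line in the crio log above.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Println(c.Metadata.Name, c.State)
	}
}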
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9758e9a4411fe       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        7 seconds ago       Running             hello-world-app           0                   648120743e719       hello-world-app-55bf9c44b4-x9hv6
	34468cf471df6       docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56                              2 minutes ago       Running             nginx                     0                   d1dd36f55b9f4       nginx
	44f5c0760c47e       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 11 minutes ago      Running             gcp-auth                  0                   3f91389aebb94       gcp-auth-89d5ffd79-xndcj
	e0b6435d45d86       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   11 minutes ago      Exited              patch                     0                   5c06cf398461a       ingress-nginx-admission-patch-ljq5t
	66ac2c2cec7c0       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   11 minutes ago      Exited              create                    0                   136447c84e896       ingress-nginx-admission-create-s9h7h
	77e2cbcfd0c9c       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        12 minutes ago      Running             metrics-server            0                   e55373ee38096       metrics-server-84c5f94fbc-h74zz
	2392c10311ecb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             12 minutes ago      Running             storage-provisioner       0                   c88fbf538e039       storage-provisioner
	eb092a183ee87       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             12 minutes ago      Running             coredns                   0                   9c525627d0e81       coredns-7c65d6cfc9-gd2h2
	fa7e6a02565d0       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                             12 minutes ago      Running             kube-proxy                0                   24f13f826689a       kube-proxy-vj2cl
	ee201c0719a52       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             13 minutes ago      Running             etcd                      0                   27e2544550560       etcd-addons-364775
	941f64fde84f0       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                             13 minutes ago      Running             kube-apiserver            0                   4602faee6ddea       kube-apiserver-addons-364775
	7d21d052488b3       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                             13 minutes ago      Running             kube-scheduler            0                   6bb1edfce2faf       kube-scheduler-addons-364775
	02d48ea4cc0d3       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                             13 minutes ago      Running             kube-controller-manager   0                   81dc5c65d7d85       kube-controller-manager-addons-364775
	
	
	==> coredns [eb092a183ee879a4948c4ef6efe4289548da1f2948fe91a1b2ef6ac8db5a62a2] <==
	[INFO] 127.0.0.1:50766 - 8775 "HINFO IN 3569014972345960485.1862048380583480753. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014022704s
	[INFO] 10.244.0.7:39054 - 16199 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 97 false 1232" NXDOMAIN qr,aa,rd 179 0.000318748s
	[INFO] 10.244.0.7:39054 - 31015 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 97 false 1232" NXDOMAIN qr,aa,rd 179 0.000093499s
	[INFO] 10.244.0.7:39054 - 24769 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000150069s
	[INFO] 10.244.0.7:39054 - 3407 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000172928s
	[INFO] 10.244.0.7:39054 - 53162 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000097552s
	[INFO] 10.244.0.7:39054 - 32704 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.00006962s
	[INFO] 10.244.0.7:39054 - 46163 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000114352s
	[INFO] 10.244.0.7:39054 - 45726 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000079808s
	[INFO] 10.244.0.7:55575 - 58922 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000122896s
	[INFO] 10.244.0.7:55575 - 58635 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000056553s
	[INFO] 10.244.0.7:34701 - 2635 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000052467s
	[INFO] 10.244.0.7:34701 - 2443 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000088571s
	[INFO] 10.244.0.7:53770 - 29791 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000083808s
	[INFO] 10.244.0.7:53770 - 29618 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000043278s
	[INFO] 10.244.0.7:51278 - 32481 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000061908s
	[INFO] 10.244.0.7:51278 - 32630 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00010053s
	[INFO] 10.244.0.21:39399 - 32421 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000626795s
	[INFO] 10.244.0.21:51047 - 35722 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000173759s
	[INFO] 10.244.0.21:59883 - 41503 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000105903s
	[INFO] 10.244.0.21:43597 - 17694 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000060022s
	[INFO] 10.244.0.21:58239 - 38522 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000106047s
	[INFO] 10.244.0.21:38772 - 6309 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000376339s
	[INFO] 10.244.0.21:41727 - 3859 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001416366s
	[INFO] 10.244.0.21:49529 - 27922 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.001747962s
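
The NXDOMAIN/NOERROR pairs above are the normal ndots:5 search-path expansion: registry.kube-system.svc.cluster.local has only four dots, so the pod's stub resolver tries each resolv.conf search suffix before the absolute name. A minimal Go sketch of the lookup that drives this pattern from inside a pod; the name mirrors the registry test, the rest is illustrative:

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()

	// Inside the cluster this name has fewer dots than ndots:5, so the stub
	// resolver first appends kube-system.svc.cluster.local, svc.cluster.local
	// and cluster.local (the NXDOMAIN lines) before the FQDN answers NOERROR.
	addrs, err := net.DefaultResolver.LookupHost(ctx, "registry.kube-system.svc.cluster.local")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println(addrs)
}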
	
	
	==> describe nodes <==
	Name:               addons-364775
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-364775
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=addons-364775
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_27T00_16_25_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-364775
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 00:16:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-364775
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 00:29:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 00:27:29 +0000   Fri, 27 Sep 2024 00:16:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 00:27:29 +0000   Fri, 27 Sep 2024 00:16:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 00:27:29 +0000   Fri, 27 Sep 2024 00:16:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 00:27:29 +0000   Fri, 27 Sep 2024 00:16:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.169
	  Hostname:    addons-364775
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 9c20e89c92c64839b60418c495bf40ff
	  System UUID:                9c20e89c-92c6-4839-b604-18c495bf40ff
	  Boot ID:                    de047c3a-8269-46a9-afd9-1cfad2a2ee3d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     hello-world-app-55bf9c44b4-x9hv6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  gcp-auth                    gcp-auth-89d5ffd79-xndcj                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-gd2h2                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-addons-364775                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-364775             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-364775    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-vj2cl                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-364775             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-84c5f94fbc-h74zz          100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         12m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node addons-364775 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node addons-364775 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node addons-364775 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node addons-364775 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node addons-364775 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node addons-364775 status is now: NodeHasSufficientPID
	  Normal  NodeReady                12m                kubelet          Node addons-364775 status is now: NodeReady
	  Normal  RegisteredNode           12m                node-controller  Node addons-364775 event: Registered Node addons-364775 in Controller
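
For reference, the 850m (42%) CPU-request total in the allocated-resources table is the per-pod requests summed against the node's 2-CPU capacity: 250m + 200m + 100m + 100m + 100m + 100m = 850m, and 850m / 2000m ≈ 42%. Likewise 70Mi + 100Mi + 200Mi = 370Mi of the 3912780Ki allocatable memory (≈ 9%), with the single 170Mi memory limit coming from coredns.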
	
	
	==> dmesg <==
	[  +5.471676] kauditd_printk_skb: 137 callbacks suppressed
	[ +11.036796] kauditd_printk_skb: 79 callbacks suppressed
	[Sep27 00:17] kauditd_printk_skb: 2 callbacks suppressed
	[  +9.888391] kauditd_printk_skb: 2 callbacks suppressed
	[  +8.910967] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.507302] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.437195] kauditd_printk_skb: 55 callbacks suppressed
	[  +5.152093] kauditd_printk_skb: 43 callbacks suppressed
	[ +10.173097] kauditd_printk_skb: 6 callbacks suppressed
	[Sep27 00:18] kauditd_printk_skb: 55 callbacks suppressed
	[Sep27 00:19] kauditd_printk_skb: 28 callbacks suppressed
	[Sep27 00:20] kauditd_printk_skb: 28 callbacks suppressed
	[Sep27 00:23] kauditd_printk_skb: 28 callbacks suppressed
	[Sep27 00:26] kauditd_printk_skb: 28 callbacks suppressed
	[ +10.244894] kauditd_printk_skb: 6 callbacks suppressed
	[  +7.025310] kauditd_printk_skb: 23 callbacks suppressed
	[  +8.494292] kauditd_printk_skb: 7 callbacks suppressed
	[ +24.636506] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.016348] kauditd_printk_skb: 38 callbacks suppressed
	[Sep27 00:27] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.266598] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.065698] kauditd_printk_skb: 39 callbacks suppressed
	[  +5.072950] kauditd_printk_skb: 25 callbacks suppressed
	[ +27.283174] kauditd_printk_skb: 4 callbacks suppressed
	[Sep27 00:29] kauditd_printk_skb: 23 callbacks suppressed
	
	
	==> etcd [ee201c0719a52c59263614ccb1b06b1ed92df1c3e374d2bec21766eef5129754] <==
	{"level":"warn","ts":"2024-09-27T00:26:14.632664Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-27T00:26:14.223889Z","time spent":"408.764445ms","remote":"127.0.0.1:41964","response type":"/etcdserverpb.KV/Range","request count":0,"request size":47,"response count":1,"response size":1458,"request content":"key:\"/registry/persistentvolumeclaims/default/hpvc\" "}
	{"level":"warn","ts":"2024-09-27T00:26:14.632888Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"376.794288ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1114"}
	{"level":"info","ts":"2024-09-27T00:26:14.632907Z","caller":"traceutil/trace.go:171","msg":"trace[446996669] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1984; }","duration":"376.812569ms","start":"2024-09-27T00:26:14.256088Z","end":"2024-09-27T00:26:14.632900Z","steps":["trace[446996669] 'range keys from in-memory index tree'  (duration: 376.72393ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T00:26:14.632926Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-27T00:26:14.256037Z","time spent":"376.885356ms","remote":"127.0.0.1:41978","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1138,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"warn","ts":"2024-09-27T00:26:14.633120Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"307.527313ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" ","response":"range_response_count:1 size:183"}
	{"level":"info","ts":"2024-09-27T00:26:14.633138Z","caller":"traceutil/trace.go:171","msg":"trace[1605129029] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:1; response_revision:1984; }","duration":"307.54545ms","start":"2024-09-27T00:26:14.325586Z","end":"2024-09-27T00:26:14.633132Z","steps":["trace[1605129029] 'range keys from in-memory index tree'  (duration: 307.476662ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T00:26:14.633154Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-27T00:26:14.325554Z","time spent":"307.597008ms","remote":"127.0.0.1:42020","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":1,"response size":207,"request content":"key:\"/registry/serviceaccounts/default/default\" "}
	{"level":"warn","ts":"2024-09-27T00:26:14.633233Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"272.753441ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-27T00:26:14.633249Z","caller":"traceutil/trace.go:171","msg":"trace[617859409] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1984; }","duration":"272.780609ms","start":"2024-09-27T00:26:14.360462Z","end":"2024-09-27T00:26:14.633243Z","steps":["trace[617859409] 'range keys from in-memory index tree'  (duration: 272.748633ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T00:26:14.633315Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"236.17705ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiregistration.k8s.io/apiservices/\" range_end:\"/registry/apiregistration.k8s.io/apiservices0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-09-27T00:26:14.633328Z","caller":"traceutil/trace.go:171","msg":"trace[942278298] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices/; range_end:/registry/apiregistration.k8s.io/apiservices0; response_count:0; response_revision:1984; }","duration":"236.191523ms","start":"2024-09-27T00:26:14.397131Z","end":"2024-09-27T00:26:14.633323Z","steps":["trace[942278298] 'count revisions from in-memory index tree'  (duration: 236.13798ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T00:26:20.180521Z","caller":"traceutil/trace.go:171","msg":"trace[1946292725] linearizableReadLoop","detail":"{readStateIndex:2155; appliedIndex:2154; }","duration":"169.940839ms","start":"2024-09-27T00:26:20.010565Z","end":"2024-09-27T00:26:20.180506Z","steps":["trace[1946292725] 'read index received'  (duration: 168.170478ms)","trace[1946292725] 'applied index is now lower than readState.Index'  (duration: 1.769835ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-27T00:26:20.180632Z","caller":"traceutil/trace.go:171","msg":"trace[119175638] transaction","detail":"{read_only:false; response_revision:2010; number_of_response:1; }","duration":"185.041203ms","start":"2024-09-27T00:26:19.995581Z","end":"2024-09-27T00:26:20.180622Z","steps":["trace[119175638] 'process raft request'  (duration: 183.199927ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T00:26:20.180763Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"170.179973ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/\" range_end:\"/registry/serviceaccounts0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-09-27T00:26:20.180783Z","caller":"traceutil/trace.go:171","msg":"trace[929737590] range","detail":"{range_begin:/registry/serviceaccounts/; range_end:/registry/serviceaccounts0; response_count:0; response_revision:2010; }","duration":"170.214606ms","start":"2024-09-27T00:26:20.010561Z","end":"2024-09-27T00:26:20.180775Z","steps":["trace[929737590] 'agreement among raft nodes before linearized reading'  (duration: 170.14061ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T00:26:20.180846Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.335773ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-27T00:26:20.180885Z","caller":"traceutil/trace.go:171","msg":"trace[1760975757] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:2010; }","duration":"102.380651ms","start":"2024-09-27T00:26:20.078497Z","end":"2024-09-27T00:26:20.180878Z","steps":["trace[1760975757] 'agreement among raft nodes before linearized reading'  (duration: 102.322144ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T00:26:20.844201Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1536}
	{"level":"info","ts":"2024-09-27T00:26:20.885577Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1536,"took":"40.935931ms","hash":3628088381,"current-db-size-bytes":6135808,"current-db-size":"6.1 MB","current-db-size-in-use-bytes":3530752,"current-db-size-in-use":"3.5 MB"}
	{"level":"info","ts":"2024-09-27T00:26:20.885633Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3628088381,"revision":1536,"compact-revision":-1}
	{"level":"info","ts":"2024-09-27T00:26:47.157301Z","caller":"traceutil/trace.go:171","msg":"trace[683330143] linearizableReadLoop","detail":"{readStateIndex:2316; appliedIndex:2315; }","duration":"248.104512ms","start":"2024-09-27T00:26:46.909171Z","end":"2024-09-27T00:26:47.157276Z","steps":["trace[683330143] 'read index received'  (duration: 247.914744ms)","trace[683330143] 'applied index is now lower than readState.Index'  (duration: 188.919µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-27T00:26:47.157488Z","caller":"traceutil/trace.go:171","msg":"trace[1122576871] transaction","detail":"{read_only:false; response_revision:2162; number_of_response:1; }","duration":"349.484715ms","start":"2024-09-27T00:26:46.807988Z","end":"2024-09-27T00:26:47.157473Z","steps":["trace[1122576871] 'process raft request'  (duration: 349.152553ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T00:26:47.158481Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-27T00:26:46.807932Z","time spent":"350.369978ms","remote":"127.0.0.1:41978","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:2157 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-09-27T00:26:47.157668Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"248.429269ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-27T00:26:47.158706Z","caller":"traceutil/trace.go:171","msg":"trace[1301308464] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:2162; }","duration":"249.522193ms","start":"2024-09-27T00:26:46.909168Z","end":"2024-09-27T00:26:47.158690Z","steps":["trace[1301308464] 'agreement among raft nodes before linearized reading'  (duration: 248.407046ms)"],"step_count":1}
	
	
	==> gcp-auth [44f5c0760c47e0ae8b4f8bae5ad90bd953ca8d8938486256754d700af225e8fe] <==
	2024/09/27 00:18:01 Ready to write response ...
	2024/09/27 00:18:01 Ready to marshal response ...
	2024/09/27 00:18:01 Ready to write response ...
	2024/09/27 00:26:04 Ready to marshal response ...
	2024/09/27 00:26:04 Ready to write response ...
	2024/09/27 00:26:04 Ready to marshal response ...
	2024/09/27 00:26:04 Ready to write response ...
	2024/09/27 00:26:04 Ready to marshal response ...
	2024/09/27 00:26:04 Ready to write response ...
	2024/09/27 00:26:14 Ready to marshal response ...
	2024/09/27 00:26:14 Ready to write response ...
	2024/09/27 00:26:14 Ready to marshal response ...
	2024/09/27 00:26:14 Ready to write response ...
	2024/09/27 00:26:49 Ready to marshal response ...
	2024/09/27 00:26:49 Ready to write response ...
	2024/09/27 00:26:54 Ready to marshal response ...
	2024/09/27 00:26:54 Ready to write response ...
	2024/09/27 00:27:06 Ready to marshal response ...
	2024/09/27 00:27:06 Ready to write response ...
	2024/09/27 00:27:06 Ready to marshal response ...
	2024/09/27 00:27:06 Ready to write response ...
	2024/09/27 00:27:18 Ready to marshal response ...
	2024/09/27 00:27:18 Ready to write response ...
	2024/09/27 00:29:12 Ready to marshal response ...
	2024/09/27 00:29:12 Ready to write response ...
	
	
	==> kernel <==
	 00:29:23 up 13 min,  0 users,  load average: 0.67, 0.70, 0.50
	Linux addons-364775 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [941f64fde84f05119ee38d1a5464cd871c06b706b54fa1fe284535e8214009c8] <==
	E0927 00:17:45.543687       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.124.183:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.124.183:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.124.183:443: connect: connection refused" logger="UnhandledError"
	E0927 00:17:45.547748       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.124.183:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.124.183:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.124.183:443: connect: connection refused" logger="UnhandledError"
	E0927 00:17:45.559080       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.124.183:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.124.183:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.124.183:443: connect: connection refused" logger="UnhandledError"
	I0927 00:17:45.702853       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0927 00:26:04.624102       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.101.136.26"}
	I0927 00:26:29.141449       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0927 00:26:33.135917       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0927 00:26:34.161474       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0927 00:26:54.854249       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0927 00:26:55.039695       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.233.173"}
	I0927 00:27:06.182818       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0927 00:27:06.185454       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0927 00:27:06.203612       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0927 00:27:06.203649       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0927 00:27:06.226306       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0927 00:27:06.226388       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0927 00:27:06.238166       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0927 00:27:06.238291       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0927 00:27:06.268140       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0927 00:27:06.268284       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0927 00:27:07.236715       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0927 00:27:07.269274       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0927 00:27:07.372522       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0927 00:27:34.394838       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0927 00:29:13.133808       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.107.51.24"}
	
	
	==> kube-controller-manager [02d48ea4cc0d31074e83240e2912b935fa3a7e4030e676e56a97fdf651652bee] <==
	W0927 00:28:11.355205       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:28:11.355273       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:28:27.277609       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:28:27.277688       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:28:30.591804       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:28:30.591847       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:28:39.732746       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:28:39.732871       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:28:47.753065       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:28:47.753121       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:29:12.648555       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:29:12.648772       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0927 00:29:12.962277       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="49.159857ms"
	I0927 00:29:12.984569       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="22.183338ms"
	I0927 00:29:12.984831       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="78.493µs"
	I0927 00:29:12.985144       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="61.136µs"
	W0927 00:29:14.458336       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:29:14.458406       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0927 00:29:15.037269       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0927 00:29:15.041777       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="3.345µs"
	I0927 00:29:15.046742       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	I0927 00:29:16.393727       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="5.611345ms"
	I0927 00:29:16.394118       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="72.266µs"
	W0927 00:29:22.666732       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:29:22.666782       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [fa7e6a02565d07c2042b8e4832d33799151a9b767813a6f56f5ad935f6f92586] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0927 00:16:31.768151       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0927 00:16:31.776690       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.169"]
	E0927 00:16:31.776745       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0927 00:16:31.867724       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0927 00:16:31.867754       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0927 00:16:31.867779       1 server_linux.go:169] "Using iptables Proxier"
	I0927 00:16:31.872020       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0927 00:16:31.872322       1 server.go:483] "Version info" version="v1.31.1"
	I0927 00:16:31.872352       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 00:16:31.876064       1 config.go:328] "Starting node config controller"
	I0927 00:16:31.876094       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0927 00:16:31.876473       1 config.go:199] "Starting service config controller"
	I0927 00:16:31.876483       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0927 00:16:31.876500       1 config.go:105] "Starting endpoint slice config controller"
	I0927 00:16:31.876504       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0927 00:16:31.977065       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0927 00:16:31.977110       1 shared_informer.go:320] Caches are synced for service config
	I0927 00:16:31.977424       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [7d21d052488b358d50d3915ffdf2b08eee589a26c15c59d3f1480ede3811db54] <==
	W0927 00:16:22.386330       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0927 00:16:22.386360       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 00:16:22.386430       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0927 00:16:22.386640       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0927 00:16:22.386867       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0927 00:16:22.388785       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:16:22.389761       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0927 00:16:22.394000       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:16:23.238556       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0927 00:16:23.238927       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0927 00:16:23.244304       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0927 00:16:23.244370       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0927 00:16:23.281738       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0927 00:16:23.282013       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:16:23.416794       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0927 00:16:23.417002       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:16:23.467991       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0927 00:16:23.468110       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:16:23.603228       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0927 00:16:23.603279       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0927 00:16:23.603337       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0927 00:16:23.603364       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 00:16:23.619906       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0927 00:16:23.619937       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0927 00:16:26.272381       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 27 00:29:13 addons-364775 kubelet[1215]: I0927 00:29:13.019847    1215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/86a23b4f-e160-433b-b168-d9458fb8b1de-gcp-creds\") pod \"hello-world-app-55bf9c44b4-x9hv6\" (UID: \"86a23b4f-e160-433b-b168-d9458fb8b1de\") " pod="default/hello-world-app-55bf9c44b4-x9hv6"
	Sep 27 00:29:14 addons-364775 kubelet[1215]: I0927 00:29:14.226802    1215 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xs7w\" (UniqueName: \"kubernetes.io/projected/8bb056cc-4ad8-48da-bad9-aec78168a573-kube-api-access-6xs7w\") pod \"8bb056cc-4ad8-48da-bad9-aec78168a573\" (UID: \"8bb056cc-4ad8-48da-bad9-aec78168a573\") "
	Sep 27 00:29:14 addons-364775 kubelet[1215]: I0927 00:29:14.235352    1215 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bb056cc-4ad8-48da-bad9-aec78168a573-kube-api-access-6xs7w" (OuterVolumeSpecName: "kube-api-access-6xs7w") pod "8bb056cc-4ad8-48da-bad9-aec78168a573" (UID: "8bb056cc-4ad8-48da-bad9-aec78168a573"). InnerVolumeSpecName "kube-api-access-6xs7w". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 27 00:29:14 addons-364775 kubelet[1215]: I0927 00:29:14.328168    1215 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-6xs7w\" (UniqueName: \"kubernetes.io/projected/8bb056cc-4ad8-48da-bad9-aec78168a573-kube-api-access-6xs7w\") on node \"addons-364775\" DevicePath \"\""
	Sep 27 00:29:14 addons-364775 kubelet[1215]: I0927 00:29:14.354650    1215 scope.go:117] "RemoveContainer" containerID="783b25dfa3713591d703f7a84bb3d46e56d7b503605979b66d3e1446574c485d"
	Sep 27 00:29:14 addons-364775 kubelet[1215]: I0927 00:29:14.379097    1215 scope.go:117] "RemoveContainer" containerID="783b25dfa3713591d703f7a84bb3d46e56d7b503605979b66d3e1446574c485d"
	Sep 27 00:29:14 addons-364775 kubelet[1215]: E0927 00:29:14.379866    1215 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"783b25dfa3713591d703f7a84bb3d46e56d7b503605979b66d3e1446574c485d\": container with ID starting with 783b25dfa3713591d703f7a84bb3d46e56d7b503605979b66d3e1446574c485d not found: ID does not exist" containerID="783b25dfa3713591d703f7a84bb3d46e56d7b503605979b66d3e1446574c485d"
	Sep 27 00:29:14 addons-364775 kubelet[1215]: I0927 00:29:14.379916    1215 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"783b25dfa3713591d703f7a84bb3d46e56d7b503605979b66d3e1446574c485d"} err="failed to get container status \"783b25dfa3713591d703f7a84bb3d46e56d7b503605979b66d3e1446574c485d\": rpc error: code = NotFound desc = could not find container \"783b25dfa3713591d703f7a84bb3d46e56d7b503605979b66d3e1446574c485d\": container with ID starting with 783b25dfa3713591d703f7a84bb3d46e56d7b503605979b66d3e1446574c485d not found: ID does not exist"
	Sep 27 00:29:14 addons-364775 kubelet[1215]: I0927 00:29:14.827160    1215 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bb056cc-4ad8-48da-bad9-aec78168a573" path="/var/lib/kubelet/pods/8bb056cc-4ad8-48da-bad9-aec78168a573/volumes"
	Sep 27 00:29:15 addons-364775 kubelet[1215]: E0927 00:29:15.116056    1215 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727396955115654891,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:555086,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:29:15 addons-364775 kubelet[1215]: E0927 00:29:15.116080    1215 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727396955115654891,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:555086,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:29:16 addons-364775 kubelet[1215]: I0927 00:29:16.826062    1215 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="186ee242-70b2-44fa-97c3-6e02dbe6c6db" path="/var/lib/kubelet/pods/186ee242-70b2-44fa-97c3-6e02dbe6c6db/volumes"
	Sep 27 00:29:16 addons-364775 kubelet[1215]: I0927 00:29:16.826482    1215 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f172857-2e68-4af5-8e3b-01d68b6db792" path="/var/lib/kubelet/pods/2f172857-2e68-4af5-8e3b-01d68b6db792/volumes"
	Sep 27 00:29:18 addons-364775 kubelet[1215]: I0927 00:29:18.358258    1215 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fcdt\" (UniqueName: \"kubernetes.io/projected/05ceb4d4-fce0-42a2-955e-20ca7157e61d-kube-api-access-9fcdt\") pod \"05ceb4d4-fce0-42a2-955e-20ca7157e61d\" (UID: \"05ceb4d4-fce0-42a2-955e-20ca7157e61d\") "
	Sep 27 00:29:18 addons-364775 kubelet[1215]: I0927 00:29:18.358325    1215 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/05ceb4d4-fce0-42a2-955e-20ca7157e61d-webhook-cert\") pod \"05ceb4d4-fce0-42a2-955e-20ca7157e61d\" (UID: \"05ceb4d4-fce0-42a2-955e-20ca7157e61d\") "
	Sep 27 00:29:18 addons-364775 kubelet[1215]: I0927 00:29:18.361082    1215 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05ceb4d4-fce0-42a2-955e-20ca7157e61d-kube-api-access-9fcdt" (OuterVolumeSpecName: "kube-api-access-9fcdt") pod "05ceb4d4-fce0-42a2-955e-20ca7157e61d" (UID: "05ceb4d4-fce0-42a2-955e-20ca7157e61d"). InnerVolumeSpecName "kube-api-access-9fcdt". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 27 00:29:18 addons-364775 kubelet[1215]: I0927 00:29:18.363152    1215 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05ceb4d4-fce0-42a2-955e-20ca7157e61d-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "05ceb4d4-fce0-42a2-955e-20ca7157e61d" (UID: "05ceb4d4-fce0-42a2-955e-20ca7157e61d"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 27 00:29:18 addons-364775 kubelet[1215]: I0927 00:29:18.380335    1215 scope.go:117] "RemoveContainer" containerID="4f91cdf813e0544e2bbce54611b768c856090e4e2aa0ee638eb6ab3280293928"
	Sep 27 00:29:18 addons-364775 kubelet[1215]: I0927 00:29:18.401616    1215 scope.go:117] "RemoveContainer" containerID="4f91cdf813e0544e2bbce54611b768c856090e4e2aa0ee638eb6ab3280293928"
	Sep 27 00:29:18 addons-364775 kubelet[1215]: E0927 00:29:18.402207    1215 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f91cdf813e0544e2bbce54611b768c856090e4e2aa0ee638eb6ab3280293928\": container with ID starting with 4f91cdf813e0544e2bbce54611b768c856090e4e2aa0ee638eb6ab3280293928 not found: ID does not exist" containerID="4f91cdf813e0544e2bbce54611b768c856090e4e2aa0ee638eb6ab3280293928"
	Sep 27 00:29:18 addons-364775 kubelet[1215]: I0927 00:29:18.402253    1215 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f91cdf813e0544e2bbce54611b768c856090e4e2aa0ee638eb6ab3280293928"} err="failed to get container status \"4f91cdf813e0544e2bbce54611b768c856090e4e2aa0ee638eb6ab3280293928\": rpc error: code = NotFound desc = could not find container \"4f91cdf813e0544e2bbce54611b768c856090e4e2aa0ee638eb6ab3280293928\": container with ID starting with 4f91cdf813e0544e2bbce54611b768c856090e4e2aa0ee638eb6ab3280293928 not found: ID does not exist"
	Sep 27 00:29:18 addons-364775 kubelet[1215]: I0927 00:29:18.458628    1215 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-9fcdt\" (UniqueName: \"kubernetes.io/projected/05ceb4d4-fce0-42a2-955e-20ca7157e61d-kube-api-access-9fcdt\") on node \"addons-364775\" DevicePath \"\""
	Sep 27 00:29:18 addons-364775 kubelet[1215]: I0927 00:29:18.458671    1215 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/05ceb4d4-fce0-42a2-955e-20ca7157e61d-webhook-cert\") on node \"addons-364775\" DevicePath \"\""
	Sep 27 00:29:18 addons-364775 kubelet[1215]: I0927 00:29:18.826360    1215 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05ceb4d4-fce0-42a2-955e-20ca7157e61d" path="/var/lib/kubelet/pods/05ceb4d4-fce0-42a2-955e-20ca7157e61d/volumes"
	Sep 27 00:29:21 addons-364775 kubelet[1215]: E0927 00:29:21.822583    1215 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="7b7dbf55-2e42-4482-a77e-05baf4945f79"
	
	
	==> storage-provisioner [2392c10311ecba4ad854e936976dfeca45567492e61de8604f1324981400707e] <==
	I0927 00:16:37.916328       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0927 00:16:38.076551       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0927 00:16:38.076614       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0927 00:16:38.159162       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0927 00:16:38.159377       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-364775_daea0619-9535-4149-a165-9a8f7ab27789!
	I0927 00:16:38.160542       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"88a6a7b1-44d1-4b8a-9c87-da3ce2ecdc13", APIVersion:"v1", ResourceVersion:"707", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-364775_daea0619-9535-4149-a165-9a8f7ab27789 became leader
	I0927 00:16:38.760305       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-364775_daea0619-9535-4149-a165-9a8f7ab27789!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-364775 -n addons-364775
helpers_test.go:261: (dbg) Run:  kubectl --context addons-364775 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-364775 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-364775 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-364775/192.168.39.169
	Start Time:       Fri, 27 Sep 2024 00:18:01 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.22
	IPs:
	  IP:  10.244.0.22
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wxclv (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-wxclv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  11m                  default-scheduler  Successfully assigned default/busybox to addons-364775
	  Normal   Pulling    9m53s (x4 over 11m)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     9m53s (x4 over 11m)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     9m53s (x4 over 11m)  kubelet            Error: ErrImagePull
	  Warning  Failed     9m42s (x6 over 11m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    79s (x43 over 11m)   kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (149.54s)
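Note: the post-mortem above flags a single non-running pod, busybox. Its events show every pull of gcr.io/k8s-minikube/busybox:1.28.4-glibc failing with "unable to retrieve auth token: invalid username/password", so the pod sits in ImagePullBackOff for the whole run. A minimal sketch of follow-up commands against this profile (assumed debugging steps, not part of the recorded run; the context and pod names are taken from the logs above):

	kubectl --context addons-364775 get pod busybox -o jsonpath='{.status.containerStatuses[0].state.waiting.reason}'   # expected to print ImagePullBackOff
	kubectl --context addons-364775 get events -n default --field-selector involvedObject.name=busybox                 # lists the failed pull attempts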

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (321.94s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 4.196944ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
I0927 00:26:03.799967   22138 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0927 00:26:03.799996   22138 kapi.go:107] duration metric: took 7.093519ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
helpers_test.go:344: "metrics-server-84c5f94fbc-h74zz" [1ee23e82-6d41-48b5-a303-16f6ebd60172] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003997101s
addons_test.go:413: (dbg) Run:  kubectl --context addons-364775 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-364775 top pods -n kube-system: exit status 1 (74.608432ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-gd2h2, age: 9m39.87405774s

                                                
                                                
** /stderr **
I0927 00:26:09.875968   22138 retry.go:31] will retry after 2.651376097s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-364775 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-364775 top pods -n kube-system: exit status 1 (66.67968ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-gd2h2, age: 9m42.59275113s

                                                
                                                
** /stderr **
I0927 00:26:12.594535   22138 retry.go:31] will retry after 6.179305328s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-364775 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-364775 top pods -n kube-system: exit status 1 (68.568351ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-gd2h2, age: 9m48.841748988s

                                                
                                                
** /stderr **
I0927 00:26:18.843345   22138 retry.go:31] will retry after 8.086025313s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-364775 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-364775 top pods -n kube-system: exit status 1 (68.513615ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-gd2h2, age: 9m56.997333112s

                                                
                                                
** /stderr **
I0927 00:26:26.998774   22138 retry.go:31] will retry after 10.683474952s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-364775 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-364775 top pods -n kube-system: exit status 1 (70.714061ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-gd2h2, age: 10m7.752229074s

                                                
                                                
** /stderr **
I0927 00:26:37.753635   22138 retry.go:31] will retry after 20.14397723s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-364775 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-364775 top pods -n kube-system: exit status 1 (76.686564ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-gd2h2, age: 10m27.973880341s

                                                
                                                
** /stderr **
I0927 00:26:57.975584   22138 retry.go:31] will retry after 30.934701145s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-364775 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-364775 top pods -n kube-system: exit status 1 (62.499422ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-gd2h2, age: 10m58.975680104s

                                                
                                                
** /stderr **
I0927 00:27:28.977414   22138 retry.go:31] will retry after 20.42987285s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-364775 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-364775 top pods -n kube-system: exit status 1 (62.537279ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-gd2h2, age: 11m19.469331314s

                                                
                                                
** /stderr **
I0927 00:27:49.470954   22138 retry.go:31] will retry after 30.918216454s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-364775 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-364775 top pods -n kube-system: exit status 1 (60.696729ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-gd2h2, age: 11m50.45473365s

                                                
                                                
** /stderr **
I0927 00:28:20.456552   22138 retry.go:31] will retry after 1m19.380486551s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-364775 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-364775 top pods -n kube-system: exit status 1 (60.233756ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-gd2h2, age: 13m9.900962219s

                                                
                                                
** /stderr **
I0927 00:29:39.902750   22138 retry.go:31] will retry after 30.909677673s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-364775 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-364775 top pods -n kube-system: exit status 1 (63.550536ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-gd2h2, age: 13m40.880817351s

                                                
                                                
** /stderr **
I0927 00:30:10.882851   22138 retry.go:31] will retry after 1m11.976388108s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-364775 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-364775 top pods -n kube-system: exit status 1 (64.090949ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-gd2h2, age: 14m52.92276992s

                                                
                                                
** /stderr **
addons_test.go:427: failed checking metric server: exit status 1
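Note: every kubectl top pods attempt above returns "Metrics not available" for the coredns pod, so the check retries until its budget is exhausted and the test is marked failed. A minimal sketch of checks that could narrow this down on the same profile (assumed debugging steps, not part of the recorded run; the deployment name is inferred from the metrics-server pod name above, and the APIService name is the upstream metrics-server default):

	kubectl --context addons-364775 get apiservice v1beta1.metrics.k8s.io            # should report Available=True once metrics-server is serving
	kubectl --context addons-364775 -n kube-system logs deployment/metrics-server    # scrape errors against the kubelets usually show up here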
addons_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p addons-364775 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-364775 -n addons-364775
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-364775 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-364775 logs -n 25: (1.390718689s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-528649                                                                     | download-only-528649 | jenkins | v1.34.0 | 27 Sep 24 00:15 UTC | 27 Sep 24 00:15 UTC |
	| delete  | -p download-only-603097                                                                     | download-only-603097 | jenkins | v1.34.0 | 27 Sep 24 00:15 UTC | 27 Sep 24 00:15 UTC |
	| delete  | -p download-only-528649                                                                     | download-only-528649 | jenkins | v1.34.0 | 27 Sep 24 00:15 UTC | 27 Sep 24 00:15 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-381196 | jenkins | v1.34.0 | 27 Sep 24 00:15 UTC |                     |
	|         | binary-mirror-381196                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:32921                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-381196                                                                     | binary-mirror-381196 | jenkins | v1.34.0 | 27 Sep 24 00:15 UTC | 27 Sep 24 00:15 UTC |
	| addons  | enable dashboard -p                                                                         | addons-364775        | jenkins | v1.34.0 | 27 Sep 24 00:15 UTC |                     |
	|         | addons-364775                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-364775        | jenkins | v1.34.0 | 27 Sep 24 00:15 UTC |                     |
	|         | addons-364775                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-364775 --wait=true                                                                | addons-364775        | jenkins | v1.34.0 | 27 Sep 24 00:15 UTC | 27 Sep 24 00:18 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-364775        | jenkins | v1.34.0 | 27 Sep 24 00:26 UTC | 27 Sep 24 00:26 UTC |
	|         | -p addons-364775                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-364775 addons disable                                                                | addons-364775        | jenkins | v1.34.0 | 27 Sep 24 00:26 UTC | 27 Sep 24 00:26 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-364775        | jenkins | v1.34.0 | 27 Sep 24 00:26 UTC | 27 Sep 24 00:26 UTC |
	|         | addons-364775                                                                               |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-364775        | jenkins | v1.34.0 | 27 Sep 24 00:26 UTC | 27 Sep 24 00:26 UTC |
	|         | -p addons-364775                                                                            |                      |         |         |                     |                     |
	| addons  | addons-364775 addons disable                                                                | addons-364775        | jenkins | v1.34.0 | 27 Sep 24 00:26 UTC | 27 Sep 24 00:26 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-364775 addons                                                                        | addons-364775        | jenkins | v1.34.0 | 27 Sep 24 00:26 UTC | 27 Sep 24 00:27 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-364775 ssh curl -s                                                                   | addons-364775        | jenkins | v1.34.0 | 27 Sep 24 00:27 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-364775 addons                                                                        | addons-364775        | jenkins | v1.34.0 | 27 Sep 24 00:27 UTC | 27 Sep 24 00:27 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-364775 ip                                                                            | addons-364775        | jenkins | v1.34.0 | 27 Sep 24 00:27 UTC | 27 Sep 24 00:27 UTC |
	| addons  | addons-364775 addons disable                                                                | addons-364775        | jenkins | v1.34.0 | 27 Sep 24 00:27 UTC | 27 Sep 24 00:27 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-364775 ssh cat                                                                       | addons-364775        | jenkins | v1.34.0 | 27 Sep 24 00:27 UTC | 27 Sep 24 00:27 UTC |
	|         | /opt/local-path-provisioner/pvc-eaf13455-05db-4681-afdd-103662b6f350_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-364775 addons disable                                                                | addons-364775        | jenkins | v1.34.0 | 27 Sep 24 00:27 UTC | 27 Sep 24 00:28 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-364775        | jenkins | v1.34.0 | 27 Sep 24 00:27 UTC | 27 Sep 24 00:27 UTC |
	|         | addons-364775                                                                               |                      |         |         |                     |                     |
	| ip      | addons-364775 ip                                                                            | addons-364775        | jenkins | v1.34.0 | 27 Sep 24 00:29 UTC | 27 Sep 24 00:29 UTC |
	| addons  | addons-364775 addons disable                                                                | addons-364775        | jenkins | v1.34.0 | 27 Sep 24 00:29 UTC | 27 Sep 24 00:29 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-364775 addons disable                                                                | addons-364775        | jenkins | v1.34.0 | 27 Sep 24 00:29 UTC | 27 Sep 24 00:29 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-364775 addons                                                                        | addons-364775        | jenkins | v1.34.0 | 27 Sep 24 00:31 UTC | 27 Sep 24 00:31 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 00:15:44
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 00:15:44.537636   22923 out.go:345] Setting OutFile to fd 1 ...
	I0927 00:15:44.537740   22923 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:15:44.537749   22923 out.go:358] Setting ErrFile to fd 2...
	I0927 00:15:44.537753   22923 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:15:44.537907   22923 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-14935/.minikube/bin
	I0927 00:15:44.538451   22923 out.go:352] Setting JSON to false
	I0927 00:15:44.539227   22923 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3490,"bootTime":1727392655,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 00:15:44.539333   22923 start.go:139] virtualization: kvm guest
	I0927 00:15:44.541421   22923 out.go:177] * [addons-364775] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0927 00:15:44.542612   22923 out.go:177]   - MINIKUBE_LOCATION=19711
	I0927 00:15:44.542608   22923 notify.go:220] Checking for updates...
	I0927 00:15:44.544937   22923 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 00:15:44.546076   22923 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 00:15:44.547130   22923 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-14935/.minikube
	I0927 00:15:44.548170   22923 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0927 00:15:44.549152   22923 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 00:15:44.550537   22923 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 00:15:44.580671   22923 out.go:177] * Using the kvm2 driver based on user configuration
	I0927 00:15:44.581804   22923 start.go:297] selected driver: kvm2
	I0927 00:15:44.581814   22923 start.go:901] validating driver "kvm2" against <nil>
	I0927 00:15:44.581825   22923 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 00:15:44.582527   22923 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 00:15:44.582595   22923 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19711-14935/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0927 00:15:44.596734   22923 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0927 00:15:44.596791   22923 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 00:15:44.597022   22923 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 00:15:44.597049   22923 cni.go:84] Creating CNI manager for ""
	I0927 00:15:44.597085   22923 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 00:15:44.597092   22923 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0927 00:15:44.597139   22923 start.go:340] cluster config:
	{Name:addons-364775 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-364775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 00:15:44.597233   22923 iso.go:125] acquiring lock: {Name:mkc202a14fbe20838e31e7efc444c4f65351f9ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 00:15:44.598769   22923 out.go:177] * Starting "addons-364775" primary control-plane node in "addons-364775" cluster
	I0927 00:15:44.599805   22923 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 00:15:44.599844   22923 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0927 00:15:44.599854   22923 cache.go:56] Caching tarball of preloaded images
	I0927 00:15:44.599915   22923 preload.go:172] Found /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0927 00:15:44.599926   22923 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0927 00:15:44.600208   22923 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/config.json ...
	I0927 00:15:44.600224   22923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/config.json: {Name:mk7d83f0775700fae5c444ee1119498cda71b7ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:15:44.600357   22923 start.go:360] acquireMachinesLock for addons-364775: {Name:mkef866a3f2eb57063758ef06140cca3a2e40b18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 00:15:44.600399   22923 start.go:364] duration metric: took 29.224µs to acquireMachinesLock for "addons-364775"
	I0927 00:15:44.600416   22923 start.go:93] Provisioning new machine with config: &{Name:addons-364775 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-364775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 00:15:44.600461   22923 start.go:125] createHost starting for "" (driver="kvm2")
	I0927 00:15:44.602317   22923 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0927 00:15:44.602440   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:15:44.602479   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:15:44.616122   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33711
	I0927 00:15:44.616559   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:15:44.617071   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:15:44.617091   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:15:44.617371   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:15:44.617525   22923 main.go:141] libmachine: (addons-364775) Calling .GetMachineName
	I0927 00:15:44.617640   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:15:44.617745   22923 start.go:159] libmachine.API.Create for "addons-364775" (driver="kvm2")
	I0927 00:15:44.617772   22923 client.go:168] LocalClient.Create starting
	I0927 00:15:44.617816   22923 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem
	I0927 00:15:44.773115   22923 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem
	I0927 00:15:45.021396   22923 main.go:141] libmachine: Running pre-create checks...
	I0927 00:15:45.021422   22923 main.go:141] libmachine: (addons-364775) Calling .PreCreateCheck
	I0927 00:15:45.021848   22923 main.go:141] libmachine: (addons-364775) Calling .GetConfigRaw
	I0927 00:15:45.022228   22923 main.go:141] libmachine: Creating machine...
	I0927 00:15:45.022241   22923 main.go:141] libmachine: (addons-364775) Calling .Create
	I0927 00:15:45.022354   22923 main.go:141] libmachine: (addons-364775) Creating KVM machine...
	I0927 00:15:45.023487   22923 main.go:141] libmachine: (addons-364775) DBG | found existing default KVM network
	I0927 00:15:45.024131   22923 main.go:141] libmachine: (addons-364775) DBG | I0927 00:15:45.024009   22945 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002111f0}
	I0927 00:15:45.024171   22923 main.go:141] libmachine: (addons-364775) DBG | created network xml: 
	I0927 00:15:45.024195   22923 main.go:141] libmachine: (addons-364775) DBG | <network>
	I0927 00:15:45.024208   22923 main.go:141] libmachine: (addons-364775) DBG |   <name>mk-addons-364775</name>
	I0927 00:15:45.024226   22923 main.go:141] libmachine: (addons-364775) DBG |   <dns enable='no'/>
	I0927 00:15:45.024270   22923 main.go:141] libmachine: (addons-364775) DBG |   
	I0927 00:15:45.024294   22923 main.go:141] libmachine: (addons-364775) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0927 00:15:45.024303   22923 main.go:141] libmachine: (addons-364775) DBG |     <dhcp>
	I0927 00:15:45.024311   22923 main.go:141] libmachine: (addons-364775) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0927 00:15:45.024318   22923 main.go:141] libmachine: (addons-364775) DBG |     </dhcp>
	I0927 00:15:45.024325   22923 main.go:141] libmachine: (addons-364775) DBG |   </ip>
	I0927 00:15:45.024331   22923 main.go:141] libmachine: (addons-364775) DBG |   
	I0927 00:15:45.024337   22923 main.go:141] libmachine: (addons-364775) DBG | </network>
	I0927 00:15:45.024345   22923 main.go:141] libmachine: (addons-364775) DBG | 
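
The network XML printed above is handed to libvirt as-is. A minimal Go sketch of this step, assuming the libvirt.org/go/libvirt bindings (not the kvm2 driver's exact code), would be:

    // Sketch only: define and start a private libvirt network such as
    // mk-addons-364775 from the XML printed in the log above.
    package main

    import (
        "log"

        libvirt "libvirt.org/go/libvirt"
    )

    const networkXML = `<network>
      <name>mk-addons-364775</name>
      <dns enable='no'/>
      <ip address='192.168.39.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.39.2' end='192.168.39.253'/>
        </dhcp>
      </ip>
    </network>`

    func main() {
        // Same system URI the kvm2 driver logs (KVMQemuURI:qemu:///system).
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatalf("connect: %v", err)
        }
        defer conn.Close()

        // Persist the network definition, then start it
        // (equivalent of virsh net-define followed by virsh net-start).
        net, err := conn.NetworkDefineXML(networkXML)
        if err != nil {
            log.Fatalf("define network: %v", err)
        }
        defer net.Free()

        if err := net.Create(); err != nil {
            log.Fatalf("start network: %v", err)
        }
        log.Println("private KVM network mk-addons-364775 192.168.39.0/24 created")
    }
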
	I0927 00:15:45.029333   22923 main.go:141] libmachine: (addons-364775) DBG | trying to create private KVM network mk-addons-364775 192.168.39.0/24...
	I0927 00:15:45.091813   22923 main.go:141] libmachine: (addons-364775) DBG | private KVM network mk-addons-364775 192.168.39.0/24 created
	I0927 00:15:45.091853   22923 main.go:141] libmachine: (addons-364775) Setting up store path in /home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775 ...
	I0927 00:15:45.091879   22923 main.go:141] libmachine: (addons-364775) DBG | I0927 00:15:45.091772   22945 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19711-14935/.minikube
	I0927 00:15:45.091922   22923 main.go:141] libmachine: (addons-364775) Building disk image from file:///home/jenkins/minikube-integration/19711-14935/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0927 00:15:45.091959   22923 main.go:141] libmachine: (addons-364775) Downloading /home/jenkins/minikube-integration/19711-14935/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19711-14935/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0927 00:15:45.348792   22923 main.go:141] libmachine: (addons-364775) DBG | I0927 00:15:45.348685   22945 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa...
	I0927 00:15:45.574205   22923 main.go:141] libmachine: (addons-364775) DBG | I0927 00:15:45.574081   22945 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/addons-364775.rawdisk...
	I0927 00:15:45.574239   22923 main.go:141] libmachine: (addons-364775) DBG | Writing magic tar header
	I0927 00:15:45.574255   22923 main.go:141] libmachine: (addons-364775) DBG | Writing SSH key tar header
	I0927 00:15:45.574273   22923 main.go:141] libmachine: (addons-364775) DBG | I0927 00:15:45.574195   22945 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775 ...
	I0927 00:15:45.574290   22923 main.go:141] libmachine: (addons-364775) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775
	I0927 00:15:45.574318   22923 main.go:141] libmachine: (addons-364775) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935/.minikube/machines
	I0927 00:15:45.574327   22923 main.go:141] libmachine: (addons-364775) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935/.minikube
	I0927 00:15:45.574338   22923 main.go:141] libmachine: (addons-364775) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935
	I0927 00:15:45.574351   22923 main.go:141] libmachine: (addons-364775) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0927 00:15:45.574364   22923 main.go:141] libmachine: (addons-364775) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775 (perms=drwx------)
	I0927 00:15:45.574372   22923 main.go:141] libmachine: (addons-364775) DBG | Checking permissions on dir: /home/jenkins
	I0927 00:15:45.574384   22923 main.go:141] libmachine: (addons-364775) DBG | Checking permissions on dir: /home
	I0927 00:15:45.574390   22923 main.go:141] libmachine: (addons-364775) DBG | Skipping /home - not owner
	I0927 00:15:45.574400   22923 main.go:141] libmachine: (addons-364775) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935/.minikube/machines (perms=drwxr-xr-x)
	I0927 00:15:45.574428   22923 main.go:141] libmachine: (addons-364775) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935/.minikube (perms=drwxr-xr-x)
	I0927 00:15:45.574447   22923 main.go:141] libmachine: (addons-364775) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935 (perms=drwxrwxr-x)
	I0927 00:15:45.574477   22923 main.go:141] libmachine: (addons-364775) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0927 00:15:45.574496   22923 main.go:141] libmachine: (addons-364775) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0927 00:15:45.574506   22923 main.go:141] libmachine: (addons-364775) Creating domain...
	I0927 00:15:45.575497   22923 main.go:141] libmachine: (addons-364775) define libvirt domain using xml: 
	I0927 00:15:45.575515   22923 main.go:141] libmachine: (addons-364775) <domain type='kvm'>
	I0927 00:15:45.575525   22923 main.go:141] libmachine: (addons-364775)   <name>addons-364775</name>
	I0927 00:15:45.575532   22923 main.go:141] libmachine: (addons-364775)   <memory unit='MiB'>4000</memory>
	I0927 00:15:45.575541   22923 main.go:141] libmachine: (addons-364775)   <vcpu>2</vcpu>
	I0927 00:15:45.575545   22923 main.go:141] libmachine: (addons-364775)   <features>
	I0927 00:15:45.575552   22923 main.go:141] libmachine: (addons-364775)     <acpi/>
	I0927 00:15:45.575556   22923 main.go:141] libmachine: (addons-364775)     <apic/>
	I0927 00:15:45.575560   22923 main.go:141] libmachine: (addons-364775)     <pae/>
	I0927 00:15:45.575566   22923 main.go:141] libmachine: (addons-364775)     
	I0927 00:15:45.575571   22923 main.go:141] libmachine: (addons-364775)   </features>
	I0927 00:15:45.575576   22923 main.go:141] libmachine: (addons-364775)   <cpu mode='host-passthrough'>
	I0927 00:15:45.575582   22923 main.go:141] libmachine: (addons-364775)   
	I0927 00:15:45.575591   22923 main.go:141] libmachine: (addons-364775)   </cpu>
	I0927 00:15:45.575601   22923 main.go:141] libmachine: (addons-364775)   <os>
	I0927 00:15:45.575614   22923 main.go:141] libmachine: (addons-364775)     <type>hvm</type>
	I0927 00:15:45.575634   22923 main.go:141] libmachine: (addons-364775)     <boot dev='cdrom'/>
	I0927 00:15:45.575652   22923 main.go:141] libmachine: (addons-364775)     <boot dev='hd'/>
	I0927 00:15:45.575681   22923 main.go:141] libmachine: (addons-364775)     <bootmenu enable='no'/>
	I0927 00:15:45.575702   22923 main.go:141] libmachine: (addons-364775)   </os>
	I0927 00:15:45.575714   22923 main.go:141] libmachine: (addons-364775)   <devices>
	I0927 00:15:45.575723   22923 main.go:141] libmachine: (addons-364775)     <disk type='file' device='cdrom'>
	I0927 00:15:45.575750   22923 main.go:141] libmachine: (addons-364775)       <source file='/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/boot2docker.iso'/>
	I0927 00:15:45.575762   22923 main.go:141] libmachine: (addons-364775)       <target dev='hdc' bus='scsi'/>
	I0927 00:15:45.575772   22923 main.go:141] libmachine: (addons-364775)       <readonly/>
	I0927 00:15:45.575786   22923 main.go:141] libmachine: (addons-364775)     </disk>
	I0927 00:15:45.575799   22923 main.go:141] libmachine: (addons-364775)     <disk type='file' device='disk'>
	I0927 00:15:45.575811   22923 main.go:141] libmachine: (addons-364775)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0927 00:15:45.575825   22923 main.go:141] libmachine: (addons-364775)       <source file='/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/addons-364775.rawdisk'/>
	I0927 00:15:45.575836   22923 main.go:141] libmachine: (addons-364775)       <target dev='hda' bus='virtio'/>
	I0927 00:15:45.575845   22923 main.go:141] libmachine: (addons-364775)     </disk>
	I0927 00:15:45.575855   22923 main.go:141] libmachine: (addons-364775)     <interface type='network'>
	I0927 00:15:45.575866   22923 main.go:141] libmachine: (addons-364775)       <source network='mk-addons-364775'/>
	I0927 00:15:45.575877   22923 main.go:141] libmachine: (addons-364775)       <model type='virtio'/>
	I0927 00:15:45.575888   22923 main.go:141] libmachine: (addons-364775)     </interface>
	I0927 00:15:45.575896   22923 main.go:141] libmachine: (addons-364775)     <interface type='network'>
	I0927 00:15:45.575909   22923 main.go:141] libmachine: (addons-364775)       <source network='default'/>
	I0927 00:15:45.575924   22923 main.go:141] libmachine: (addons-364775)       <model type='virtio'/>
	I0927 00:15:45.575936   22923 main.go:141] libmachine: (addons-364775)     </interface>
	I0927 00:15:45.575946   22923 main.go:141] libmachine: (addons-364775)     <serial type='pty'>
	I0927 00:15:45.575957   22923 main.go:141] libmachine: (addons-364775)       <target port='0'/>
	I0927 00:15:45.575966   22923 main.go:141] libmachine: (addons-364775)     </serial>
	I0927 00:15:45.575977   22923 main.go:141] libmachine: (addons-364775)     <console type='pty'>
	I0927 00:15:45.575996   22923 main.go:141] libmachine: (addons-364775)       <target type='serial' port='0'/>
	I0927 00:15:45.576007   22923 main.go:141] libmachine: (addons-364775)     </console>
	I0927 00:15:45.576016   22923 main.go:141] libmachine: (addons-364775)     <rng model='virtio'>
	I0927 00:15:45.576028   22923 main.go:141] libmachine: (addons-364775)       <backend model='random'>/dev/random</backend>
	I0927 00:15:45.576035   22923 main.go:141] libmachine: (addons-364775)     </rng>
	I0927 00:15:45.576045   22923 main.go:141] libmachine: (addons-364775)     
	I0927 00:15:45.576056   22923 main.go:141] libmachine: (addons-364775)     
	I0927 00:15:45.576064   22923 main.go:141] libmachine: (addons-364775)   </devices>
	I0927 00:15:45.576075   22923 main.go:141] libmachine: (addons-364775) </domain>
	I0927 00:15:45.576084   22923 main.go:141] libmachine: (addons-364775) 
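
The domain XML above is defined and booted the same way. Appended to the network sketch earlier (same assumptions, same file), the domain side is roughly:

    // Continuation of the sketch above: persist the domain definition from the
    // XML printed in the log, then boot it (virsh define + virsh start;
    // "Creating domain..." in the log).
    func createDomain(conn *libvirt.Connect, domainXML string) error {
        dom, err := conn.DomainDefineXML(domainXML)
        if err != nil {
            return err
        }
        defer dom.Free()
        return dom.Create()
    }
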
	I0927 00:15:45.581822   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:be:33:ab in network default
	I0927 00:15:45.582377   22923 main.go:141] libmachine: (addons-364775) Ensuring networks are active...
	I0927 00:15:45.582391   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:15:45.583142   22923 main.go:141] libmachine: (addons-364775) Ensuring network default is active
	I0927 00:15:45.583582   22923 main.go:141] libmachine: (addons-364775) Ensuring network mk-addons-364775 is active
	I0927 00:15:45.584264   22923 main.go:141] libmachine: (addons-364775) Getting domain xml...
	I0927 00:15:45.585015   22923 main.go:141] libmachine: (addons-364775) Creating domain...
	I0927 00:15:46.949358   22923 main.go:141] libmachine: (addons-364775) Waiting to get IP...
	I0927 00:15:46.950076   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:15:46.950580   22923 main.go:141] libmachine: (addons-364775) DBG | unable to find current IP address of domain addons-364775 in network mk-addons-364775
	I0927 00:15:46.950607   22923 main.go:141] libmachine: (addons-364775) DBG | I0927 00:15:46.950544   22945 retry.go:31] will retry after 202.642864ms: waiting for machine to come up
	I0927 00:15:47.155069   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:15:47.155563   22923 main.go:141] libmachine: (addons-364775) DBG | unable to find current IP address of domain addons-364775 in network mk-addons-364775
	I0927 00:15:47.155584   22923 main.go:141] libmachine: (addons-364775) DBG | I0927 00:15:47.155427   22945 retry.go:31] will retry after 370.186358ms: waiting for machine to come up
	I0927 00:15:47.526779   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:15:47.527165   22923 main.go:141] libmachine: (addons-364775) DBG | unable to find current IP address of domain addons-364775 in network mk-addons-364775
	I0927 00:15:47.527193   22923 main.go:141] libmachine: (addons-364775) DBG | I0927 00:15:47.527118   22945 retry.go:31] will retry after 435.004567ms: waiting for machine to come up
	I0927 00:15:47.963669   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:15:47.964030   22923 main.go:141] libmachine: (addons-364775) DBG | unable to find current IP address of domain addons-364775 in network mk-addons-364775
	I0927 00:15:47.964059   22923 main.go:141] libmachine: (addons-364775) DBG | I0927 00:15:47.963977   22945 retry.go:31] will retry after 546.011839ms: waiting for machine to come up
	I0927 00:15:48.511601   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:15:48.512026   22923 main.go:141] libmachine: (addons-364775) DBG | unable to find current IP address of domain addons-364775 in network mk-addons-364775
	I0927 00:15:48.512071   22923 main.go:141] libmachine: (addons-364775) DBG | I0927 00:15:48.511990   22945 retry.go:31] will retry after 469.054965ms: waiting for machine to come up
	I0927 00:15:48.982621   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:15:48.982989   22923 main.go:141] libmachine: (addons-364775) DBG | unable to find current IP address of domain addons-364775 in network mk-addons-364775
	I0927 00:15:48.983018   22923 main.go:141] libmachine: (addons-364775) DBG | I0927 00:15:48.982935   22945 retry.go:31] will retry after 651.072969ms: waiting for machine to come up
	I0927 00:15:49.635407   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:15:49.635833   22923 main.go:141] libmachine: (addons-364775) DBG | unable to find current IP address of domain addons-364775 in network mk-addons-364775
	I0927 00:15:49.635868   22923 main.go:141] libmachine: (addons-364775) DBG | I0927 00:15:49.635780   22945 retry.go:31] will retry after 787.572834ms: waiting for machine to come up
	I0927 00:15:50.425318   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:15:50.425646   22923 main.go:141] libmachine: (addons-364775) DBG | unable to find current IP address of domain addons-364775 in network mk-addons-364775
	I0927 00:15:50.425674   22923 main.go:141] libmachine: (addons-364775) DBG | I0927 00:15:50.425607   22945 retry.go:31] will retry after 1.14927096s: waiting for machine to come up
	I0927 00:15:51.576285   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:15:51.576584   22923 main.go:141] libmachine: (addons-364775) DBG | unable to find current IP address of domain addons-364775 in network mk-addons-364775
	I0927 00:15:51.576610   22923 main.go:141] libmachine: (addons-364775) DBG | I0927 00:15:51.576552   22945 retry.go:31] will retry after 1.476584274s: waiting for machine to come up
	I0927 00:15:53.055137   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:15:53.055575   22923 main.go:141] libmachine: (addons-364775) DBG | unable to find current IP address of domain addons-364775 in network mk-addons-364775
	I0927 00:15:53.055599   22923 main.go:141] libmachine: (addons-364775) DBG | I0927 00:15:53.055538   22945 retry.go:31] will retry after 1.729538445s: waiting for machine to come up
	I0927 00:15:54.786058   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:15:54.786491   22923 main.go:141] libmachine: (addons-364775) DBG | unable to find current IP address of domain addons-364775 in network mk-addons-364775
	I0927 00:15:54.786519   22923 main.go:141] libmachine: (addons-364775) DBG | I0927 00:15:54.786450   22945 retry.go:31] will retry after 2.631307121s: waiting for machine to come up
	I0927 00:15:57.421088   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:15:57.421427   22923 main.go:141] libmachine: (addons-364775) DBG | unable to find current IP address of domain addons-364775 in network mk-addons-364775
	I0927 00:15:57.421454   22923 main.go:141] libmachine: (addons-364775) DBG | I0927 00:15:57.421379   22945 retry.go:31] will retry after 2.652911492s: waiting for machine to come up
	I0927 00:16:00.075506   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:00.075951   22923 main.go:141] libmachine: (addons-364775) DBG | unable to find current IP address of domain addons-364775 in network mk-addons-364775
	I0927 00:16:00.075981   22923 main.go:141] libmachine: (addons-364775) DBG | I0927 00:16:00.075893   22945 retry.go:31] will retry after 3.30922874s: waiting for machine to come up
	I0927 00:16:03.388283   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:03.388607   22923 main.go:141] libmachine: (addons-364775) DBG | unable to find current IP address of domain addons-364775 in network mk-addons-364775
	I0927 00:16:03.388628   22923 main.go:141] libmachine: (addons-364775) DBG | I0927 00:16:03.388576   22945 retry.go:31] will retry after 3.510064019s: waiting for machine to come up
	I0927 00:16:06.901968   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:06.902384   22923 main.go:141] libmachine: (addons-364775) Found IP for machine: 192.168.39.169
	I0927 00:16:06.902410   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has current primary IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:06.902418   22923 main.go:141] libmachine: (addons-364775) Reserving static IP address...
	I0927 00:16:06.902791   22923 main.go:141] libmachine: (addons-364775) DBG | unable to find host DHCP lease matching {name: "addons-364775", mac: "52:54:00:e5:bb:bf", ip: "192.168.39.169"} in network mk-addons-364775
	I0927 00:16:06.970142   22923 main.go:141] libmachine: (addons-364775) Reserved static IP address: 192.168.39.169
	I0927 00:16:06.970170   22923 main.go:141] libmachine: (addons-364775) Waiting for SSH to be available...
	I0927 00:16:06.970179   22923 main.go:141] libmachine: (addons-364775) DBG | Getting to WaitForSSH function...
	I0927 00:16:06.972291   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:06.972697   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:06.972723   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:06.972887   22923 main.go:141] libmachine: (addons-364775) DBG | Using SSH client type: external
	I0927 00:16:06.972906   22923 main.go:141] libmachine: (addons-364775) DBG | Using SSH private key: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa (-rw-------)
	I0927 00:16:06.972933   22923 main.go:141] libmachine: (addons-364775) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.169 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 00:16:06.972951   22923 main.go:141] libmachine: (addons-364775) DBG | About to run SSH command:
	I0927 00:16:06.972962   22923 main.go:141] libmachine: (addons-364775) DBG | exit 0
	I0927 00:16:07.103385   22923 main.go:141] libmachine: (addons-364775) DBG | SSH cmd err, output: <nil>: 
	I0927 00:16:07.103681   22923 main.go:141] libmachine: (addons-364775) KVM machine creation complete!
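
The repeated "will retry after ..." lines above come from a grow-and-retry loop around the DHCP-lease check. An illustrative Go sketch of that pattern (not minikube's actual retry helper; the interval growth and jitter here are assumptions) is:

    // Illustrative backoff loop: keep polling a check until it succeeds or a
    // timeout elapses, sleeping a randomized, growing interval between attempts.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    func waitFor(desc string, check func() error, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        backoff := 200 * time.Millisecond
        for {
            err := check()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out %s: last error: %w", desc, err)
            }
            // Jitter the delay so parallel machine creations do not retry in lockstep.
            sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
            fmt.Printf("will retry after %v: %s\n", sleep, desc)
            time.Sleep(sleep)
            backoff = backoff * 3 / 2 // grow the interval, roughly as in the log
        }
    }

    func main() {
        start := time.Now()
        // Stand-in for "does the domain have a DHCP lease yet?".
        check := func() error {
            if time.Since(start) < 2*time.Second {
                return errors.New("unable to find current IP address of domain")
            }
            return nil
        }
        fmt.Println(waitFor("waiting for machine to come up", check, 30*time.Second))
    }
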
	I0927 00:16:07.103911   22923 main.go:141] libmachine: (addons-364775) Calling .GetConfigRaw
	I0927 00:16:07.104438   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:07.104611   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:07.104753   22923 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0927 00:16:07.104765   22923 main.go:141] libmachine: (addons-364775) Calling .GetState
	I0927 00:16:07.105844   22923 main.go:141] libmachine: Detecting operating system of created instance...
	I0927 00:16:07.105857   22923 main.go:141] libmachine: Waiting for SSH to be available...
	I0927 00:16:07.105862   22923 main.go:141] libmachine: Getting to WaitForSSH function...
	I0927 00:16:07.105867   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:07.107901   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:07.108215   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:07.108246   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:07.108338   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:07.108493   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:07.108634   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:07.108761   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:07.108901   22923 main.go:141] libmachine: Using SSH client type: native
	I0927 00:16:07.109070   22923 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I0927 00:16:07.109080   22923 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0927 00:16:07.218435   22923 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 00:16:07.218469   22923 main.go:141] libmachine: Detecting the provisioner...
	I0927 00:16:07.218478   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:07.221204   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:07.221494   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:07.221517   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:07.221683   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:07.221860   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:07.222017   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:07.222134   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:07.222276   22923 main.go:141] libmachine: Using SSH client type: native
	I0927 00:16:07.222428   22923 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I0927 00:16:07.222439   22923 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0927 00:16:07.332074   22923 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0927 00:16:07.332151   22923 main.go:141] libmachine: found compatible host: buildroot
	I0927 00:16:07.332158   22923 main.go:141] libmachine: Provisioning with buildroot...
	I0927 00:16:07.332165   22923 main.go:141] libmachine: (addons-364775) Calling .GetMachineName
	I0927 00:16:07.332377   22923 buildroot.go:166] provisioning hostname "addons-364775"
	I0927 00:16:07.332406   22923 main.go:141] libmachine: (addons-364775) Calling .GetMachineName
	I0927 00:16:07.332594   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:07.334888   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:07.335193   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:07.335220   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:07.335325   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:07.335483   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:07.335621   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:07.335776   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:07.335956   22923 main.go:141] libmachine: Using SSH client type: native
	I0927 00:16:07.336121   22923 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I0927 00:16:07.336143   22923 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-364775 && echo "addons-364775" | sudo tee /etc/hostname
	I0927 00:16:07.457193   22923 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-364775
	
	I0927 00:16:07.457219   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:07.459657   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:07.459964   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:07.459992   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:07.460170   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:07.460303   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:07.460415   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:07.460529   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:07.460689   22923 main.go:141] libmachine: Using SSH client type: native
	I0927 00:16:07.460874   22923 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I0927 00:16:07.460892   22923 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-364775' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-364775/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-364775' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 00:16:07.576205   22923 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 00:16:07.576252   22923 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19711-14935/.minikube CaCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19711-14935/.minikube}
	I0927 00:16:07.576312   22923 buildroot.go:174] setting up certificates
	I0927 00:16:07.576329   22923 provision.go:84] configureAuth start
	I0927 00:16:07.576347   22923 main.go:141] libmachine: (addons-364775) Calling .GetMachineName
	I0927 00:16:07.576623   22923 main.go:141] libmachine: (addons-364775) Calling .GetIP
	I0927 00:16:07.579617   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:07.579974   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:07.580000   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:07.580131   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:07.582401   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:07.582745   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:07.582770   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:07.582903   22923 provision.go:143] copyHostCerts
	I0927 00:16:07.582979   22923 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem (1078 bytes)
	I0927 00:16:07.583120   22923 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem (1123 bytes)
	I0927 00:16:07.583203   22923 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem (1675 bytes)
	I0927 00:16:07.583299   22923 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem org=jenkins.addons-364775 san=[127.0.0.1 192.168.39.169 addons-364775 localhost minikube]
	I0927 00:16:07.704457   22923 provision.go:177] copyRemoteCerts
	I0927 00:16:07.704522   22923 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 00:16:07.704551   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:07.707097   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:07.707455   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:07.707485   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:07.707628   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:07.707808   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:07.707921   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:07.708037   22923 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa Username:docker}
	I0927 00:16:07.793441   22923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0927 00:16:07.816635   22923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0927 00:16:07.839412   22923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0927 00:16:07.861848   22923 provision.go:87] duration metric: took 285.503545ms to configureAuth
	I0927 00:16:07.861873   22923 buildroot.go:189] setting minikube options for container-runtime
	I0927 00:16:07.862050   22923 config.go:182] Loaded profile config "addons-364775": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 00:16:07.862134   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:07.864754   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:07.865082   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:07.865107   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:07.865293   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:07.865475   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:07.865626   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:07.865739   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:07.865871   22923 main.go:141] libmachine: Using SSH client type: native
	I0927 00:16:07.866074   22923 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I0927 00:16:07.866090   22923 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 00:16:08.093802   22923 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 00:16:08.093837   22923 main.go:141] libmachine: Checking connection to Docker...
	I0927 00:16:08.093848   22923 main.go:141] libmachine: (addons-364775) Calling .GetURL
	I0927 00:16:08.095002   22923 main.go:141] libmachine: (addons-364775) DBG | Using libvirt version 6000000
	I0927 00:16:08.097051   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:08.097385   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:08.097422   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:08.097515   22923 main.go:141] libmachine: Docker is up and running!
	I0927 00:16:08.097527   22923 main.go:141] libmachine: Reticulating splines...
	I0927 00:16:08.097535   22923 client.go:171] duration metric: took 23.479752106s to LocalClient.Create
	I0927 00:16:08.097566   22923 start.go:167] duration metric: took 23.479821174s to libmachine.API.Create "addons-364775"
	I0927 00:16:08.097589   22923 start.go:293] postStartSetup for "addons-364775" (driver="kvm2")
	I0927 00:16:08.097606   22923 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 00:16:08.097627   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:08.097833   22923 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 00:16:08.097854   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:08.099703   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:08.099981   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:08.100006   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:08.100126   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:08.100298   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:08.100435   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:08.100561   22923 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa Username:docker}
	I0927 00:16:08.186017   22923 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 00:16:08.190011   22923 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 00:16:08.190031   22923 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/addons for local assets ...
	I0927 00:16:08.190101   22923 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/files for local assets ...
	I0927 00:16:08.190129   22923 start.go:296] duration metric: took 92.527439ms for postStartSetup
	I0927 00:16:08.190155   22923 main.go:141] libmachine: (addons-364775) Calling .GetConfigRaw
	I0927 00:16:08.190759   22923 main.go:141] libmachine: (addons-364775) Calling .GetIP
	I0927 00:16:08.193058   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:08.193355   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:08.193381   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:08.193557   22923 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/config.json ...
	I0927 00:16:08.193708   22923 start.go:128] duration metric: took 23.593238722s to createHost
	I0927 00:16:08.193728   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:08.195773   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:08.196120   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:08.196166   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:08.196300   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:08.196468   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:08.196582   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:08.196721   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:08.196856   22923 main.go:141] libmachine: Using SSH client type: native
	I0927 00:16:08.197036   22923 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I0927 00:16:08.197048   22923 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 00:16:08.303996   22923 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727396168.279190965
	
	I0927 00:16:08.304020   22923 fix.go:216] guest clock: 1727396168.279190965
	I0927 00:16:08.304027   22923 fix.go:229] Guest: 2024-09-27 00:16:08.279190965 +0000 UTC Remote: 2024-09-27 00:16:08.193719171 +0000 UTC m=+23.688310296 (delta=85.471794ms)
	I0927 00:16:08.304044   22923 fix.go:200] guest clock delta is within tolerance: 85.471794ms
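
The fix.go lines above compare the guest clock against the host-side timestamp and skip a resync when the delta stays inside a tolerance. A small worked example using the two timestamps from the log (the one-second tolerance here is an assumption for illustration) is:

    // Worked example of the guest-clock check: compute |guest - host| and
    // compare it against a tolerance.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        guest := time.Unix(1727396168, 279190965) // guest clock from the log
        host := time.Unix(1727396168, 193719171)  // host-side timestamp from the log
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        tolerance := time.Second // assumed value, not read from the log
        fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, delta <= tolerance)
        // Prints: delta=85.471794ms within tolerance=1s: true
    }
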
	I0927 00:16:08.304048   22923 start.go:83] releasing machines lock for "addons-364775", held for 23.703640756s
	I0927 00:16:08.304069   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:08.304317   22923 main.go:141] libmachine: (addons-364775) Calling .GetIP
	I0927 00:16:08.306988   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:08.307381   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:08.307407   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:08.307561   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:08.307997   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:08.308150   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:08.308237   22923 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 00:16:08.308288   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:08.308351   22923 ssh_runner.go:195] Run: cat /version.json
	I0927 00:16:08.308378   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:08.310668   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:08.310969   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:08.310997   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:08.311014   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:08.311153   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:08.311324   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:08.311389   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:08.311408   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:08.311461   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:08.311590   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:08.311614   22923 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa Username:docker}
	I0927 00:16:08.311722   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:08.311824   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:08.311953   22923 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa Username:docker}
	I0927 00:16:08.388567   22923 ssh_runner.go:195] Run: systemctl --version
	I0927 00:16:08.413004   22923 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 00:16:08.574576   22923 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 00:16:08.581322   22923 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 00:16:08.581391   22923 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 00:16:08.597487   22923 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0927 00:16:08.597509   22923 start.go:495] detecting cgroup driver to use...
	I0927 00:16:08.597566   22923 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 00:16:08.612247   22923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 00:16:08.625077   22923 docker.go:217] disabling cri-docker service (if available) ...
	I0927 00:16:08.625130   22923 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 00:16:08.637473   22923 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 00:16:08.650051   22923 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 00:16:08.758188   22923 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 00:16:08.913236   22923 docker.go:233] disabling docker service ...
	I0927 00:16:08.913320   22923 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 00:16:08.927426   22923 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 00:16:08.940272   22923 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 00:16:09.057168   22923 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 00:16:09.169370   22923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 00:16:09.184123   22923 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 00:16:09.202228   22923 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0927 00:16:09.202290   22923 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:16:09.212677   22923 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 00:16:09.212740   22923 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:16:09.223105   22923 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:16:09.233431   22923 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:16:09.243818   22923 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 00:16:09.254480   22923 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:16:09.265026   22923 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:16:09.282615   22923 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
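
Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings. This fragment is reconstructed from the commands, not dumped from the VM, and the usual CRI-O section headers are assumed:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
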
	I0927 00:16:09.293542   22923 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 00:16:09.303356   22923 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0927 00:16:09.303424   22923 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0927 00:16:09.315981   22923 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 00:16:09.325606   22923 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:16:09.439247   22923 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0927 00:16:09.527367   22923 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 00:16:09.527468   22923 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 00:16:09.532165   22923 start.go:563] Will wait 60s for crictl version
	I0927 00:16:09.532216   22923 ssh_runner.go:195] Run: which crictl
	I0927 00:16:09.535820   22923 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 00:16:09.572264   22923 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0927 00:16:09.572401   22923 ssh_runner.go:195] Run: crio --version
	I0927 00:16:09.599589   22923 ssh_runner.go:195] Run: crio --version
	I0927 00:16:09.627068   22923 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0927 00:16:09.628232   22923 main.go:141] libmachine: (addons-364775) Calling .GetIP
	I0927 00:16:09.630667   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:09.630995   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:09.631023   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:09.631180   22923 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0927 00:16:09.635187   22923 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 00:16:09.647618   22923 kubeadm.go:883] updating cluster {Name:addons-364775 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-364775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.169 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 00:16:09.647751   22923 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 00:16:09.647799   22923 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 00:16:09.680511   22923 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0927 00:16:09.680588   22923 ssh_runner.go:195] Run: which lz4
	I0927 00:16:09.684511   22923 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0927 00:16:09.688651   22923 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0927 00:16:09.688692   22923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0927 00:16:10.959682   22923 crio.go:462] duration metric: took 1.275200656s to copy over tarball
	I0927 00:16:10.959746   22923 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0927 00:16:13.025278   22923 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.065510814s)
	I0927 00:16:13.025311   22923 crio.go:469] duration metric: took 2.065601709s to extract the tarball
	I0927 00:16:13.025322   22923 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0927 00:16:13.061932   22923 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 00:16:13.107912   22923 crio.go:514] all images are preloaded for cri-o runtime.
	I0927 00:16:13.107939   22923 cache_images.go:84] Images are preloaded, skipping loading
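The preload step above copies preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 into the guest and unpacks it under /var so CRI-O already has the control-plane images before kubeadm runs. A rough sketch of the extraction step, assuming local execution rather than the ssh_runner used in the log:

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload unpacks a preloaded-images tarball the way the log does:
// lz4-decompressed tar, extended attributes preserved, extracted under /var
// so the CRI-O image store is populated before kubeadm runs.
func extractPreload(tarball string) error {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("extract %s: %v: %s", tarball, err, out)
	}
	return nil
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		fmt.Println(err)
	}
}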
	I0927 00:16:13.107947   22923 kubeadm.go:934] updating node { 192.168.39.169 8443 v1.31.1 crio true true} ...
	I0927 00:16:13.108033   22923 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-364775 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.169
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-364775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 00:16:13.108095   22923 ssh_runner.go:195] Run: crio config
	I0927 00:16:13.153533   22923 cni.go:84] Creating CNI manager for ""
	I0927 00:16:13.153555   22923 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 00:16:13.153566   22923 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 00:16:13.153586   22923 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.169 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-364775 NodeName:addons-364775 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.169"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.169 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0927 00:16:13.153691   22923 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.169
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-364775"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.169
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.169"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0927 00:16:13.153746   22923 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 00:16:13.163635   22923 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 00:16:13.163702   22923 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0927 00:16:13.172959   22923 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0927 00:16:13.190510   22923 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 00:16:13.207214   22923 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
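The kubeadm config shown above is what gets written to /var/tmp/minikube/kubeadm.yaml.new: several YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---. A small sketch that walks the multi-document file and prints the networking block it finds; the path comes from the log, the gopkg.in/yaml.v3 dependency and everything else is illustrative:

package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

// Decode each YAML document in the generated kubeadm config and report the
// pod/service subnets from the ClusterConfiguration document.
func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]any
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		if n, ok := doc["networking"].(map[string]any); ok {
			fmt.Println("podSubnet:", n["podSubnet"], "serviceSubnet:", n["serviceSubnet"])
		}
	}
}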
	I0927 00:16:13.224712   22923 ssh_runner.go:195] Run: grep 192.168.39.169	control-plane.minikube.internal$ /etc/hosts
	I0927 00:16:13.228436   22923 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.169	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 00:16:13.241465   22923 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:16:13.367179   22923 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 00:16:13.383473   22923 certs.go:68] Setting up /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775 for IP: 192.168.39.169
	I0927 00:16:13.383499   22923 certs.go:194] generating shared ca certs ...
	I0927 00:16:13.383515   22923 certs.go:226] acquiring lock for ca certs: {Name:mkdfc5b7e93f77f5ae72cc653545624244421aa5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:16:13.383652   22923 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key
	I0927 00:16:13.575678   22923 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt ...
	I0927 00:16:13.575704   22923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt: {Name:mk3ad08ac2703aff467792f34abbf756e11c2872 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:16:13.575901   22923 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key ...
	I0927 00:16:13.575916   22923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key: {Name:mkab43d698e5658555844624b3079e901a8ded86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:16:13.576010   22923 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key
	I0927 00:16:13.751373   22923 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.crt ...
	I0927 00:16:13.751404   22923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.crt: {Name:mk8e225d38c1311b0e8a7348aa1fbee6e6fcbd70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:16:13.751579   22923 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key ...
	I0927 00:16:13.751594   22923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key: {Name:mk81ac2481482dece22299e0ff67c97675fb9f81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:16:13.751685   22923 certs.go:256] generating profile certs ...
	I0927 00:16:13.751745   22923 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/client.key
	I0927 00:16:13.751759   22923 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/client.crt with IP's: []
	I0927 00:16:13.996696   22923 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/client.crt ...
	I0927 00:16:13.996728   22923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/client.crt: {Name:mk4647826e81f09b562e4b6468be9da247fcab9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:16:13.996908   22923 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/client.key ...
	I0927 00:16:13.996922   22923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/client.key: {Name:mkdba807b5f103e151ba37e1747e2a749b1980c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:16:13.997015   22923 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/apiserver.key.9c90c6ee
	I0927 00:16:13.997035   22923 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/apiserver.crt.9c90c6ee with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.169]
	I0927 00:16:14.144098   22923 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/apiserver.crt.9c90c6ee ...
	I0927 00:16:14.144127   22923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/apiserver.crt.9c90c6ee: {Name:mkf743df3d4ae64c9bb8f8a6ebe4e814cf609961 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:16:14.144305   22923 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/apiserver.key.9c90c6ee ...
	I0927 00:16:14.144321   22923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/apiserver.key.9c90c6ee: {Name:mk43e7a262458556d97385e524b4828b4b015bf7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:16:14.144397   22923 certs.go:381] copying /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/apiserver.crt.9c90c6ee -> /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/apiserver.crt
	I0927 00:16:14.144467   22923 certs.go:385] copying /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/apiserver.key.9c90c6ee -> /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/apiserver.key
	I0927 00:16:14.144516   22923 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/proxy-client.key
	I0927 00:16:14.144533   22923 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/proxy-client.crt with IP's: []
	I0927 00:16:14.217209   22923 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/proxy-client.crt ...
	I0927 00:16:14.217236   22923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/proxy-client.crt: {Name:mk44b3f8e9e129ec5865925167df941ba0f63291 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:16:14.217379   22923 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/proxy-client.key ...
	I0927 00:16:14.217389   22923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/proxy-client.key: {Name:mkc2dd610a10002245981e0f1a9de7854a330937 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:16:14.217536   22923 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem (1671 bytes)
	I0927 00:16:14.217567   22923 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem (1078 bytes)
	I0927 00:16:14.217589   22923 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem (1123 bytes)
	I0927 00:16:14.217611   22923 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem (1675 bytes)
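The certs.go steps above generate a shared CA ("minikubeCA"), a proxy-client CA, and per-profile certificates, including an apiserver certificate whose SANs are [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.169]. A minimal, self-contained sketch of the same idea with crypto/x509; this is not minikube's actual code, and the lifetimes and common names are illustrative:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Self-signed CA, roughly the shape of the "minikubeCA" cert generated above.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}

	// API-server style leaf cert signed by that CA, with the IP SANs the log lists.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.169"),
		},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: caDER})
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}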
	I0927 00:16:14.218138   22923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 00:16:14.245205   22923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0927 00:16:14.273590   22923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 00:16:14.299930   22923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0927 00:16:14.322526   22923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0927 00:16:14.345010   22923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0927 00:16:14.368388   22923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 00:16:14.391414   22923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0927 00:16:14.413864   22923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 00:16:14.435858   22923 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 00:16:14.451548   22923 ssh_runner.go:195] Run: openssl version
	I0927 00:16:14.457242   22923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 00:16:14.467943   22923 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:16:14.472191   22923 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:16 /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:16:14.472238   22923 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:16:14.477640   22923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 00:16:14.488010   22923 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 00:16:14.491811   22923 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0927 00:16:14.491855   22923 kubeadm.go:392] StartCluster: {Name:addons-364775 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-364775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.169 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 00:16:14.491924   22923 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0927 00:16:14.491960   22923 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 00:16:14.524680   22923 cri.go:89] found id: ""
	I0927 00:16:14.524743   22923 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0927 00:16:14.534145   22923 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 00:16:14.545428   22923 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 00:16:14.556318   22923 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 00:16:14.556338   22923 kubeadm.go:157] found existing configuration files:
	
	I0927 00:16:14.556375   22923 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 00:16:14.566224   22923 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 00:16:14.566269   22923 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 00:16:14.576303   22923 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 00:16:14.585129   22923 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 00:16:14.585171   22923 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 00:16:14.594747   22923 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 00:16:14.603457   22923 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 00:16:14.603496   22923 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 00:16:14.612663   22923 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 00:16:14.621624   22923 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 00:16:14.621668   22923 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
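The config-check block above greps each leftover kubeconfig for https://control-plane.minikube.internal:8443 and removes any file that does not reference it, so that kubeadm regenerates them on init. Roughly, as a sketch (paths and endpoint taken from the log; error handling simplified):

package main

import (
	"fmt"
	"os/exec"
)

// Keep a leftover kubeconfig only if it already points at the expected
// control-plane endpoint; otherwise remove it so "kubeadm init" writes a
// fresh one. Mirrors the grep/rm pairs in the log above.
func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%s missing or stale, removing\n", f)
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}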
	I0927 00:16:14.631182   22923 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0927 00:16:14.689680   22923 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0927 00:16:14.689907   22923 kubeadm.go:310] [preflight] Running pre-flight checks
	I0927 00:16:14.787642   22923 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0927 00:16:14.787844   22923 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0927 00:16:14.787981   22923 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0927 00:16:14.796210   22923 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0927 00:16:14.933571   22923 out.go:235]   - Generating certificates and keys ...
	I0927 00:16:14.933713   22923 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0927 00:16:14.933803   22923 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0927 00:16:14.933906   22923 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0927 00:16:15.129675   22923 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0927 00:16:15.193399   22923 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0927 00:16:15.313134   22923 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0927 00:16:15.654187   22923 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0927 00:16:15.654296   22923 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-364775 localhost] and IPs [192.168.39.169 127.0.0.1 ::1]
	I0927 00:16:15.765696   22923 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0927 00:16:15.765874   22923 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-364775 localhost] and IPs [192.168.39.169 127.0.0.1 ::1]
	I0927 00:16:16.013868   22923 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0927 00:16:16.165681   22923 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0927 00:16:16.447703   22923 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0927 00:16:16.447794   22923 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0927 00:16:16.592680   22923 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0927 00:16:16.720016   22923 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0927 00:16:16.929585   22923 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0927 00:16:17.262835   22923 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0927 00:16:17.402806   22923 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0927 00:16:17.403246   22923 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0927 00:16:17.407265   22923 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0927 00:16:17.409098   22923 out.go:235]   - Booting up control plane ...
	I0927 00:16:17.409215   22923 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0927 00:16:17.409290   22923 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0927 00:16:17.410016   22923 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0927 00:16:17.425105   22923 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0927 00:16:17.433605   22923 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0927 00:16:17.433674   22923 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0927 00:16:17.565381   22923 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0927 00:16:17.565569   22923 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0927 00:16:19.065179   22923 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501169114s
	I0927 00:16:19.065301   22923 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0927 00:16:24.064418   22923 kubeadm.go:310] [api-check] The API server is healthy after 5.001577374s
	I0927 00:16:24.076690   22923 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0927 00:16:24.099966   22923 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0927 00:16:24.127484   22923 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0927 00:16:24.127678   22923 kubeadm.go:310] [mark-control-plane] Marking the node addons-364775 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0927 00:16:24.140308   22923 kubeadm.go:310] [bootstrap-token] Using token: pa4b34.sdki52w2nqhs0c2a
	I0927 00:16:24.141673   22923 out.go:235]   - Configuring RBAC rules ...
	I0927 00:16:24.141825   22923 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0927 00:16:24.147166   22923 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0927 00:16:24.155898   22923 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0927 00:16:24.161743   22923 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0927 00:16:24.165824   22923 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0927 00:16:24.168837   22923 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0927 00:16:24.472788   22923 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0927 00:16:24.898245   22923 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0927 00:16:25.470513   22923 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0927 00:16:25.471447   22923 kubeadm.go:310] 
	I0927 00:16:25.471556   22923 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0927 00:16:25.471575   22923 kubeadm.go:310] 
	I0927 00:16:25.471666   22923 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0927 00:16:25.471676   22923 kubeadm.go:310] 
	I0927 00:16:25.471699   22923 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0927 00:16:25.471877   22923 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0927 00:16:25.471929   22923 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0927 00:16:25.471935   22923 kubeadm.go:310] 
	I0927 00:16:25.471976   22923 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0927 00:16:25.471982   22923 kubeadm.go:310] 
	I0927 00:16:25.472038   22923 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0927 00:16:25.472051   22923 kubeadm.go:310] 
	I0927 00:16:25.472141   22923 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0927 00:16:25.472326   22923 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0927 00:16:25.472450   22923 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0927 00:16:25.472464   22923 kubeadm.go:310] 
	I0927 00:16:25.472573   22923 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0927 00:16:25.472648   22923 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0927 00:16:25.472666   22923 kubeadm.go:310] 
	I0927 00:16:25.472805   22923 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token pa4b34.sdki52w2nqhs0c2a \
	I0927 00:16:25.472942   22923 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e8f2b64245b2533f2f6907f851e4fd61c8df67374e05cb5be25be73a61920f8e \
	I0927 00:16:25.472971   22923 kubeadm.go:310] 	--control-plane 
	I0927 00:16:25.472980   22923 kubeadm.go:310] 
	I0927 00:16:25.473098   22923 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0927 00:16:25.473107   22923 kubeadm.go:310] 
	I0927 00:16:25.473226   22923 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token pa4b34.sdki52w2nqhs0c2a \
	I0927 00:16:25.473365   22923 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e8f2b64245b2533f2f6907f851e4fd61c8df67374e05cb5be25be73a61920f8e 
	I0927 00:16:25.474005   22923 kubeadm.go:310] W0927 00:16:14.668581     820 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 00:16:25.474358   22923 kubeadm.go:310] W0927 00:16:14.670545     820 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 00:16:25.474505   22923 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
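In the join commands kubeadm prints above, --discovery-token-ca-cert-hash carries the SHA-256 of the cluster CA's Subject Public Key Info. A short sketch that recomputes that value from the CA certificate (the path matches the certs directory used earlier in this log):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Read the cluster CA and hash its Subject Public Key Info; this is the
	// value encoded in the --discovery-token-ca-cert-hash flag above.
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}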
	I0927 00:16:25.474538   22923 cni.go:84] Creating CNI manager for ""
	I0927 00:16:25.474550   22923 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 00:16:25.476900   22923 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0927 00:16:25.477915   22923 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0927 00:16:25.488407   22923 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
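The bridge CNI step writes a conflist to /etc/cni/net.d/1-k8s.conflist; the log only records its size (496 bytes), not its contents. A sketch of what a bridge + portmap conflist of that general shape can look like, where every field value is an assumption apart from the 10.244.0.0/16 pod CIDR taken from the kubeadm config above:

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// Print a minimal bridge+portmap conflist of the kind dropped into
// /etc/cni/net.d/1-k8s.conflist. Values are illustrative assumptions, not
// the exact file the log copied over.
func main() {
	conf := map[string]any{
		"cniVersion": "0.4.0",
		"name":       "bridge",
		"plugins": []map[string]any{
			{
				"type":        "bridge",
				"bridge":      "bridge",
				"isGateway":   true,
				"ipMasq":      true,
				"hairpinMode": true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.244.0.0/16",
				},
			},
			{
				"type":         "portmap",
				"capabilities": map[string]bool{"portMappings": true},
			},
		},
	}
	out, err := json.MarshalIndent(conf, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(out))
}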
	I0927 00:16:25.508648   22923 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0927 00:16:25.508704   22923 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:16:25.508750   22923 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-364775 minikube.k8s.io/updated_at=2024_09_27T00_16_25_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625 minikube.k8s.io/name=addons-364775 minikube.k8s.io/primary=true
	I0927 00:16:25.526229   22923 ops.go:34] apiserver oom_adj: -16
	I0927 00:16:25.629503   22923 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:16:26.130228   22923 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:16:26.629915   22923 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:16:27.130024   22923 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:16:27.630537   22923 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:16:28.130314   22923 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:16:28.630463   22923 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:16:29.130429   22923 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:16:29.630477   22923 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:16:30.129687   22923 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:16:30.257341   22923 kubeadm.go:1113] duration metric: took 4.748689071s to wait for elevateKubeSystemPrivileges
	I0927 00:16:30.257376   22923 kubeadm.go:394] duration metric: took 15.765523535s to StartCluster
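The repeated "kubectl get sa default" runs above are minikube waiting, roughly every 500ms, for the default service account to exist before it grants kube-system privileges. A sketch of that polling pattern, using plain os/exec locally instead of the ssh_runner in the log:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls "kubectl get sa default" about every 500ms until it
// succeeds or the context expires, matching the cadence of the log entries
// above. The kubectl path and kubeconfig flag are taken from the log.
func waitForDefaultSA(ctx context.Context) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		cmd := exec.CommandContext(ctx, "sudo",
			"/var/lib/minikube/binaries/v1.31.1/kubectl", "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if cmd.Run() == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	if err := waitForDefaultSA(ctx); err != nil {
		fmt.Println("default service account never appeared:", err)
		return
	}
	fmt.Println("default service account is ready")
}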
	I0927 00:16:30.257393   22923 settings.go:142] acquiring lock: {Name:mk5dca3ab86dd3a71947d9d84c3d32131258c6f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:16:30.257497   22923 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 00:16:30.257927   22923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/kubeconfig: {Name:mke01ed683bdb96463571316956510763878395f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:16:30.258123   22923 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0927 00:16:30.258153   22923 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.169 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 00:16:30.258207   22923 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0927 00:16:30.258332   22923 addons.go:69] Setting yakd=true in profile "addons-364775"
	I0927 00:16:30.258343   22923 addons.go:69] Setting metrics-server=true in profile "addons-364775"
	I0927 00:16:30.258356   22923 addons.go:234] Setting addon yakd=true in "addons-364775"
	I0927 00:16:30.258357   22923 addons.go:69] Setting storage-provisioner=true in profile "addons-364775"
	I0927 00:16:30.258336   22923 addons.go:69] Setting cloud-spanner=true in profile "addons-364775"
	I0927 00:16:30.258373   22923 addons.go:234] Setting addon storage-provisioner=true in "addons-364775"
	I0927 00:16:30.258378   22923 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-364775"
	I0927 00:16:30.258389   22923 host.go:66] Checking if "addons-364775" exists ...
	I0927 00:16:30.258398   22923 addons.go:69] Setting ingress=true in profile "addons-364775"
	I0927 00:16:30.258398   22923 addons.go:69] Setting default-storageclass=true in profile "addons-364775"
	I0927 00:16:30.258418   22923 addons.go:69] Setting registry=true in profile "addons-364775"
	I0927 00:16:30.258421   22923 addons.go:69] Setting ingress-dns=true in profile "addons-364775"
	I0927 00:16:30.258424   22923 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-364775"
	I0927 00:16:30.258430   22923 addons.go:234] Setting addon registry=true in "addons-364775"
	I0927 00:16:30.258431   22923 addons.go:234] Setting addon ingress-dns=true in "addons-364775"
	I0927 00:16:30.258439   22923 addons.go:69] Setting volcano=true in profile "addons-364775"
	I0927 00:16:30.258444   22923 addons.go:69] Setting inspektor-gadget=true in profile "addons-364775"
	I0927 00:16:30.258449   22923 host.go:66] Checking if "addons-364775" exists ...
	I0927 00:16:30.258453   22923 host.go:66] Checking if "addons-364775" exists ...
	I0927 00:16:30.258461   22923 addons.go:234] Setting addon inspektor-gadget=true in "addons-364775"
	I0927 00:16:30.258460   22923 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-364775"
	I0927 00:16:30.258465   22923 host.go:66] Checking if "addons-364775" exists ...
	I0927 00:16:30.258475   22923 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-364775"
	I0927 00:16:30.258499   22923 host.go:66] Checking if "addons-364775" exists ...
	I0927 00:16:30.258390   22923 addons.go:234] Setting addon cloud-spanner=true in "addons-364775"
	I0927 00:16:30.258875   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.258880   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.258887   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.258897   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.258901   22923 addons.go:69] Setting volumesnapshots=true in profile "addons-364775"
	I0927 00:16:30.258904   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.258890   22923 host.go:66] Checking if "addons-364775" exists ...
	I0927 00:16:30.258911   22923 addons.go:234] Setting addon volumesnapshots=true in "addons-364775"
	I0927 00:16:30.258428   22923 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-364775"
	I0927 00:16:30.258921   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.258928   22923 host.go:66] Checking if "addons-364775" exists ...
	I0927 00:16:30.258400   22923 host.go:66] Checking if "addons-364775" exists ...
	I0927 00:16:30.259165   22923 config.go:182] Loaded profile config "addons-364775": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 00:16:30.259243   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.259268   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.258361   22923 addons.go:234] Setting addon metrics-server=true in "addons-364775"
	I0927 00:16:30.259320   22923 host.go:66] Checking if "addons-364775" exists ...
	I0927 00:16:30.258902   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.259250   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.259345   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.259362   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.258453   22923 addons.go:234] Setting addon volcano=true in "addons-364775"
	I0927 00:16:30.258410   22923 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-364775"
	I0927 00:16:30.258413   22923 addons.go:234] Setting addon ingress=true in "addons-364775"
	I0927 00:16:30.259414   22923 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-364775"
	I0927 00:16:30.259433   22923 host.go:66] Checking if "addons-364775" exists ...
	I0927 00:16:30.258910   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.259599   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.259320   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.259681   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.259711   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.259756   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.259785   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.258365   22923 addons.go:69] Setting gcp-auth=true in profile "addons-364775"
	I0927 00:16:30.259994   22923 mustload.go:65] Loading cluster: addons-364775
	I0927 00:16:30.258890   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.260064   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.259324   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.260148   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.260171   22923 config.go:182] Loaded profile config "addons-364775": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 00:16:30.260185   22923 host.go:66] Checking if "addons-364775" exists ...
	I0927 00:16:30.259686   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.260638   22923 host.go:66] Checking if "addons-364775" exists ...
	I0927 00:16:30.261042   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.261076   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.261496   22923 out.go:177] * Verifying Kubernetes components...
	I0927 00:16:30.263120   22923 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:16:30.279959   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33325
	I0927 00:16:30.280209   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33607
	I0927 00:16:30.280226   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37361
	I0927 00:16:30.280238   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42457
	I0927 00:16:30.280556   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.280907   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.281016   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.281058   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.281074   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.281083   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.281341   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.281358   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.281459   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.281511   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.281523   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.281582   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.281595   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.281715   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.281946   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.281986   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.282089   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.282113   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.282682   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.282737   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.295448   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35985
	I0927 00:16:30.295465   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34067
	I0927 00:16:30.295466   22923 main.go:141] libmachine: (addons-364775) Calling .GetState
	I0927 00:16:30.295577   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36439
	I0927 00:16:30.295763   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.295797   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.295961   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.295991   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.296110   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.296144   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.297516   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.297610   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.297662   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.298165   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.298183   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.298204   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.298220   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.298319   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.298333   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.298708   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.298770   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.299230   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.299374   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.299396   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.299799   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.299837   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.320873   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40339
	I0927 00:16:30.321467   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.322017   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.322035   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.322375   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.322557   22923 main.go:141] libmachine: (addons-364775) Calling .GetState
	I0927 00:16:30.324241   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:30.325648   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33733
	I0927 00:16:30.326669   22923 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0927 00:16:30.328052   22923 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0927 00:16:30.328068   22923 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0927 00:16:30.328087   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:30.330977   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.331478   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:30.331497   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.331615   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:30.331743   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34807
	I0927 00:16:30.331928   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:30.331988   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.332189   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:30.332466   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.332484   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.332544   22923 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa Username:docker}
	I0927 00:16:30.332815   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.333331   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.333369   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.333610   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33551
	I0927 00:16:30.334115   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.334676   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.334692   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.334922   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42815
	I0927 00:16:30.335061   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.335224   22923 main.go:141] libmachine: (addons-364775) Calling .GetState
	I0927 00:16:30.335329   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.335871   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.335915   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.337783   22923 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-364775"
	I0927 00:16:30.337824   22923 host.go:66] Checking if "addons-364775" exists ...
	I0927 00:16:30.338180   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.338211   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.341852   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38853
	I0927 00:16:30.341872   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.341955   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.341960   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39421
	I0927 00:16:30.341962   22923 host.go:66] Checking if "addons-364775" exists ...
	I0927 00:16:30.341971   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.342027   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45603
	I0927 00:16:30.342336   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.342379   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.342477   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.343236   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.343339   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34911
	I0927 00:16:30.343344   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.343360   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.343418   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.343490   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.343875   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.343889   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.344011   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.344032   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.344084   22923 main.go:141] libmachine: (addons-364775) Calling .GetState
	I0927 00:16:30.344875   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35565
	I0927 00:16:30.344918   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.344961   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.344877   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.345471   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.345494   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.345704   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.345804   22923 main.go:141] libmachine: (addons-364775) Calling .GetState
	I0927 00:16:30.345923   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.345934   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.346060   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.346070   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.346180   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.346193   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.346254   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.346296   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.346481   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.346533   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:30.346738   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.346786   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.346944   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.347106   22923 main.go:141] libmachine: (addons-364775) Calling .GetState
	I0927 00:16:30.347423   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.347470   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.347990   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.348013   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.348711   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:30.348979   22923 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	I0927 00:16:30.350262   22923 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0927 00:16:30.350711   22923 addons.go:234] Setting addon default-storageclass=true in "addons-364775"
	I0927 00:16:30.350752   22923 host.go:66] Checking if "addons-364775" exists ...
	I0927 00:16:30.351080   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.351116   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.351946   22923 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0927 00:16:30.351964   22923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0927 00:16:30.351981   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:30.352035   22923 out.go:177]   - Using image docker.io/registry:2.8.3
	I0927 00:16:30.353597   22923 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0927 00:16:30.353615   22923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0927 00:16:30.353635   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:30.354349   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44045
	I0927 00:16:30.354872   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.355446   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.355462   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.355832   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.356428   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.356465   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.356580   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.357770   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.357938   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:30.357955   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.358350   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:30.358661   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42581
	I0927 00:16:30.358801   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:30.358854   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:30.358868   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.359073   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.359151   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:30.359281   22923 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa Username:docker}
	I0927 00:16:30.359652   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.359671   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.359714   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:30.360052   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.360131   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:30.360290   22923 main.go:141] libmachine: (addons-364775) Calling .GetState
	I0927 00:16:30.360338   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:30.360850   22923 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa Username:docker}
	I0927 00:16:30.361885   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:30.364200   22923 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 00:16:30.365464   22923 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 00:16:30.365488   22923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0927 00:16:30.365507   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:30.366308   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33067
	I0927 00:16:30.366791   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.367379   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.367401   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.367750   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.367938   22923 main.go:141] libmachine: (addons-364775) Calling .GetState
	I0927 00:16:30.369060   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.369690   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:30.369710   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.370066   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:30.370129   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:30.370398   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:30.370694   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:30.370823   22923 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa Username:docker}
	I0927 00:16:30.371120   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34527
	I0927 00:16:30.371610   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.372218   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.372236   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.372530   22923 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0927 00:16:30.373808   22923 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0927 00:16:30.373825   22923 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0927 00:16:30.373842   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:30.373856   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43903
	I0927 00:16:30.374333   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.374903   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.374922   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.375279   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.375482   22923 main.go:141] libmachine: (addons-364775) Calling .GetState
	I0927 00:16:30.376723   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.377131   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:30.377149   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.377335   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:30.377382   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:30.377887   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:30.378054   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39987
	I0927 00:16:30.378172   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:30.378338   22923 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa Username:docker}
	I0927 00:16:30.378377   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.378649   22923 main.go:141] libmachine: (addons-364775) Calling .GetState
	I0927 00:16:30.378704   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.379547   22923 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0927 00:16:30.379740   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.379756   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.380077   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.380239   22923 main.go:141] libmachine: (addons-364775) Calling .GetState
	I0927 00:16:30.380301   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:30.380762   22923 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0927 00:16:30.380786   22923 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0927 00:16:30.380803   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:30.382146   22923 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0927 00:16:30.383631   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46045
	I0927 00:16:30.383639   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:30.383821   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:30.383832   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:30.383878   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.383963   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.384125   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:30.384142   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.384162   22923 main.go:141] libmachine: (addons-364775) DBG | Closing plugin on server side
	I0927 00:16:30.384182   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:30.384189   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:30.384196   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:30.384202   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:30.384331   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:30.384494   22923 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0927 00:16:30.384504   22923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0927 00:16:30.384518   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:30.384569   22923 main.go:141] libmachine: (addons-364775) DBG | Closing plugin on server side
	I0927 00:16:30.384585   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:30.384591   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	W0927 00:16:30.384653   22923 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0927 00:16:30.384914   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:30.385028   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:30.385170   22923 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa Username:docker}
	I0927 00:16:30.385526   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.385545   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.386176   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.386427   22923 main.go:141] libmachine: (addons-364775) Calling .GetState
	I0927 00:16:30.388475   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37683
	I0927 00:16:30.388774   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.389050   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.389164   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.389180   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.389505   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.389567   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:30.389583   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.390108   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.390148   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.390345   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:30.390534   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:30.390650   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:30.390712   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:30.390753   22923 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa Username:docker}
	I0927 00:16:30.392001   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38025
	I0927 00:16:30.392318   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.392824   22923 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0927 00:16:30.392877   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.392887   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.393218   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.393656   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:30.393690   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:30.395193   22923 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0927 00:16:30.395209   22923 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0927 00:16:30.395225   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:30.396435   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44437
	I0927 00:16:30.396951   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.397552   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.397567   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.397947   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.398120   22923 main.go:141] libmachine: (addons-364775) Calling .GetState
	I0927 00:16:30.398753   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.399064   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:30.399083   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.399238   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:30.399500   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:30.399555   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34999
	I0927 00:16:30.399676   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:30.399899   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:30.400083   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.400154   22923 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa Username:docker}
	I0927 00:16:30.400820   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.400837   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.401205   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.401221   22923 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0927 00:16:30.401414   22923 main.go:141] libmachine: (addons-364775) Calling .GetState
	I0927 00:16:30.403906   22923 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0927 00:16:30.404106   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34281
	I0927 00:16:30.404221   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:30.404635   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.404663   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38629
	I0927 00:16:30.405161   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.405182   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.405366   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.405583   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.405846   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:30.405996   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.406014   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.406064   22923 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0927 00:16:30.406314   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.406621   22923 main.go:141] libmachine: (addons-364775) Calling .GetState
	I0927 00:16:30.406763   22923 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0927 00:16:30.407772   22923 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0927 00:16:30.408030   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:30.408199   22923 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0927 00:16:30.408220   22923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0927 00:16:30.408236   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:30.409701   22923 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0927 00:16:30.409716   22923 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0927 00:16:30.411228   22923 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0927 00:16:30.411370   22923 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0927 00:16:30.411387   22923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0927 00:16:30.411406   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:30.411488   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:30.411504   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.411531   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:30.411552   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.411643   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:30.411769   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:30.411918   22923 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa Username:docker}
	I0927 00:16:30.413714   22923 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0927 00:16:30.414595   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.415012   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:30.415065   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.415352   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:30.415527   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:30.415645   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:30.415756   22923 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa Username:docker}
	I0927 00:16:30.416372   22923 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0927 00:16:30.417721   22923 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0927 00:16:30.418988   22923 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0927 00:16:30.420195   22923 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0927 00:16:30.420214   22923 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0927 00:16:30.420244   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:30.422864   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44579
	I0927 00:16:30.423200   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.423340   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.423691   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:30.423710   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.423879   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:30.424016   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.424026   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.424200   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:30.424330   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:30.424366   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.424489   22923 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa Username:docker}
	I0927 00:16:30.424704   22923 main.go:141] libmachine: (addons-364775) Calling .GetState
	I0927 00:16:30.424757   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39415
	I0927 00:16:30.425411   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:30.425899   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:30.425917   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:30.426087   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:30.426195   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:30.426431   22923 main.go:141] libmachine: (addons-364775) Calling .GetState
	I0927 00:16:30.427706   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:30.427748   22923 out.go:177]   - Using image docker.io/busybox:stable
	I0927 00:16:30.427917   22923 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0927 00:16:30.427928   22923 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0927 00:16:30.427942   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:30.430541   22923 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0927 00:16:30.431106   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.431591   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:30.431613   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.431738   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:30.431872   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:30.431985   22923 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0927 00:16:30.431995   22923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0927 00:16:30.432008   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:30.432009   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:30.432127   22923 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa Username:docker}
	W0927 00:16:30.434191   22923 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:47452->192.168.39.169:22: read: connection reset by peer
	I0927 00:16:30.434217   22923 retry.go:31] will retry after 235.279035ms: ssh: handshake failed: read tcp 192.168.39.1:47452->192.168.39.169:22: read: connection reset by peer
	I0927 00:16:30.434586   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.435008   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:30.435093   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:30.435225   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:30.435381   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:30.435528   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:30.435630   22923 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa Username:docker}
	I0927 00:16:30.687382   22923 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0927 00:16:30.687407   22923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0927 00:16:30.703808   22923 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 00:16:30.703964   22923 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0927 00:16:30.766082   22923 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0927 00:16:30.766106   22923 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0927 00:16:30.789375   22923 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0927 00:16:30.789397   22923 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0927 00:16:30.817986   22923 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0927 00:16:30.818010   22923 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0927 00:16:30.818453   22923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0927 00:16:30.818687   22923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0927 00:16:30.820723   22923 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0927 00:16:30.820738   22923 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0927 00:16:30.838202   22923 node_ready.go:35] waiting up to 6m0s for node "addons-364775" to be "Ready" ...
	I0927 00:16:30.841116   22923 node_ready.go:49] node "addons-364775" has status "Ready":"True"
	I0927 00:16:30.841135   22923 node_ready.go:38] duration metric: took 2.9055ms for node "addons-364775" to be "Ready" ...
	I0927 00:16:30.841142   22923 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 00:16:30.845387   22923 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0927 00:16:30.845426   22923 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0927 00:16:30.848404   22923 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-gd2h2" in "kube-system" namespace to be "Ready" ...
	I0927 00:16:30.890816   22923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0927 00:16:30.919824   22923 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0927 00:16:30.919846   22923 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0927 00:16:30.923045   22923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0927 00:16:30.930150   22923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0927 00:16:30.969174   22923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 00:16:30.986771   22923 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0927 00:16:30.986796   22923 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0927 00:16:31.024820   22923 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0927 00:16:31.024848   22923 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0927 00:16:31.048974   22923 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0927 00:16:31.048999   22923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0927 00:16:31.060405   22923 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0927 00:16:31.060436   22923 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0927 00:16:31.087170   22923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0927 00:16:31.097441   22923 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 00:16:31.097468   22923 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0927 00:16:31.123704   22923 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0927 00:16:31.123728   22923 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0927 00:16:31.127243   22923 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0927 00:16:31.127257   22923 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0927 00:16:31.181768   22923 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0927 00:16:31.181799   22923 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0927 00:16:31.198013   22923 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0927 00:16:31.198040   22923 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0927 00:16:31.230188   22923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0927 00:16:31.240969   22923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 00:16:31.337457   22923 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0927 00:16:31.337486   22923 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0927 00:16:31.340360   22923 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0927 00:16:31.340378   22923 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0927 00:16:31.357490   22923 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0927 00:16:31.357519   22923 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0927 00:16:31.438275   22923 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0927 00:16:31.438302   22923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0927 00:16:31.479034   22923 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0927 00:16:31.479054   22923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0927 00:16:31.506932   22923 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0927 00:16:31.506952   22923 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0927 00:16:31.551476   22923 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0927 00:16:31.551508   22923 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0927 00:16:31.628698   22923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0927 00:16:31.817687   22923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0927 00:16:31.844064   22923 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0927 00:16:31.844092   22923 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0927 00:16:32.141105   22923 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0927 00:16:32.141141   22923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0927 00:16:32.314746   22923 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0927 00:16:32.314778   22923 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0927 00:16:32.430650   22923 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0927 00:16:32.430679   22923 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0927 00:16:32.500643   22923 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0927 00:16:32.500669   22923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0927 00:16:32.618286   22923 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0927 00:16:32.618306   22923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0927 00:16:32.776416   22923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0927 00:16:32.854014   22923 pod_ready.go:103] pod "coredns-7c65d6cfc9-gd2h2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:16:32.980645   22923 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0927 00:16:32.980665   22923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0927 00:16:32.984476   22923 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.280478347s)
	I0927 00:16:32.984507   22923 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0927 00:16:33.214546   22923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.396058946s)
	I0927 00:16:33.214590   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:33.214603   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:33.214847   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:33.214864   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:33.214872   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:33.214879   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:33.215068   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:33.215082   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:33.399888   22923 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0927 00:16:33.399914   22923 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0927 00:16:33.488059   22923 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-364775" context rescaled to 1 replicas
	I0927 00:16:33.660690   22923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0927 00:16:35.195794   22923 pod_ready.go:103] pod "coredns-7c65d6cfc9-gd2h2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:16:36.275637   22923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.456917037s)
	I0927 00:16:36.275696   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:36.275710   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:36.275974   22923 main.go:141] libmachine: (addons-364775) DBG | Closing plugin on server side
	I0927 00:16:36.275983   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:36.275997   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:36.276006   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:36.276024   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:36.276207   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:36.276219   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:36.365139   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:36.365161   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:36.365407   22923 main.go:141] libmachine: (addons-364775) DBG | Closing plugin on server side
	I0927 00:16:36.365451   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:36.365468   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:37.395951   22923 pod_ready.go:103] pod "coredns-7c65d6cfc9-gd2h2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:16:37.431653   22923 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0927 00:16:37.431693   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:37.434730   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:37.435197   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:37.435228   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:37.435424   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:37.435670   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:37.435829   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:37.436039   22923 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa Username:docker}
	I0927 00:16:37.781071   22923 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0927 00:16:37.864137   22923 addons.go:234] Setting addon gcp-auth=true in "addons-364775"
	I0927 00:16:37.864191   22923 host.go:66] Checking if "addons-364775" exists ...
	I0927 00:16:37.864599   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:37.864634   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:37.880453   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44569
	I0927 00:16:37.881363   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:37.881837   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:37.881864   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:37.882238   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:37.882781   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:16:37.882817   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:16:37.897834   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43319
	I0927 00:16:37.898272   22923 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:16:37.898755   22923 main.go:141] libmachine: Using API Version  1
	I0927 00:16:37.898780   22923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:16:37.899107   22923 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:16:37.899270   22923 main.go:141] libmachine: (addons-364775) Calling .GetState
	I0927 00:16:37.900885   22923 main.go:141] libmachine: (addons-364775) Calling .DriverName
	I0927 00:16:37.901107   22923 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0927 00:16:37.901127   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHHostname
	I0927 00:16:37.903699   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:37.904060   22923 main.go:141] libmachine: (addons-364775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:bb:bf", ip: ""} in network mk-addons-364775: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:59 +0000 UTC Type:0 Mac:52:54:00:e5:bb:bf Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:addons-364775 Clientid:01:52:54:00:e5:bb:bf}
	I0927 00:16:37.904077   22923 main.go:141] libmachine: (addons-364775) DBG | domain addons-364775 has defined IP address 192.168.39.169 and MAC address 52:54:00:e5:bb:bf in network mk-addons-364775
	I0927 00:16:37.904235   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHPort
	I0927 00:16:37.904402   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHKeyPath
	I0927 00:16:37.904533   22923 main.go:141] libmachine: (addons-364775) Calling .GetSSHUsername
	I0927 00:16:37.904663   22923 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/addons-364775/id_rsa Username:docker}
	I0927 00:16:37.975730   22923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.084875146s)
	I0927 00:16:37.975779   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:37.975780   22923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.05270093s)
	I0927 00:16:37.975818   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:37.975836   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:37.975874   22923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.006677684s)
	I0927 00:16:37.975909   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:37.975920   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:37.975923   22923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.888722405s)
	I0927 00:16:37.975952   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:37.975969   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:37.975818   22923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.045636678s)
	I0927 00:16:37.975983   22923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.745766683s)
	I0927 00:16:37.975995   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:37.976002   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:37.976007   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:37.975792   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:37.976021   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:37.976074   22923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.735080002s)
	I0927 00:16:37.976097   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:37.976107   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:37.976192   22923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.347467613s)
	I0927 00:16:37.976207   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:37.976215   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:37.976527   22923 main.go:141] libmachine: (addons-364775) DBG | Closing plugin on server side
	I0927 00:16:37.976558   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:37.976566   22923 main.go:141] libmachine: (addons-364775) DBG | Closing plugin on server side
	I0927 00:16:37.976571   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:37.976580   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:37.976582   22923 main.go:141] libmachine: (addons-364775) DBG | Closing plugin on server side
	I0927 00:16:37.976587   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:37.976601   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:37.976608   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:37.976615   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:37.976613   22923 main.go:141] libmachine: (addons-364775) DBG | Closing plugin on server side
	I0927 00:16:37.976622   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:37.976647   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:37.976654   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:37.976663   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:37.976666   22923 main.go:141] libmachine: (addons-364775) DBG | Closing plugin on server side
	I0927 00:16:37.976672   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:37.976684   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:37.976691   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:37.976698   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:37.976704   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:37.976738   22923 main.go:141] libmachine: (addons-364775) DBG | Closing plugin on server side
	I0927 00:16:37.976754   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:37.976760   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:37.976797   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:37.976808   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:37.976838   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:37.976846   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:37.976854   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:37.976859   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:37.976875   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:37.976886   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:37.976894   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:37.976901   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:37.977215   22923 main.go:141] libmachine: (addons-364775) DBG | Closing plugin on server side
	I0927 00:16:37.977240   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:37.977246   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:37.977255   22923 addons.go:475] Verifying addon ingress=true in "addons-364775"
	I0927 00:16:37.977437   22923 main.go:141] libmachine: (addons-364775) DBG | Closing plugin on server side
	I0927 00:16:37.977460   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:37.977465   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:37.978150   22923 main.go:141] libmachine: (addons-364775) DBG | Closing plugin on server side
	I0927 00:16:37.978180   22923 main.go:141] libmachine: (addons-364775) DBG | Closing plugin on server side
	I0927 00:16:37.978194   22923 main.go:141] libmachine: (addons-364775) DBG | Closing plugin on server side
	I0927 00:16:37.978192   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:37.978204   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:37.978219   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:37.978225   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:37.978361   22923 main.go:141] libmachine: (addons-364775) DBG | Closing plugin on server side
	I0927 00:16:37.979355   22923 out.go:177] * Verifying ingress addon...
	I0927 00:16:37.979514   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:37.979525   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:37.979768   22923 main.go:141] libmachine: (addons-364775) DBG | Closing plugin on server side
	I0927 00:16:37.979826   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:37.979832   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:37.979841   22923 addons.go:475] Verifying addon metrics-server=true in "addons-364775"
	I0927 00:16:37.980269   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:37.980280   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:37.980288   22923 addons.go:475] Verifying addon registry=true in "addons-364775"
	I0927 00:16:37.980455   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:37.980746   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:37.980760   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:37.980768   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:37.980971   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:37.980987   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:37.981985   22923 out.go:177] * Verifying registry addon...
	I0927 00:16:37.981995   22923 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-364775 service yakd-dashboard -n yakd-dashboard
	
	I0927 00:16:37.982403   22923 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0927 00:16:37.983991   22923 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0927 00:16:38.027140   22923 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0927 00:16:38.027164   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:38.027861   22923 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0927 00:16:38.027884   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
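The kapi waiter above polls the labelled pods until they leave Pending and report Ready; outside the harness, roughly the same check can be reproduced with kubectl wait (a sketch, assuming the addons-364775 context created by the test is active and using illustrative timeouts):

	kubectl --context addons-364775 -n ingress-nginx wait pod \
	  --selector=app.kubernetes.io/name=ingress-nginx --for=condition=Ready --timeout=6m
	kubectl --context addons-364775 -n kube-system wait pod \
	  --selector=kubernetes.io/minikube-addons=registry --for=condition=Ready --timeout=10m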
	I0927 00:16:38.131340   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:38.131369   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:38.131619   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:38.131639   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:38.551465   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:38.551901   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:38.905728   22923 pod_ready.go:93] pod "coredns-7c65d6cfc9-gd2h2" in "kube-system" namespace has status "Ready":"True"
	I0927 00:16:38.905752   22923 pod_ready.go:82] duration metric: took 8.057329101s for pod "coredns-7c65d6cfc9-gd2h2" in "kube-system" namespace to be "Ready" ...
	I0927 00:16:38.905762   22923 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-szrc9" in "kube-system" namespace to be "Ready" ...
	I0927 00:16:38.947750   22923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.130011838s)
	W0927 00:16:38.947809   22923 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0927 00:16:38.947833   22923 retry.go:31] will retry after 183.128394ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
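The failure above is a CRD ordering race: the bundle applies the VolumeSnapshotClass object in the same invocation that creates its CRDs, so the custom resource can reach the API server before the new kind is discoverable, hence "ensure CRDs are installed first". The harness simply retries (at 00:16:39 it re-applies the bundle with --force, which then succeeds). A sketch of avoiding the race instead, by waiting for the CRD to be established before applying the class; the ordering is illustrative and not what the harness does, and the commands mirror the in-VM invocation from the log (run via minikube ssh -p addons-364775, where the addon manifests are staged):

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl wait --for=condition=established --timeout=60s crd/volumesnapshotclasses.snapshot.storage.k8s.io
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml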
	I0927 00:16:38.947854   22923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.171400863s)
	I0927 00:16:38.947898   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:38.947923   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:38.948190   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:38.948207   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:38.948218   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:38.948225   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:38.948480   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:38.948512   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:39.000059   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:39.000476   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:39.132046   22923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0927 00:16:39.490374   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:39.492989   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:39.801849   22923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.14111498s)
	I0927 00:16:39.801914   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:39.801915   22923 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.900787405s)
	I0927 00:16:39.801927   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:39.802242   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:39.802285   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:39.802305   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:39.802318   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:39.802316   22923 main.go:141] libmachine: (addons-364775) DBG | Closing plugin on server side
	I0927 00:16:39.802555   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:39.802569   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:39.802579   22923 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-364775"
	I0927 00:16:39.803411   22923 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0927 00:16:39.804344   22923 out.go:177] * Verifying csi-hostpath-driver addon...
	I0927 00:16:39.806163   22923 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0927 00:16:39.806896   22923 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0927 00:16:39.807410   22923 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0927 00:16:39.807425   22923 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0927 00:16:39.870942   22923 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0927 00:16:39.870973   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:39.953858   22923 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0927 00:16:39.953888   22923 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0927 00:16:39.987421   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:39.990568   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:40.013239   22923 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0927 00:16:40.013265   22923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0927 00:16:40.054642   22923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0927 00:16:40.311779   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:40.487458   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:40.488947   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:40.708018   22923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.575916247s)
	I0927 00:16:40.708075   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:40.708093   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:40.708329   22923 main.go:141] libmachine: (addons-364775) DBG | Closing plugin on server side
	I0927 00:16:40.708410   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:40.708424   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:40.708437   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:40.708458   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:40.708681   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:40.708717   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:40.812167   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:40.918341   22923 pod_ready.go:103] pod "coredns-7c65d6cfc9-szrc9" in "kube-system" namespace has status "Ready":"False"
	I0927 00:16:41.015974   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:41.018484   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:41.070353   22923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.015656922s)
	I0927 00:16:41.070410   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:41.070421   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:41.070658   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:41.070675   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:41.070686   22923 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:41.070694   22923 main.go:141] libmachine: (addons-364775) Calling .Close
	I0927 00:16:41.070909   22923 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:41.070942   22923 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:41.072773   22923 addons.go:475] Verifying addon gcp-auth=true in "addons-364775"
	I0927 00:16:41.074260   22923 out.go:177] * Verifying gcp-auth addon...
	I0927 00:16:41.077101   22923 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0927 00:16:41.089006   22923 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0927 00:16:41.089060   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:41.319255   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:41.489602   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:41.493367   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:41.589417   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:41.824980   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:42.009117   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:42.009383   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:42.097507   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:42.313572   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:42.412928   22923 pod_ready.go:98] pod "coredns-7c65d6cfc9-szrc9" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 00:16:41 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 00:16:30 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 00:16:30 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 00:16:30 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 00:16:30 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.169 HostIPs:[{IP:192.168.39.169}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-27 00:16:30 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-27 00:16:35 +0000 UTC,FinishedAt:2024-09-27 00:16:41 +0000 UTC,ContainerID:cri-o://cc2d74218c9b7b20949fa941fc7ad8d676be5e7b5aede59713e2f2c6fc72cedf,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://cc2d74218c9b7b20949fa941fc7ad8d676be5e7b5aede59713e2f2c6fc72cedf Started:0xc0022776f0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc000a82370} {Name:kube-api-access-c6xps MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc000a82380}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0927 00:16:42.412956   22923 pod_ready.go:82] duration metric: took 3.507186728s for pod "coredns-7c65d6cfc9-szrc9" in "kube-system" namespace to be "Ready" ...
	E0927 00:16:42.412968   22923 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-szrc9" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 00:16:41 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 00:16:30 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 00:16:30 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 00:16:30 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 00:16:30 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.169 HostIPs:[{IP:192.168.39.169}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-27 00:16:30 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-27 00:16:35 +0000 UTC,FinishedAt:2024-09-27 00:16:41 +0000 UTC,ContainerID:cri-o://cc2d74218c9b7b20949fa941fc7ad8d676be5e7b5aede59713e2f2c6fc72cedf,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://cc2d74218c9b7b20949fa941fc7ad8d676be5e7b5aede59713e2f2c6fc72cedf Started:0xc0022776f0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc000a82370} {Name:kube-api-access-c6xps MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc000a82380}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0927 00:16:42.412977   22923 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-364775" in "kube-system" namespace to be "Ready" ...
	I0927 00:16:42.419963   22923 pod_ready.go:93] pod "etcd-addons-364775" in "kube-system" namespace has status "Ready":"True"
	I0927 00:16:42.419981   22923 pod_ready.go:82] duration metric: took 6.997345ms for pod "etcd-addons-364775" in "kube-system" namespace to be "Ready" ...
	I0927 00:16:42.419989   22923 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-364775" in "kube-system" namespace to be "Ready" ...
	I0927 00:16:42.437266   22923 pod_ready.go:93] pod "kube-apiserver-addons-364775" in "kube-system" namespace has status "Ready":"True"
	I0927 00:16:42.437286   22923 pod_ready.go:82] duration metric: took 17.290515ms for pod "kube-apiserver-addons-364775" in "kube-system" namespace to be "Ready" ...
	I0927 00:16:42.437295   22923 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-364775" in "kube-system" namespace to be "Ready" ...
	I0927 00:16:42.456989   22923 pod_ready.go:93] pod "kube-controller-manager-addons-364775" in "kube-system" namespace has status "Ready":"True"
	I0927 00:16:42.457011   22923 pod_ready.go:82] duration metric: took 19.710449ms for pod "kube-controller-manager-addons-364775" in "kube-system" namespace to be "Ready" ...
	I0927 00:16:42.457022   22923 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vj2cl" in "kube-system" namespace to be "Ready" ...
	I0927 00:16:42.463096   22923 pod_ready.go:93] pod "kube-proxy-vj2cl" in "kube-system" namespace has status "Ready":"True"
	I0927 00:16:42.463112   22923 pod_ready.go:82] duration metric: took 6.084237ms for pod "kube-proxy-vj2cl" in "kube-system" namespace to be "Ready" ...
	I0927 00:16:42.463120   22923 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-364775" in "kube-system" namespace to be "Ready" ...
	I0927 00:16:42.487973   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:42.488283   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:42.581218   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:42.810423   22923 pod_ready.go:93] pod "kube-scheduler-addons-364775" in "kube-system" namespace has status "Ready":"True"
	I0927 00:16:42.810447   22923 pod_ready.go:82] duration metric: took 347.321728ms for pod "kube-scheduler-addons-364775" in "kube-system" namespace to be "Ready" ...
	I0927 00:16:42.810454   22923 pod_ready.go:39] duration metric: took 11.969303463s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 00:16:42.810469   22923 api_server.go:52] waiting for apiserver process to appear ...
	I0927 00:16:42.810514   22923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 00:16:42.814099   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:42.827884   22923 api_server.go:72] duration metric: took 12.569706035s to wait for apiserver process to appear ...
	I0927 00:16:42.827902   22923 api_server.go:88] waiting for apiserver healthz status ...
	I0927 00:16:42.827918   22923 api_server.go:253] Checking apiserver healthz at https://192.168.39.169:8443/healthz ...
	I0927 00:16:42.835431   22923 api_server.go:279] https://192.168.39.169:8443/healthz returned 200:
	ok
	I0927 00:16:42.837096   22923 api_server.go:141] control plane version: v1.31.1
	I0927 00:16:42.837111   22923 api_server.go:131] duration metric: took 9.203783ms to wait for apiserver health ...
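The healthz wait above issues a GET against the apiserver and treats the literal body "ok" as healthy. A rough manual equivalent from the host (a sketch: -k only skips certificate verification, and unauthenticated access to /healthz is assumed via the default system:public-info-viewer binding):

	curl -k https://192.168.39.169:8443/healthz
	# expected body: ok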
	I0927 00:16:42.837119   22923 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 00:16:42.988500   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:42.988911   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:43.087346   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:43.092767   22923 system_pods.go:59] 17 kube-system pods found
	I0927 00:16:43.092791   22923 system_pods.go:61] "coredns-7c65d6cfc9-gd2h2" [4a9f1c5a-89df-497e-a9fa-4a5d427542c0] Running
	I0927 00:16:43.092800   22923 system_pods.go:61] "csi-hostpath-attacher-0" [c4a5feee-cdbf-4a8f-9ab2-d1e28526dc7c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0927 00:16:43.092807   22923 system_pods.go:61] "csi-hostpath-resizer-0" [a9b843e4-fb3e-491a-90a1-05337ec1be6e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0927 00:16:43.092815   22923 system_pods.go:61] "csi-hostpathplugin-5jvjw" [86b14d99-6d05-417f-834c-06b97d3ff358] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0927 00:16:43.092819   22923 system_pods.go:61] "etcd-addons-364775" [c4a11540-824b-46eb-b5ff-16761d78090b] Running
	I0927 00:16:43.092823   22923 system_pods.go:61] "kube-apiserver-addons-364775" [a34af223-8b21-4d2e-acc8-f35f72a84d89] Running
	I0927 00:16:43.092827   22923 system_pods.go:61] "kube-controller-manager-addons-364775" [d41167fe-9862-4644-a4a2-5891b829c263] Running
	I0927 00:16:43.092833   22923 system_pods.go:61] "kube-ingress-dns-minikube" [8bb056cc-4ad8-48da-bad9-aec78168a573] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0927 00:16:43.092836   22923 system_pods.go:61] "kube-proxy-vj2cl" [f2579736-b094-4822-82ce-2ce53d815d92] Running
	I0927 00:16:43.092840   22923 system_pods.go:61] "kube-scheduler-addons-364775" [87532128-92ea-4e82-8f4b-e05bba39380d] Running
	I0927 00:16:43.092849   22923 system_pods.go:61] "metrics-server-84c5f94fbc-h74zz" [1ee23e82-6d41-48b5-a303-16f6ebd60172] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 00:16:43.092855   22923 system_pods.go:61] "nvidia-device-plugin-daemonset-gvjn8" [2de30fac-4d6c-4922-b784-e9801df8f16a] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0927 00:16:43.092862   22923 system_pods.go:61] "registry-66c9cd494c-kdt5f" [652ee744-ff06-40fe-a66f-aabff5476e31] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0927 00:16:43.092867   22923 system_pods.go:61] "registry-proxy-2rlvs" [5080c804-a6a8-4239-bd3f-a89d8f114f0c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0927 00:16:43.092875   22923 system_pods.go:61] "snapshot-controller-56fcc65765-b777z" [beb5ceb2-51fe-49bc-842c-800de73b7628] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0927 00:16:43.092880   22923 system_pods.go:61] "snapshot-controller-56fcc65765-s5z9r" [ba81ccfa-12e1-42cd-a9f0-d1cbff990eb6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0927 00:16:43.092888   22923 system_pods.go:61] "storage-provisioner" [b2787e80-d152-46a1-9672-af83ebbb8e9d] Running
	I0927 00:16:43.092895   22923 system_pods.go:74] duration metric: took 255.770173ms to wait for pod list to return data ...
	I0927 00:16:43.092901   22923 default_sa.go:34] waiting for default service account to be created ...
	I0927 00:16:43.209797   22923 default_sa.go:45] found service account: "default"
	I0927 00:16:43.209820   22923 default_sa.go:55] duration metric: took 116.910938ms for default service account to be created ...
	I0927 00:16:43.209828   22923 system_pods.go:116] waiting for k8s-apps to be running ...
	I0927 00:16:43.311723   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:43.415743   22923 system_pods.go:86] 17 kube-system pods found
	I0927 00:16:43.415771   22923 system_pods.go:89] "coredns-7c65d6cfc9-gd2h2" [4a9f1c5a-89df-497e-a9fa-4a5d427542c0] Running
	I0927 00:16:43.415779   22923 system_pods.go:89] "csi-hostpath-attacher-0" [c4a5feee-cdbf-4a8f-9ab2-d1e28526dc7c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0927 00:16:43.415785   22923 system_pods.go:89] "csi-hostpath-resizer-0" [a9b843e4-fb3e-491a-90a1-05337ec1be6e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0927 00:16:43.415793   22923 system_pods.go:89] "csi-hostpathplugin-5jvjw" [86b14d99-6d05-417f-834c-06b97d3ff358] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0927 00:16:43.415798   22923 system_pods.go:89] "etcd-addons-364775" [c4a11540-824b-46eb-b5ff-16761d78090b] Running
	I0927 00:16:43.415803   22923 system_pods.go:89] "kube-apiserver-addons-364775" [a34af223-8b21-4d2e-acc8-f35f72a84d89] Running
	I0927 00:16:43.415807   22923 system_pods.go:89] "kube-controller-manager-addons-364775" [d41167fe-9862-4644-a4a2-5891b829c263] Running
	I0927 00:16:43.415813   22923 system_pods.go:89] "kube-ingress-dns-minikube" [8bb056cc-4ad8-48da-bad9-aec78168a573] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0927 00:16:43.415817   22923 system_pods.go:89] "kube-proxy-vj2cl" [f2579736-b094-4822-82ce-2ce53d815d92] Running
	I0927 00:16:43.415824   22923 system_pods.go:89] "kube-scheduler-addons-364775" [87532128-92ea-4e82-8f4b-e05bba39380d] Running
	I0927 00:16:43.415829   22923 system_pods.go:89] "metrics-server-84c5f94fbc-h74zz" [1ee23e82-6d41-48b5-a303-16f6ebd60172] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 00:16:43.415837   22923 system_pods.go:89] "nvidia-device-plugin-daemonset-gvjn8" [2de30fac-4d6c-4922-b784-e9801df8f16a] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0927 00:16:43.415842   22923 system_pods.go:89] "registry-66c9cd494c-kdt5f" [652ee744-ff06-40fe-a66f-aabff5476e31] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0927 00:16:43.415848   22923 system_pods.go:89] "registry-proxy-2rlvs" [5080c804-a6a8-4239-bd3f-a89d8f114f0c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0927 00:16:43.415853   22923 system_pods.go:89] "snapshot-controller-56fcc65765-b777z" [beb5ceb2-51fe-49bc-842c-800de73b7628] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0927 00:16:43.415859   22923 system_pods.go:89] "snapshot-controller-56fcc65765-s5z9r" [ba81ccfa-12e1-42cd-a9f0-d1cbff990eb6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0927 00:16:43.415864   22923 system_pods.go:89] "storage-provisioner" [b2787e80-d152-46a1-9672-af83ebbb8e9d] Running
	I0927 00:16:43.415873   22923 system_pods.go:126] duration metric: took 206.040673ms to wait for k8s-apps to be running ...
	I0927 00:16:43.415880   22923 system_svc.go:44] waiting for kubelet service to be running ....
	I0927 00:16:43.415924   22923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 00:16:43.430904   22923 system_svc.go:56] duration metric: took 15.015476ms WaitForService to wait for kubelet
	I0927 00:16:43.430932   22923 kubeadm.go:582] duration metric: took 13.172753467s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 00:16:43.430948   22923 node_conditions.go:102] verifying NodePressure condition ...
	I0927 00:16:43.487452   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:43.487493   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:43.582042   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:43.610676   22923 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 00:16:43.610701   22923 node_conditions.go:123] node cpu capacity is 2
	I0927 00:16:43.610712   22923 node_conditions.go:105] duration metric: took 179.759493ms to run NodePressure ...
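The NodePressure step logs the node's capacity figures read from its status (cpu 2, ephemeral-storage 17734596Ki above); the same values can be pulled by hand with a jsonpath query (a sketch, assuming the node name matches the profile name):

	kubectl --context addons-364775 get node addons-364775 \
	  -o jsonpath='{.status.capacity.cpu}{"\n"}{.status.capacity.ephemeral-storage}{"\n"}'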
	I0927 00:16:43.610722   22923 start.go:241] waiting for startup goroutines ...
	I0927 00:16:43.812000   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:43.992855   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:43.993405   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:44.094833   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:44.312025   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:44.488378   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:44.488875   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:44.580616   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:44.812847   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:44.987339   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:44.987844   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:45.081111   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:45.311986   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:45.488838   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:45.494394   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:45.588405   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:45.812585   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:45.988224   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:45.989896   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:46.082148   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:46.311599   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:46.485928   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:46.488359   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:46.581225   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:46.811437   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:46.986958   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:46.988594   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:47.080381   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:47.311967   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:47.487137   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:47.487881   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:47.580513   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:47.812233   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:47.987205   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:47.988170   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:48.080591   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:48.312071   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:48.487224   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:48.488731   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:48.580104   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:48.811251   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:48.987100   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:48.987514   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:49.080480   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:49.311488   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:49.486957   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:49.488676   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:49.580612   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:49.811224   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:49.990265   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:49.991510   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:50.082172   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:50.313347   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:50.486985   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:50.488717   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:50.582659   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:50.812000   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:50.988005   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:50.988994   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:51.081167   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:51.312257   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:51.486854   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:51.489465   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:51.580795   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:51.812289   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:51.987066   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:51.988257   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:52.081108   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:52.312912   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:52.486985   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:52.488399   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:52.581755   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:52.814422   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:52.987549   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:52.987829   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:53.080678   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:53.314523   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:53.488331   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:53.488764   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:53.580817   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:53.812217   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:53.986729   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:53.988945   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:54.080778   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:54.312205   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:54.486448   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:54.487803   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:54.580761   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:54.811520   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:54.986634   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:54.988978   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:55.080800   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:55.311991   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:55.490944   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:55.493634   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:55.580263   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:55.812139   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:55.987177   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:55.987367   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:56.081310   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:56.311167   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:56.488842   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:56.488988   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:56.581030   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:56.812978   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:57.543832   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:57.543896   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:57.544370   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:57.544723   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:57.550190   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:57.550636   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:57.581484   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:57.811591   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:57.988174   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:57.988206   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:58.081874   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:58.312600   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:58.486504   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:58.487586   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:58.580249   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:58.811581   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:58.986774   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:58.987922   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:59.080834   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:59.311658   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:59.487196   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:59.488229   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:59.580181   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:59.812375   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:59.988448   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:59.988687   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:00.080252   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:00.311409   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:00.487009   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:00.488155   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:00.581280   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:00.811845   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:00.987325   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:00.989570   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:01.080515   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:01.311993   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:01.487850   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:01.489334   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:01.580814   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:01.811806   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:01.986995   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:01.988430   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:02.080254   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:02.311725   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:02.487667   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:02.488220   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:02.580912   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:03.090517   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:03.090639   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:03.091263   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:03.091653   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:03.311887   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:03.487140   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:03.488145   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:03.581320   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:03.811596   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:03.987251   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:03.989014   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:04.081778   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:04.312130   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:04.487412   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:04.488309   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:04.580589   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:04.811892   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:04.987356   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:04.987417   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:05.081474   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:05.311978   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:05.487432   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:05.487863   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:05.580682   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:05.812085   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:05.988000   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:05.988066   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:06.080989   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:06.311398   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:06.486561   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:06.488291   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:06.580935   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:06.813281   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:06.986571   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:06.988032   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:07.080913   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:07.314207   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:07.486814   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:07.488906   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:07.580735   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:07.812650   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:07.986719   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:07.987173   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:08.081186   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:08.311716   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:08.486681   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:08.487853   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:08.580832   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:08.812363   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:08.986729   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:08.988493   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:09.081403   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:09.312278   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:09.485989   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:09.487569   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:09.580021   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:09.810913   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:09.987126   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:09.987866   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:10.080956   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:10.312137   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:10.487288   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:10.488658   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:10.580334   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:10.811041   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:10.987011   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:10.987681   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:11.080105   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:11.311345   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:11.486779   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:11.487979   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:11.581412   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:11.811943   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:11.987698   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:11.988990   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:12.080887   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:12.311909   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:12.489631   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:12.489995   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:12.588488   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:12.811700   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:12.987600   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:12.988206   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:13.081015   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:13.311938   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:13.494362   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:13.494760   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:13.580352   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:13.812378   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:13.986892   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:13.988433   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:14.080520   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:14.312162   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:14.489857   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:14.494879   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:14.581191   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:14.811835   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:14.987031   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:14.988412   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:15.080463   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:15.312254   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:15.492564   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:15.492913   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:15.580514   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:15.811411   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:15.986710   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:15.988183   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:16.082151   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:16.311207   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:16.488013   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:16.488851   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:16.580681   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:16.811685   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:16.987749   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:16.988504   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:17.080470   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:17.311695   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:17.486783   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:17.487109   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:17:17.581377   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:17.811534   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:17.986726   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:17.987427   22923 kapi.go:107] duration metric: took 40.003435933s to wait for kubernetes.io/minikube-addons=registry ...
	I0927 00:17:18.081888   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:18.312758   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:18.487322   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:18.581069   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:18.811131   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:18.987552   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:19.081741   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:19.312438   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:19.486923   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:19.580490   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:19.811952   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:19.987035   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:20.081683   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:20.311815   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:20.487115   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:20.580786   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:20.812516   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:20.986767   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:21.081624   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:21.499313   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:21.500317   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:21.580769   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:21.812245   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:21.988673   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:22.080678   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:22.312325   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:22.486578   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:22.582419   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:22.811470   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:22.986785   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:23.080233   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:23.311183   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:23.486602   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:23.580948   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:23.812622   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:23.987481   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:24.081064   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:24.310966   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:24.486849   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:24.580734   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:24.811250   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:24.986458   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:25.083062   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:25.312905   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:25.488190   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:25.586419   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:25.812210   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:25.987787   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:26.081106   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:26.310603   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:26.503116   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:26.580733   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:26.812493   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:26.987376   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:27.080712   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:27.312863   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:27.486929   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:27.581037   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:27.811603   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:27.987405   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:28.080637   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:28.311085   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:28.486056   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:28.580113   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:28.811368   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:28.986515   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:29.081058   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:29.311442   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:29.486947   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:29.580754   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:29.811655   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:29.987571   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:30.080977   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:30.312032   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:30.486723   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:30.581611   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:30.811778   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:30.987653   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:31.084236   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:31.311594   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:31.486542   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:31.581512   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:31.826040   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:31.987096   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:32.080580   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:32.312000   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:32.487673   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:32.581375   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:32.812041   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:32.988980   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:33.090694   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:33.312326   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:33.488231   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:33.580777   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:33.811345   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:33.986236   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:34.081390   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:34.312086   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:34.487244   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:34.581175   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:34.813913   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:34.991040   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:35.090876   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:35.313501   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:35.486433   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:35.583246   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:35.811699   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:35.987680   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:36.080748   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:36.328503   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:36.488009   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:36.581253   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:36.810998   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:36.987755   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:37.080636   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:37.311688   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:37.486973   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:37.580599   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:37.812272   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:37.986591   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:38.081184   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:38.311337   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:38.487175   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:38.581016   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:38.813136   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:38.987107   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:39.080496   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:39.312041   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:39.486941   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:39.587727   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:39.811898   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:39.988300   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:40.081007   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:40.312655   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:40.486841   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:40.583017   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:40.814862   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:40.991378   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:41.084949   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:41.312488   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:41.486705   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:41.583208   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:41.812185   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:41.987474   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:42.081648   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:42.320540   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:42.487828   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:42.588281   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:42.811937   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:42.987008   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:43.081062   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:43.312344   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:43.489462   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:43.580778   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:43.812433   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:43.987514   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:44.087429   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:44.315287   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:44.487711   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:44.580200   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:44.811873   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:45.000196   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:45.080558   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:45.314997   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:45.492610   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:45.581681   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:45.815128   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:45.987137   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:46.080783   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:46.312557   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:46.487720   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:46.583038   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:46.812051   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:46.986544   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:47.081350   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:47.311599   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:47.487110   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:47.580700   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:47.812997   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:47.986922   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:48.080420   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:48.311397   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:48.486365   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:48.581127   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:48.815408   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:48.987143   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:49.080998   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:49.312595   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:49.486745   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:49.581175   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:49.812100   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:49.986765   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:50.080703   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:50.312173   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:50.487469   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:50.580789   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:50.813167   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:51.004072   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:51.082921   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:51.315081   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:51.486907   22923 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:51.582951   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:51.812667   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:51.986763   22923 kapi.go:107] duration metric: took 1m14.004357399s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0927 00:17:52.081726   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:52.312108   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:52.581247   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:52.811383   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:53.081164   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:53.311077   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:53.580614   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:53.811860   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:54.085731   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:54.311903   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:54.581015   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:54.812698   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:55.080114   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:55.312140   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:55.580929   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:55.812076   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:56.080795   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:56.315916   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:56.580324   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:56.813652   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:57.081490   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:57.318121   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:57.580543   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:57.813190   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:58.081274   22923 kapi.go:107] duration metric: took 1m17.004168732s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0927 00:17:58.083013   22923 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-364775 cluster.
	I0927 00:17:58.084321   22923 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0927 00:17:58.085650   22923 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
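	The three gcp-auth messages above amount to a short how-to. A minimal sketch of opting a single pod out of credential mounting, assuming the webhook keys off a gcp-auth-skip-secret label (the value "true", the pod name skip-demo, and the busybox image are placeholders, not taken from this run):

	  kubectl --context addons-364775 run skip-demo --image=busybox --labels="gcp-auth-skip-secret=true" -- sleep 3600

	And, per the last message, pods created before the addon finished only pick up the mount after being recreated or after re-running the enable step with --refresh (profile name assumed to match the cluster name used in this report):

	  minikube -p addons-364775 addons enable gcp-auth --refresh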
	I0927 00:17:58.311273   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:58.813554   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:59.314920   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:59.811122   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:18:00.312742   22923 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:18:00.813283   22923 kapi.go:107] duration metric: took 1m21.006383462s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0927 00:18:00.814917   22923 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner-rancher, storage-provisioner, ingress-dns, nvidia-device-plugin, metrics-server, yakd, default-storageclass, inspektor-gadget, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0927 00:18:00.816192   22923 addons.go:510] duration metric: took 1m30.557986461s for enable addons: enabled=[cloud-spanner storage-provisioner-rancher storage-provisioner ingress-dns nvidia-device-plugin metrics-server yakd default-storageclass inspektor-gadget volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0927 00:18:00.816230   22923 start.go:246] waiting for cluster config update ...
	I0927 00:18:00.816255   22923 start.go:255] writing updated cluster config ...
	I0927 00:18:00.816798   22923 ssh_runner.go:195] Run: rm -f paused
	I0927 00:18:00.876391   22923 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0927 00:18:00.878075   22923 out.go:177] * Done! kubectl is now configured to use "addons-364775" cluster and "default" namespace by default
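	Since the start log ends with kubectl already pointed at the new cluster (client and server both at 1.31.1, no skew), a quick sanity check could look like the following; the explicit --context flag is optional here given the default was just set, and is shown only for clarity:

	  kubectl config current-context
	  kubectl --context addons-364775 get pods -n kube-system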
	
	
	==> CRI-O <==
	Sep 27 00:31:24 addons-364775 crio[667]: time="2024-09-27 00:31:24.299074530Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397084299048243,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:563692,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b57fa913-e054-4a3a-bf76-b95fabc72679 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:31:24 addons-364775 crio[667]: time="2024-09-27 00:31:24.299553146Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=673815dd-57cd-4c2c-88b2-4ba677324e5f name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:31:24 addons-364775 crio[667]: time="2024-09-27 00:31:24.299600202Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=673815dd-57cd-4c2c-88b2-4ba677324e5f name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:31:24 addons-364775 crio[667]: time="2024-09-27 00:31:24.299854103Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9758e9a4411fe087bc8831762671c4f6b47d76e38e4273fca5dd22b8a7456278,PodSandboxId:648120743e719c8b7d3a098c00d3960cf85955cdc24c522fa67cade5840d070a,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1727396955996124844,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-x9hv6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 86a23b4f-e160-433b-b168-d9458fb8b1de,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34468cf471df6b4d1719cac0509d0ac2e68794dbbb2e0bd0454bed19262aac76,PodSandboxId:d1dd36f55b9f4df75602b762e9d7c54990b8b804646bcd7232366294a7a8a44d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727396817750350677,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 79a9bd72-f93d-4276-b274-754e05f94f32,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44f5c0760c47e0ae8b4f8bae5ad90bd953ca8d8938486256754d700af225e8fe,PodSandboxId:3f91389aebb948a4455c2f88073d3e783525caebdf4a263e7236841b5bb1afd5,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1727396277275483601,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-xndcj,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 8f6a3c0b-7425-4b56-b74c-882bc39a365a,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77e2cbcfd0c9c671e3819d532fbc1eb140f08a91746f385066cfa7816bb23f31,PodSandboxId:e55373ee380963dbf7c0993260242c8962e6b10c6ce9d89e167afcab86ae1828,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1727396201136847565,Labels:map[string]string{io.kubernetes.container.name: metr
ics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-h74zz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ee23e82-6d41-48b5-a303-16f6ebd60172,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2392c10311ecba4ad854e936976dfeca45567492e61de8604f1324981400707e,PodSandboxId:c88fbf538e03933e6e355ca88933702b9d752071bbf75429d386ee325a9ded3b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNN
ING,CreatedAt:1727396197101745329,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2787e80-d152-46a1-9672-af83ebbb8e9d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb092a183ee879a4948c4ef6efe4289548da1f2948fe91a1b2ef6ac8db5a62a2,PodSandboxId:9c525627d0e811d0f823065b6bbe1f17c4cfb5fbc4689f3775ccb5749a360d32,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727396195
134163553,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gd2h2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a9f1c5a-89df-497e-a9fa-4a5d427542c0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa7e6a02565d07c2042b8e4832d33799151a9b767813a6f56f5ad935f6f92586,PodSandboxId:24f13f826689a603fa3389d546afc6e1932efac63d260ce80320c7c00e451ff7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf0
6a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727396190965652931,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vj2cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2579736-b094-4822-82ce-2ce53d815d92,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee201c0719a52c59263614ccb1b06b1ed92df1c3e374d2bec21766eef5129754,PodSandboxId:27e25445505608eef7b597a702838b106cc52f7032b8da7078df79fcaa090c65,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727396180016263956,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-364775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 189875bacab913074c40f02258ce917c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:941f64fde84f05119ee38d1a5464cd871c06b706b54fa1fe284535e8214009c8,PodSandboxId:4602faee6ddead3caef7fcd709a94705f68ab971149a7cb0ff5949d9d9af4260,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler
:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727396179965876700,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-364775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e6f174888739dcf82da53be270fcf0b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d21d052488b358d50d3915ffdf2b08eee589a26c15c59d3f1480ede3811db54,PodSandboxId:6bb1edfce2faf865f3ed5b681c2fcb8082f56cd827dd4c23ed98c03d31ab4dfa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1
fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727396179941558858,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-364775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8210072b33b53cf82c21ea71cd377f9f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02d48ea4cc0d31074e83240e2912b935fa3a7e4030e676e56a97fdf651652bee,PodSandboxId:81dc5c65d7d85ee3fc141806c54fe9d5547728bad51f9d951bd05c464b6ee1f3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3
d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727396179935671742,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-364775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25c3dce61f3e473bca9c62fbb58b9036,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=673815dd-57cd-4c2c-88b2-4ba677324e5f name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:31:24 addons-364775 crio[667]: time="2024-09-27 00:31:24.336046855Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=87c2bdb9-9492-42a1-a993-3a26c9f34c36 name=/runtime.v1.RuntimeService/Version
	Sep 27 00:31:24 addons-364775 crio[667]: time="2024-09-27 00:31:24.336136215Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=87c2bdb9-9492-42a1-a993-3a26c9f34c36 name=/runtime.v1.RuntimeService/Version
	Sep 27 00:31:24 addons-364775 crio[667]: time="2024-09-27 00:31:24.337723898Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d2b0fc01-0af8-4714-85c9-5694195b6223 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:31:24 addons-364775 crio[667]: time="2024-09-27 00:31:24.338832673Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397084338804270,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:563692,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d2b0fc01-0af8-4714-85c9-5694195b6223 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:31:24 addons-364775 crio[667]: time="2024-09-27 00:31:24.339402779Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6afa31ed-0f87-408c-8d1c-8ab4ef402ea4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:31:24 addons-364775 crio[667]: time="2024-09-27 00:31:24.339462808Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6afa31ed-0f87-408c-8d1c-8ab4ef402ea4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:31:24 addons-364775 crio[667]: time="2024-09-27 00:31:24.339725358Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9758e9a4411fe087bc8831762671c4f6b47d76e38e4273fca5dd22b8a7456278,PodSandboxId:648120743e719c8b7d3a098c00d3960cf85955cdc24c522fa67cade5840d070a,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1727396955996124844,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-x9hv6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 86a23b4f-e160-433b-b168-d9458fb8b1de,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34468cf471df6b4d1719cac0509d0ac2e68794dbbb2e0bd0454bed19262aac76,PodSandboxId:d1dd36f55b9f4df75602b762e9d7c54990b8b804646bcd7232366294a7a8a44d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727396817750350677,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 79a9bd72-f93d-4276-b274-754e05f94f32,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44f5c0760c47e0ae8b4f8bae5ad90bd953ca8d8938486256754d700af225e8fe,PodSandboxId:3f91389aebb948a4455c2f88073d3e783525caebdf4a263e7236841b5bb1afd5,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1727396277275483601,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-xndcj,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 8f6a3c0b-7425-4b56-b74c-882bc39a365a,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77e2cbcfd0c9c671e3819d532fbc1eb140f08a91746f385066cfa7816bb23f31,PodSandboxId:e55373ee380963dbf7c0993260242c8962e6b10c6ce9d89e167afcab86ae1828,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1727396201136847565,Labels:map[string]string{io.kubernetes.container.name: metr
ics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-h74zz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ee23e82-6d41-48b5-a303-16f6ebd60172,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2392c10311ecba4ad854e936976dfeca45567492e61de8604f1324981400707e,PodSandboxId:c88fbf538e03933e6e355ca88933702b9d752071bbf75429d386ee325a9ded3b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNN
ING,CreatedAt:1727396197101745329,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2787e80-d152-46a1-9672-af83ebbb8e9d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb092a183ee879a4948c4ef6efe4289548da1f2948fe91a1b2ef6ac8db5a62a2,PodSandboxId:9c525627d0e811d0f823065b6bbe1f17c4cfb5fbc4689f3775ccb5749a360d32,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727396195
134163553,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gd2h2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a9f1c5a-89df-497e-a9fa-4a5d427542c0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa7e6a02565d07c2042b8e4832d33799151a9b767813a6f56f5ad935f6f92586,PodSandboxId:24f13f826689a603fa3389d546afc6e1932efac63d260ce80320c7c00e451ff7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf0
6a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727396190965652931,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vj2cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2579736-b094-4822-82ce-2ce53d815d92,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee201c0719a52c59263614ccb1b06b1ed92df1c3e374d2bec21766eef5129754,PodSandboxId:27e25445505608eef7b597a702838b106cc52f7032b8da7078df79fcaa090c65,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727396180016263956,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-364775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 189875bacab913074c40f02258ce917c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:941f64fde84f05119ee38d1a5464cd871c06b706b54fa1fe284535e8214009c8,PodSandboxId:4602faee6ddead3caef7fcd709a94705f68ab971149a7cb0ff5949d9d9af4260,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler
:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727396179965876700,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-364775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e6f174888739dcf82da53be270fcf0b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d21d052488b358d50d3915ffdf2b08eee589a26c15c59d3f1480ede3811db54,PodSandboxId:6bb1edfce2faf865f3ed5b681c2fcb8082f56cd827dd4c23ed98c03d31ab4dfa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1
fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727396179941558858,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-364775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8210072b33b53cf82c21ea71cd377f9f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02d48ea4cc0d31074e83240e2912b935fa3a7e4030e676e56a97fdf651652bee,PodSandboxId:81dc5c65d7d85ee3fc141806c54fe9d5547728bad51f9d951bd05c464b6ee1f3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3
d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727396179935671742,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-364775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25c3dce61f3e473bca9c62fbb58b9036,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6afa31ed-0f87-408c-8d1c-8ab4ef402ea4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:31:24 addons-364775 crio[667]: time="2024-09-27 00:31:24.377493915Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b6ca29ba-af2f-4c18-8f17-0e25a50fb6bc name=/runtime.v1.RuntimeService/Version
	Sep 27 00:31:24 addons-364775 crio[667]: time="2024-09-27 00:31:24.377581255Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b6ca29ba-af2f-4c18-8f17-0e25a50fb6bc name=/runtime.v1.RuntimeService/Version
	Sep 27 00:31:24 addons-364775 crio[667]: time="2024-09-27 00:31:24.378684372Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4f85d31f-ddcb-4c03-aae3-ac1e05e1c706 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:31:24 addons-364775 crio[667]: time="2024-09-27 00:31:24.380009775Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397084379982728,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:563692,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4f85d31f-ddcb-4c03-aae3-ac1e05e1c706 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:31:24 addons-364775 crio[667]: time="2024-09-27 00:31:24.380442789Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8e1c4a63-ef8c-4996-818b-8ff7a603021b name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:31:24 addons-364775 crio[667]: time="2024-09-27 00:31:24.380500293Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8e1c4a63-ef8c-4996-818b-8ff7a603021b name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:31:24 addons-364775 crio[667]: time="2024-09-27 00:31:24.381597260Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9758e9a4411fe087bc8831762671c4f6b47d76e38e4273fca5dd22b8a7456278,PodSandboxId:648120743e719c8b7d3a098c00d3960cf85955cdc24c522fa67cade5840d070a,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1727396955996124844,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-x9hv6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 86a23b4f-e160-433b-b168-d9458fb8b1de,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34468cf471df6b4d1719cac0509d0ac2e68794dbbb2e0bd0454bed19262aac76,PodSandboxId:d1dd36f55b9f4df75602b762e9d7c54990b8b804646bcd7232366294a7a8a44d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727396817750350677,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 79a9bd72-f93d-4276-b274-754e05f94f32,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44f5c0760c47e0ae8b4f8bae5ad90bd953ca8d8938486256754d700af225e8fe,PodSandboxId:3f91389aebb948a4455c2f88073d3e783525caebdf4a263e7236841b5bb1afd5,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1727396277275483601,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-xndcj,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 8f6a3c0b-7425-4b56-b74c-882bc39a365a,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77e2cbcfd0c9c671e3819d532fbc1eb140f08a91746f385066cfa7816bb23f31,PodSandboxId:e55373ee380963dbf7c0993260242c8962e6b10c6ce9d89e167afcab86ae1828,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1727396201136847565,Labels:map[string]string{io.kubernetes.container.name: metr
ics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-h74zz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ee23e82-6d41-48b5-a303-16f6ebd60172,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2392c10311ecba4ad854e936976dfeca45567492e61de8604f1324981400707e,PodSandboxId:c88fbf538e03933e6e355ca88933702b9d752071bbf75429d386ee325a9ded3b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNN
ING,CreatedAt:1727396197101745329,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2787e80-d152-46a1-9672-af83ebbb8e9d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb092a183ee879a4948c4ef6efe4289548da1f2948fe91a1b2ef6ac8db5a62a2,PodSandboxId:9c525627d0e811d0f823065b6bbe1f17c4cfb5fbc4689f3775ccb5749a360d32,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727396195
134163553,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gd2h2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a9f1c5a-89df-497e-a9fa-4a5d427542c0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa7e6a02565d07c2042b8e4832d33799151a9b767813a6f56f5ad935f6f92586,PodSandboxId:24f13f826689a603fa3389d546afc6e1932efac63d260ce80320c7c00e451ff7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf0
6a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727396190965652931,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vj2cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2579736-b094-4822-82ce-2ce53d815d92,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee201c0719a52c59263614ccb1b06b1ed92df1c3e374d2bec21766eef5129754,PodSandboxId:27e25445505608eef7b597a702838b106cc52f7032b8da7078df79fcaa090c65,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727396180016263956,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-364775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 189875bacab913074c40f02258ce917c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:941f64fde84f05119ee38d1a5464cd871c06b706b54fa1fe284535e8214009c8,PodSandboxId:4602faee6ddead3caef7fcd709a94705f68ab971149a7cb0ff5949d9d9af4260,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler
:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727396179965876700,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-364775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e6f174888739dcf82da53be270fcf0b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d21d052488b358d50d3915ffdf2b08eee589a26c15c59d3f1480ede3811db54,PodSandboxId:6bb1edfce2faf865f3ed5b681c2fcb8082f56cd827dd4c23ed98c03d31ab4dfa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1
fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727396179941558858,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-364775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8210072b33b53cf82c21ea71cd377f9f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02d48ea4cc0d31074e83240e2912b935fa3a7e4030e676e56a97fdf651652bee,PodSandboxId:81dc5c65d7d85ee3fc141806c54fe9d5547728bad51f9d951bd05c464b6ee1f3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3
d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727396179935671742,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-364775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25c3dce61f3e473bca9c62fbb58b9036,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8e1c4a63-ef8c-4996-818b-8ff7a603021b name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:31:24 addons-364775 crio[667]: time="2024-09-27 00:31:24.424786803Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7fe1bef0-9a51-4418-8eb5-d62605d9db50 name=/runtime.v1.RuntimeService/Version
	Sep 27 00:31:24 addons-364775 crio[667]: time="2024-09-27 00:31:24.424879753Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7fe1bef0-9a51-4418-8eb5-d62605d9db50 name=/runtime.v1.RuntimeService/Version
	Sep 27 00:31:24 addons-364775 crio[667]: time="2024-09-27 00:31:24.426093691Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=95a70fb1-2b91-4c47-ac1e-27a019555777 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:31:24 addons-364775 crio[667]: time="2024-09-27 00:31:24.427252637Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397084427224085,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:563692,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=95a70fb1-2b91-4c47-ac1e-27a019555777 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:31:24 addons-364775 crio[667]: time="2024-09-27 00:31:24.428013022Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=83ca2950-7590-45b3-9af3-d6f92230e246 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:31:24 addons-364775 crio[667]: time="2024-09-27 00:31:24.428065976Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=83ca2950-7590-45b3-9af3-d6f92230e246 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:31:24 addons-364775 crio[667]: time="2024-09-27 00:31:24.428292572Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9758e9a4411fe087bc8831762671c4f6b47d76e38e4273fca5dd22b8a7456278,PodSandboxId:648120743e719c8b7d3a098c00d3960cf85955cdc24c522fa67cade5840d070a,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1727396955996124844,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-x9hv6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 86a23b4f-e160-433b-b168-d9458fb8b1de,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34468cf471df6b4d1719cac0509d0ac2e68794dbbb2e0bd0454bed19262aac76,PodSandboxId:d1dd36f55b9f4df75602b762e9d7c54990b8b804646bcd7232366294a7a8a44d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727396817750350677,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 79a9bd72-f93d-4276-b274-754e05f94f32,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44f5c0760c47e0ae8b4f8bae5ad90bd953ca8d8938486256754d700af225e8fe,PodSandboxId:3f91389aebb948a4455c2f88073d3e783525caebdf4a263e7236841b5bb1afd5,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1727396277275483601,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-xndcj,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 8f6a3c0b-7425-4b56-b74c-882bc39a365a,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77e2cbcfd0c9c671e3819d532fbc1eb140f08a91746f385066cfa7816bb23f31,PodSandboxId:e55373ee380963dbf7c0993260242c8962e6b10c6ce9d89e167afcab86ae1828,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1727396201136847565,Labels:map[string]string{io.kubernetes.container.name: metr
ics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-h74zz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ee23e82-6d41-48b5-a303-16f6ebd60172,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2392c10311ecba4ad854e936976dfeca45567492e61de8604f1324981400707e,PodSandboxId:c88fbf538e03933e6e355ca88933702b9d752071bbf75429d386ee325a9ded3b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNN
ING,CreatedAt:1727396197101745329,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2787e80-d152-46a1-9672-af83ebbb8e9d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb092a183ee879a4948c4ef6efe4289548da1f2948fe91a1b2ef6ac8db5a62a2,PodSandboxId:9c525627d0e811d0f823065b6bbe1f17c4cfb5fbc4689f3775ccb5749a360d32,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727396195
134163553,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gd2h2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a9f1c5a-89df-497e-a9fa-4a5d427542c0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa7e6a02565d07c2042b8e4832d33799151a9b767813a6f56f5ad935f6f92586,PodSandboxId:24f13f826689a603fa3389d546afc6e1932efac63d260ce80320c7c00e451ff7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf0
6a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727396190965652931,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vj2cl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2579736-b094-4822-82ce-2ce53d815d92,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee201c0719a52c59263614ccb1b06b1ed92df1c3e374d2bec21766eef5129754,PodSandboxId:27e25445505608eef7b597a702838b106cc52f7032b8da7078df79fcaa090c65,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727396180016263956,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-364775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 189875bacab913074c40f02258ce917c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:941f64fde84f05119ee38d1a5464cd871c06b706b54fa1fe284535e8214009c8,PodSandboxId:4602faee6ddead3caef7fcd709a94705f68ab971149a7cb0ff5949d9d9af4260,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler
:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727396179965876700,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-364775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e6f174888739dcf82da53be270fcf0b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d21d052488b358d50d3915ffdf2b08eee589a26c15c59d3f1480ede3811db54,PodSandboxId:6bb1edfce2faf865f3ed5b681c2fcb8082f56cd827dd4c23ed98c03d31ab4dfa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1
fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727396179941558858,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-364775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8210072b33b53cf82c21ea71cd377f9f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02d48ea4cc0d31074e83240e2912b935fa3a7e4030e676e56a97fdf651652bee,PodSandboxId:81dc5c65d7d85ee3fc141806c54fe9d5547728bad51f9d951bd05c464b6ee1f3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3
d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727396179935671742,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-364775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25c3dce61f3e473bca9c62fbb58b9036,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=83ca2950-7590-45b3-9af3-d6f92230e246 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9758e9a4411fe       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   648120743e719       hello-world-app-55bf9c44b4-x9hv6
	34468cf471df6       docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56                         4 minutes ago       Running             nginx                     0                   d1dd36f55b9f4       nginx
	44f5c0760c47e       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b            13 minutes ago      Running             gcp-auth                  0                   3f91389aebb94       gcp-auth-89d5ffd79-xndcj
	77e2cbcfd0c9c       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a   14 minutes ago      Running             metrics-server            0                   e55373ee38096       metrics-server-84c5f94fbc-h74zz
	2392c10311ecb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        14 minutes ago      Running             storage-provisioner       0                   c88fbf538e039       storage-provisioner
	eb092a183ee87       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                        14 minutes ago      Running             coredns                   0                   9c525627d0e81       coredns-7c65d6cfc9-gd2h2
	fa7e6a02565d0       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                        14 minutes ago      Running             kube-proxy                0                   24f13f826689a       kube-proxy-vj2cl
	ee201c0719a52       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        15 minutes ago      Running             etcd                      0                   27e2544550560       etcd-addons-364775
	941f64fde84f0       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                        15 minutes ago      Running             kube-apiserver            0                   4602faee6ddea       kube-apiserver-addons-364775
	7d21d052488b3       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                        15 minutes ago      Running             kube-scheduler            0                   6bb1edfce2faf       kube-scheduler-addons-364775
	02d48ea4cc0d3       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                        15 minutes ago      Running             kube-controller-manager   0                   81dc5c65d7d85       kube-controller-manager-addons-364775
	
	
	==> coredns [eb092a183ee879a4948c4ef6efe4289548da1f2948fe91a1b2ef6ac8db5a62a2] <==
	[INFO] 127.0.0.1:50766 - 8775 "HINFO IN 3569014972345960485.1862048380583480753. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014022704s
	[INFO] 10.244.0.7:39054 - 16199 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 97 false 1232" NXDOMAIN qr,aa,rd 179 0.000318748s
	[INFO] 10.244.0.7:39054 - 31015 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 97 false 1232" NXDOMAIN qr,aa,rd 179 0.000093499s
	[INFO] 10.244.0.7:39054 - 24769 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000150069s
	[INFO] 10.244.0.7:39054 - 3407 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000172928s
	[INFO] 10.244.0.7:39054 - 53162 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000097552s
	[INFO] 10.244.0.7:39054 - 32704 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.00006962s
	[INFO] 10.244.0.7:39054 - 46163 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000114352s
	[INFO] 10.244.0.7:39054 - 45726 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000079808s
	[INFO] 10.244.0.7:55575 - 58922 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000122896s
	[INFO] 10.244.0.7:55575 - 58635 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000056553s
	[INFO] 10.244.0.7:34701 - 2635 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000052467s
	[INFO] 10.244.0.7:34701 - 2443 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000088571s
	[INFO] 10.244.0.7:53770 - 29791 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000083808s
	[INFO] 10.244.0.7:53770 - 29618 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000043278s
	[INFO] 10.244.0.7:51278 - 32481 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000061908s
	[INFO] 10.244.0.7:51278 - 32630 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00010053s
	[INFO] 10.244.0.21:39399 - 32421 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000626795s
	[INFO] 10.244.0.21:51047 - 35722 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000173759s
	[INFO] 10.244.0.21:59883 - 41503 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000105903s
	[INFO] 10.244.0.21:43597 - 17694 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000060022s
	[INFO] 10.244.0.21:58239 - 38522 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000106047s
	[INFO] 10.244.0.21:38772 - 6309 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000376339s
	[INFO] 10.244.0.21:41727 - 3859 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001416366s
	[INFO] 10.244.0.21:49529 - 27922 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.001747962s
	
	
	==> describe nodes <==
	Name:               addons-364775
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-364775
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=addons-364775
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_27T00_16_25_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-364775
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 00:16:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-364775
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 00:31:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 00:29:31 +0000   Fri, 27 Sep 2024 00:16:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 00:29:31 +0000   Fri, 27 Sep 2024 00:16:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 00:29:31 +0000   Fri, 27 Sep 2024 00:16:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 00:29:31 +0000   Fri, 27 Sep 2024 00:16:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.169
	  Hostname:    addons-364775
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 9c20e89c92c64839b60418c495bf40ff
	  System UUID:                9c20e89c-92c6-4839-b604-18c495bf40ff
	  Boot ID:                    de047c3a-8269-46a9-afd9-1cfad2a2ee3d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  default                     hello-world-app-55bf9c44b4-x9hv6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  gcp-auth                    gcp-auth-89d5ffd79-xndcj                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 coredns-7c65d6cfc9-gd2h2                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     14m
	  kube-system                 etcd-addons-364775                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         15m
	  kube-system                 kube-apiserver-addons-364775             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-addons-364775    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-vj2cl                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-addons-364775             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-84c5f94fbc-h74zz          100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         14m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node addons-364775 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node addons-364775 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node addons-364775 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m                kubelet          Node addons-364775 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m                kubelet          Node addons-364775 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m                kubelet          Node addons-364775 status is now: NodeHasSufficientPID
	  Normal  NodeReady                14m                kubelet          Node addons-364775 status is now: NodeReady
	  Normal  RegisteredNode           14m                node-controller  Node addons-364775 event: Registered Node addons-364775 in Controller
	
	
	==> dmesg <==
	[  +5.471676] kauditd_printk_skb: 137 callbacks suppressed
	[ +11.036796] kauditd_printk_skb: 79 callbacks suppressed
	[Sep27 00:17] kauditd_printk_skb: 2 callbacks suppressed
	[  +9.888391] kauditd_printk_skb: 2 callbacks suppressed
	[  +8.910967] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.507302] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.437195] kauditd_printk_skb: 55 callbacks suppressed
	[  +5.152093] kauditd_printk_skb: 43 callbacks suppressed
	[ +10.173097] kauditd_printk_skb: 6 callbacks suppressed
	[Sep27 00:18] kauditd_printk_skb: 55 callbacks suppressed
	[Sep27 00:19] kauditd_printk_skb: 28 callbacks suppressed
	[Sep27 00:20] kauditd_printk_skb: 28 callbacks suppressed
	[Sep27 00:23] kauditd_printk_skb: 28 callbacks suppressed
	[Sep27 00:26] kauditd_printk_skb: 28 callbacks suppressed
	[ +10.244894] kauditd_printk_skb: 6 callbacks suppressed
	[  +7.025310] kauditd_printk_skb: 23 callbacks suppressed
	[  +8.494292] kauditd_printk_skb: 7 callbacks suppressed
	[ +24.636506] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.016348] kauditd_printk_skb: 38 callbacks suppressed
	[Sep27 00:27] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.266598] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.065698] kauditd_printk_skb: 39 callbacks suppressed
	[  +5.072950] kauditd_printk_skb: 25 callbacks suppressed
	[ +27.283174] kauditd_printk_skb: 4 callbacks suppressed
	[Sep27 00:29] kauditd_printk_skb: 23 callbacks suppressed
	
	
	==> etcd [ee201c0719a52c59263614ccb1b06b1ed92df1c3e374d2bec21766eef5129754] <==
	{"level":"warn","ts":"2024-09-27T00:26:14.632926Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-27T00:26:14.256037Z","time spent":"376.885356ms","remote":"127.0.0.1:41978","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1138,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"warn","ts":"2024-09-27T00:26:14.633120Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"307.527313ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" ","response":"range_response_count:1 size:183"}
	{"level":"info","ts":"2024-09-27T00:26:14.633138Z","caller":"traceutil/trace.go:171","msg":"trace[1605129029] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:1; response_revision:1984; }","duration":"307.54545ms","start":"2024-09-27T00:26:14.325586Z","end":"2024-09-27T00:26:14.633132Z","steps":["trace[1605129029] 'range keys from in-memory index tree'  (duration: 307.476662ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T00:26:14.633154Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-27T00:26:14.325554Z","time spent":"307.597008ms","remote":"127.0.0.1:42020","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":1,"response size":207,"request content":"key:\"/registry/serviceaccounts/default/default\" "}
	{"level":"warn","ts":"2024-09-27T00:26:14.633233Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"272.753441ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-27T00:26:14.633249Z","caller":"traceutil/trace.go:171","msg":"trace[617859409] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1984; }","duration":"272.780609ms","start":"2024-09-27T00:26:14.360462Z","end":"2024-09-27T00:26:14.633243Z","steps":["trace[617859409] 'range keys from in-memory index tree'  (duration: 272.748633ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T00:26:14.633315Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"236.17705ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiregistration.k8s.io/apiservices/\" range_end:\"/registry/apiregistration.k8s.io/apiservices0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-09-27T00:26:14.633328Z","caller":"traceutil/trace.go:171","msg":"trace[942278298] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices/; range_end:/registry/apiregistration.k8s.io/apiservices0; response_count:0; response_revision:1984; }","duration":"236.191523ms","start":"2024-09-27T00:26:14.397131Z","end":"2024-09-27T00:26:14.633323Z","steps":["trace[942278298] 'count revisions from in-memory index tree'  (duration: 236.13798ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T00:26:20.180521Z","caller":"traceutil/trace.go:171","msg":"trace[1946292725] linearizableReadLoop","detail":"{readStateIndex:2155; appliedIndex:2154; }","duration":"169.940839ms","start":"2024-09-27T00:26:20.010565Z","end":"2024-09-27T00:26:20.180506Z","steps":["trace[1946292725] 'read index received'  (duration: 168.170478ms)","trace[1946292725] 'applied index is now lower than readState.Index'  (duration: 1.769835ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-27T00:26:20.180632Z","caller":"traceutil/trace.go:171","msg":"trace[119175638] transaction","detail":"{read_only:false; response_revision:2010; number_of_response:1; }","duration":"185.041203ms","start":"2024-09-27T00:26:19.995581Z","end":"2024-09-27T00:26:20.180622Z","steps":["trace[119175638] 'process raft request'  (duration: 183.199927ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T00:26:20.180763Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"170.179973ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/\" range_end:\"/registry/serviceaccounts0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-09-27T00:26:20.180783Z","caller":"traceutil/trace.go:171","msg":"trace[929737590] range","detail":"{range_begin:/registry/serviceaccounts/; range_end:/registry/serviceaccounts0; response_count:0; response_revision:2010; }","duration":"170.214606ms","start":"2024-09-27T00:26:20.010561Z","end":"2024-09-27T00:26:20.180775Z","steps":["trace[929737590] 'agreement among raft nodes before linearized reading'  (duration: 170.14061ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T00:26:20.180846Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.335773ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-27T00:26:20.180885Z","caller":"traceutil/trace.go:171","msg":"trace[1760975757] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:2010; }","duration":"102.380651ms","start":"2024-09-27T00:26:20.078497Z","end":"2024-09-27T00:26:20.180878Z","steps":["trace[1760975757] 'agreement among raft nodes before linearized reading'  (duration: 102.322144ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T00:26:20.844201Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1536}
	{"level":"info","ts":"2024-09-27T00:26:20.885577Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1536,"took":"40.935931ms","hash":3628088381,"current-db-size-bytes":6135808,"current-db-size":"6.1 MB","current-db-size-in-use-bytes":3530752,"current-db-size-in-use":"3.5 MB"}
	{"level":"info","ts":"2024-09-27T00:26:20.885633Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3628088381,"revision":1536,"compact-revision":-1}
	{"level":"info","ts":"2024-09-27T00:26:47.157301Z","caller":"traceutil/trace.go:171","msg":"trace[683330143] linearizableReadLoop","detail":"{readStateIndex:2316; appliedIndex:2315; }","duration":"248.104512ms","start":"2024-09-27T00:26:46.909171Z","end":"2024-09-27T00:26:47.157276Z","steps":["trace[683330143] 'read index received'  (duration: 247.914744ms)","trace[683330143] 'applied index is now lower than readState.Index'  (duration: 188.919µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-27T00:26:47.157488Z","caller":"traceutil/trace.go:171","msg":"trace[1122576871] transaction","detail":"{read_only:false; response_revision:2162; number_of_response:1; }","duration":"349.484715ms","start":"2024-09-27T00:26:46.807988Z","end":"2024-09-27T00:26:47.157473Z","steps":["trace[1122576871] 'process raft request'  (duration: 349.152553ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T00:26:47.158481Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-27T00:26:46.807932Z","time spent":"350.369978ms","remote":"127.0.0.1:41978","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:2157 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-09-27T00:26:47.157668Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"248.429269ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-27T00:26:47.158706Z","caller":"traceutil/trace.go:171","msg":"trace[1301308464] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:2162; }","duration":"249.522193ms","start":"2024-09-27T00:26:46.909168Z","end":"2024-09-27T00:26:47.158690Z","steps":["trace[1301308464] 'agreement among raft nodes before linearized reading'  (duration: 248.407046ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T00:31:20.851766Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2012}
	{"level":"info","ts":"2024-09-27T00:31:20.872424Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":2012,"took":"20.036267ms","hash":2694970108,"current-db-size-bytes":6266880,"current-db-size":"6.3 MB","current-db-size-in-use-bytes":4870144,"current-db-size-in-use":"4.9 MB"}
	{"level":"info","ts":"2024-09-27T00:31:20.872511Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2694970108,"revision":2012,"compact-revision":1536}
	
	
	==> gcp-auth [44f5c0760c47e0ae8b4f8bae5ad90bd953ca8d8938486256754d700af225e8fe] <==
	2024/09/27 00:18:01 Ready to write response ...
	2024/09/27 00:18:01 Ready to marshal response ...
	2024/09/27 00:18:01 Ready to write response ...
	2024/09/27 00:26:04 Ready to marshal response ...
	2024/09/27 00:26:04 Ready to write response ...
	2024/09/27 00:26:04 Ready to marshal response ...
	2024/09/27 00:26:04 Ready to write response ...
	2024/09/27 00:26:04 Ready to marshal response ...
	2024/09/27 00:26:04 Ready to write response ...
	2024/09/27 00:26:14 Ready to marshal response ...
	2024/09/27 00:26:14 Ready to write response ...
	2024/09/27 00:26:14 Ready to marshal response ...
	2024/09/27 00:26:14 Ready to write response ...
	2024/09/27 00:26:49 Ready to marshal response ...
	2024/09/27 00:26:49 Ready to write response ...
	2024/09/27 00:26:54 Ready to marshal response ...
	2024/09/27 00:26:54 Ready to write response ...
	2024/09/27 00:27:06 Ready to marshal response ...
	2024/09/27 00:27:06 Ready to write response ...
	2024/09/27 00:27:06 Ready to marshal response ...
	2024/09/27 00:27:06 Ready to write response ...
	2024/09/27 00:27:18 Ready to marshal response ...
	2024/09/27 00:27:18 Ready to write response ...
	2024/09/27 00:29:12 Ready to marshal response ...
	2024/09/27 00:29:12 Ready to write response ...
	
	
	==> kernel <==
	 00:31:24 up 15 min,  0 users,  load average: 0.50, 0.55, 0.47
	Linux addons-364775 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [941f64fde84f05119ee38d1a5464cd871c06b706b54fa1fe284535e8214009c8] <==
	E0927 00:17:45.543687       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.124.183:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.124.183:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.124.183:443: connect: connection refused" logger="UnhandledError"
	E0927 00:17:45.547748       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.124.183:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.124.183:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.124.183:443: connect: connection refused" logger="UnhandledError"
	E0927 00:17:45.559080       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.124.183:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.124.183:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.124.183:443: connect: connection refused" logger="UnhandledError"
	I0927 00:17:45.702853       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0927 00:26:04.624102       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.101.136.26"}
	I0927 00:26:29.141449       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0927 00:26:33.135917       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0927 00:26:34.161474       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0927 00:26:54.854249       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0927 00:26:55.039695       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.233.173"}
	I0927 00:27:06.182818       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0927 00:27:06.185454       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0927 00:27:06.203612       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0927 00:27:06.203649       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0927 00:27:06.226306       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0927 00:27:06.226388       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0927 00:27:06.238166       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0927 00:27:06.238291       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0927 00:27:06.268140       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0927 00:27:06.268284       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0927 00:27:07.236715       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0927 00:27:07.269274       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0927 00:27:07.372522       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0927 00:27:34.394838       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0927 00:29:13.133808       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.107.51.24"}
	
	
	==> kube-controller-manager [02d48ea4cc0d31074e83240e2912b935fa3a7e4030e676e56a97fdf651652bee] <==
	W0927 00:29:22.666732       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:29:22.666782       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0927 00:29:25.082177       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	W0927 00:29:30.123832       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:29:30.123897       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0927 00:29:31.610270       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-364775"
	W0927 00:29:55.284432       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:29:55.284506       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:30:02.431798       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:30:02.431864       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:30:04.752934       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:30:04.753123       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:30:12.298557       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:30:12.298709       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:30:26.832664       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:30:26.832832       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:30:35.512322       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:30:35.512379       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:30:53.565873       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:30:53.565991       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:31:11.154482       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:31:11.154660       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:31:20.241330       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:31:20.241484       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0927 00:31:23.395743       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="11.543µs"
	
	
	==> kube-proxy [fa7e6a02565d07c2042b8e4832d33799151a9b767813a6f56f5ad935f6f92586] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0927 00:16:31.768151       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0927 00:16:31.776690       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.169"]
	E0927 00:16:31.776745       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0927 00:16:31.867724       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0927 00:16:31.867754       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0927 00:16:31.867779       1 server_linux.go:169] "Using iptables Proxier"
	I0927 00:16:31.872020       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0927 00:16:31.872322       1 server.go:483] "Version info" version="v1.31.1"
	I0927 00:16:31.872352       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 00:16:31.876064       1 config.go:328] "Starting node config controller"
	I0927 00:16:31.876094       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0927 00:16:31.876473       1 config.go:199] "Starting service config controller"
	I0927 00:16:31.876483       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0927 00:16:31.876500       1 config.go:105] "Starting endpoint slice config controller"
	I0927 00:16:31.876504       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0927 00:16:31.977065       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0927 00:16:31.977110       1 shared_informer.go:320] Caches are synced for service config
	I0927 00:16:31.977424       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [7d21d052488b358d50d3915ffdf2b08eee589a26c15c59d3f1480ede3811db54] <==
	W0927 00:16:22.386330       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0927 00:16:22.386360       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 00:16:22.386430       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0927 00:16:22.386640       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0927 00:16:22.386867       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0927 00:16:22.388785       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:16:22.389761       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0927 00:16:22.394000       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:16:23.238556       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0927 00:16:23.238927       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0927 00:16:23.244304       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0927 00:16:23.244370       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0927 00:16:23.281738       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0927 00:16:23.282013       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:16:23.416794       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0927 00:16:23.417002       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:16:23.467991       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0927 00:16:23.468110       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:16:23.603228       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0927 00:16:23.603279       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0927 00:16:23.603337       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0927 00:16:23.603364       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 00:16:23.619906       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0927 00:16:23.619937       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0927 00:16:26.272381       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 27 00:30:47 addons-364775 kubelet[1215]: E0927 00:30:47.823185    1215 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="7b7dbf55-2e42-4482-a77e-05baf4945f79"
	Sep 27 00:30:55 addons-364775 kubelet[1215]: E0927 00:30:55.147205    1215 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397055146804243,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:563692,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:30:55 addons-364775 kubelet[1215]: E0927 00:30:55.147236    1215 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397055146804243,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:563692,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:31:00 addons-364775 kubelet[1215]: E0927 00:31:00.825896    1215 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="7b7dbf55-2e42-4482-a77e-05baf4945f79"
	Sep 27 00:31:05 addons-364775 kubelet[1215]: E0927 00:31:05.150163    1215 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397065149722562,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:563692,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:31:05 addons-364775 kubelet[1215]: E0927 00:31:05.150204    1215 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397065149722562,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:563692,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:31:13 addons-364775 kubelet[1215]: E0927 00:31:13.822539    1215 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="7b7dbf55-2e42-4482-a77e-05baf4945f79"
	Sep 27 00:31:15 addons-364775 kubelet[1215]: E0927 00:31:15.152645    1215 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397075152276231,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:563692,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:31:15 addons-364775 kubelet[1215]: E0927 00:31:15.153137    1215 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397075152276231,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:563692,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:31:23 addons-364775 kubelet[1215]: I0927 00:31:23.421396    1215 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-x9hv6" podStartSLOduration=128.978430315 podStartE2EDuration="2m11.421342298s" podCreationTimestamp="2024-09-27 00:29:12 +0000 UTC" firstStartedPulling="2024-09-27 00:29:13.540782189 +0000 UTC m=+768.901422406" lastFinishedPulling="2024-09-27 00:29:15.983694172 +0000 UTC m=+771.344334389" observedRunningTime="2024-09-27 00:29:16.388172475 +0000 UTC m=+771.748812711" watchObservedRunningTime="2024-09-27 00:31:23.421342298 +0000 UTC m=+898.781982534"
	Sep 27 00:31:24 addons-364775 kubelet[1215]: E0927 00:31:24.835989    1215 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 27 00:31:24 addons-364775 kubelet[1215]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 27 00:31:24 addons-364775 kubelet[1215]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 27 00:31:24 addons-364775 kubelet[1215]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 27 00:31:24 addons-364775 kubelet[1215]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 27 00:31:24 addons-364775 kubelet[1215]: I0927 00:31:24.850477    1215 scope.go:117] "RemoveContainer" containerID="77e2cbcfd0c9c671e3819d532fbc1eb140f08a91746f385066cfa7816bb23f31"
	Sep 27 00:31:24 addons-364775 kubelet[1215]: I0927 00:31:24.863190    1215 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtp6h\" (UniqueName: \"kubernetes.io/projected/1ee23e82-6d41-48b5-a303-16f6ebd60172-kube-api-access-dtp6h\") pod \"1ee23e82-6d41-48b5-a303-16f6ebd60172\" (UID: \"1ee23e82-6d41-48b5-a303-16f6ebd60172\") "
	Sep 27 00:31:24 addons-364775 kubelet[1215]: I0927 00:31:24.863220    1215 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/1ee23e82-6d41-48b5-a303-16f6ebd60172-tmp-dir\") pod \"1ee23e82-6d41-48b5-a303-16f6ebd60172\" (UID: \"1ee23e82-6d41-48b5-a303-16f6ebd60172\") "
	Sep 27 00:31:24 addons-364775 kubelet[1215]: I0927 00:31:24.863561    1215 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ee23e82-6d41-48b5-a303-16f6ebd60172-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "1ee23e82-6d41-48b5-a303-16f6ebd60172" (UID: "1ee23e82-6d41-48b5-a303-16f6ebd60172"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Sep 27 00:31:24 addons-364775 kubelet[1215]: I0927 00:31:24.873825    1215 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ee23e82-6d41-48b5-a303-16f6ebd60172-kube-api-access-dtp6h" (OuterVolumeSpecName: "kube-api-access-dtp6h") pod "1ee23e82-6d41-48b5-a303-16f6ebd60172" (UID: "1ee23e82-6d41-48b5-a303-16f6ebd60172"). InnerVolumeSpecName "kube-api-access-dtp6h". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 27 00:31:24 addons-364775 kubelet[1215]: I0927 00:31:24.874737    1215 scope.go:117] "RemoveContainer" containerID="77e2cbcfd0c9c671e3819d532fbc1eb140f08a91746f385066cfa7816bb23f31"
	Sep 27 00:31:24 addons-364775 kubelet[1215]: E0927 00:31:24.875479    1215 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77e2cbcfd0c9c671e3819d532fbc1eb140f08a91746f385066cfa7816bb23f31\": container with ID starting with 77e2cbcfd0c9c671e3819d532fbc1eb140f08a91746f385066cfa7816bb23f31 not found: ID does not exist" containerID="77e2cbcfd0c9c671e3819d532fbc1eb140f08a91746f385066cfa7816bb23f31"
	Sep 27 00:31:24 addons-364775 kubelet[1215]: I0927 00:31:24.875531    1215 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77e2cbcfd0c9c671e3819d532fbc1eb140f08a91746f385066cfa7816bb23f31"} err="failed to get container status \"77e2cbcfd0c9c671e3819d532fbc1eb140f08a91746f385066cfa7816bb23f31\": rpc error: code = NotFound desc = could not find container \"77e2cbcfd0c9c671e3819d532fbc1eb140f08a91746f385066cfa7816bb23f31\": container with ID starting with 77e2cbcfd0c9c671e3819d532fbc1eb140f08a91746f385066cfa7816bb23f31 not found: ID does not exist"
	Sep 27 00:31:24 addons-364775 kubelet[1215]: I0927 00:31:24.964356    1215 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-dtp6h\" (UniqueName: \"kubernetes.io/projected/1ee23e82-6d41-48b5-a303-16f6ebd60172-kube-api-access-dtp6h\") on node \"addons-364775\" DevicePath \"\""
	Sep 27 00:31:24 addons-364775 kubelet[1215]: I0927 00:31:24.964409    1215 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/1ee23e82-6d41-48b5-a303-16f6ebd60172-tmp-dir\") on node \"addons-364775\" DevicePath \"\""
	
	
	==> storage-provisioner [2392c10311ecba4ad854e936976dfeca45567492e61de8604f1324981400707e] <==
	I0927 00:16:37.916328       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0927 00:16:38.076551       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0927 00:16:38.076614       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0927 00:16:38.159162       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0927 00:16:38.159377       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-364775_daea0619-9535-4149-a165-9a8f7ab27789!
	I0927 00:16:38.160542       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"88a6a7b1-44d1-4b8a-9c87-da3ce2ecdc13", APIVersion:"v1", ResourceVersion:"707", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-364775_daea0619-9535-4149-a165-9a8f7ab27789 became leader
	I0927 00:16:38.760305       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-364775_daea0619-9535-4149-a165-9a8f7ab27789!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-364775 -n addons-364775
helpers_test.go:261: (dbg) Run:  kubectl --context addons-364775 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-364775 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-364775 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-364775/192.168.39.169
	Start Time:       Fri, 27 Sep 2024 00:18:01 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.22
	IPs:
	  IP:  10.244.0.22
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wxclv (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-wxclv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  13m                   default-scheduler  Successfully assigned default/busybox to addons-364775
	  Normal   Pulling    11m (x4 over 13m)     kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     11m (x4 over 13m)     kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     11m (x4 over 13m)     kubelet            Error: ErrImagePull
	  Warning  Failed     11m (x6 over 13m)     kubelet            Error: ImagePullBackOff
	  Normal   BackOff    3m20s (x43 over 13m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (321.94s)
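Note on the post-mortem above: it is driven by two kubectl invocations visible in the log (helpers_test.go:261 lists pods whose status.phase is not Running, helpers_test.go:277 describes each of them). The following is a minimal stand-alone sketch of that flow in Go, not the actual test helper; the context name addons-364775 is taken from the log and error handling is simplified.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	context := "addons-364775" // taken from the log above

	// Same query as helpers_test.go:261 — names of pods that are not Running.
	out, err := exec.Command("kubectl", "--context", context, "get", "po", "-A",
		"-o=jsonpath={.items[*].metadata.name}",
		"--field-selector=status.phase!=Running").Output()
	if err != nil {
		fmt.Println("listing non-running pods failed:", err)
		return
	}

	// Same follow-up as helpers_test.go:277 — describe each non-running pod.
	for _, pod := range strings.Fields(string(out)) {
		desc, _ := exec.Command("kubectl", "--context", context, "describe", "pod", pod).CombinedOutput()
		fmt.Printf("==> describe pod %s <==\n%s\n", pod, desc)
	}
}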

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (141.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 node stop m02 -v=7 --alsologtostderr
E0927 00:40:51.462668   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/functional-774677/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:41:32.424565   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/functional-774677/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-631834 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.45454403s)

                                                
                                                
-- stdout --
	* Stopping node "ha-631834-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0927 00:40:36.145377   38078 out.go:345] Setting OutFile to fd 1 ...
	I0927 00:40:36.145521   38078 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:40:36.145530   38078 out.go:358] Setting ErrFile to fd 2...
	I0927 00:40:36.145537   38078 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:40:36.145735   38078 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-14935/.minikube/bin
	I0927 00:40:36.145988   38078 mustload.go:65] Loading cluster: ha-631834
	I0927 00:40:36.146378   38078 config.go:182] Loaded profile config "ha-631834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 00:40:36.146395   38078 stop.go:39] StopHost: ha-631834-m02
	I0927 00:40:36.146749   38078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:40:36.146796   38078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:40:36.162731   38078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46879
	I0927 00:40:36.163217   38078 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:40:36.163764   38078 main.go:141] libmachine: Using API Version  1
	I0927 00:40:36.163787   38078 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:40:36.164074   38078 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:40:36.166562   38078 out.go:177] * Stopping node "ha-631834-m02"  ...
	I0927 00:40:36.167834   38078 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0927 00:40:36.167871   38078 main.go:141] libmachine: (ha-631834-m02) Calling .DriverName
	I0927 00:40:36.168103   38078 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0927 00:40:36.168132   38078 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHHostname
	I0927 00:40:36.171211   38078 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:40:36.171658   38078 main.go:141] libmachine: (ha-631834-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:6f:a2", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:59 +0000 UTC Type:0 Mac:52:54:00:f9:6f:a2 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-631834-m02 Clientid:01:52:54:00:f9:6f:a2}
	I0927 00:40:36.171680   38078 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:40:36.171869   38078 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHPort
	I0927 00:40:36.172037   38078 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHKeyPath
	I0927 00:40:36.172191   38078 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHUsername
	I0927 00:40:36.172298   38078 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m02/id_rsa Username:docker}
	I0927 00:40:36.256808   38078 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0927 00:40:36.311588   38078 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0927 00:40:36.365603   38078 main.go:141] libmachine: Stopping "ha-631834-m02"...
	I0927 00:40:36.365631   38078 main.go:141] libmachine: (ha-631834-m02) Calling .GetState
	I0927 00:40:36.367158   38078 main.go:141] libmachine: (ha-631834-m02) Calling .Stop
	I0927 00:40:36.370317   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 0/120
	I0927 00:40:37.371545   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 1/120
	I0927 00:40:38.373731   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 2/120
	I0927 00:40:39.375215   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 3/120
	I0927 00:40:40.376549   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 4/120
	I0927 00:40:41.378370   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 5/120
	I0927 00:40:42.379591   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 6/120
	I0927 00:40:43.381021   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 7/120
	I0927 00:40:44.382240   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 8/120
	I0927 00:40:45.383446   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 9/120
	I0927 00:40:46.385521   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 10/120
	I0927 00:40:47.387403   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 11/120
	I0927 00:40:48.388649   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 12/120
	I0927 00:40:49.389868   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 13/120
	I0927 00:40:50.391431   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 14/120
	I0927 00:40:51.393398   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 15/120
	I0927 00:40:52.395008   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 16/120
	I0927 00:40:53.396327   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 17/120
	I0927 00:40:54.397665   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 18/120
	I0927 00:40:55.399085   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 19/120
	I0927 00:40:56.400933   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 20/120
	I0927 00:40:57.402276   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 21/120
	I0927 00:40:58.403485   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 22/120
	I0927 00:40:59.405647   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 23/120
	I0927 00:41:00.407346   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 24/120
	I0927 00:41:01.409187   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 25/120
	I0927 00:41:02.410453   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 26/120
	I0927 00:41:03.411641   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 27/120
	I0927 00:41:04.413148   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 28/120
	I0927 00:41:05.414586   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 29/120
	I0927 00:41:06.416425   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 30/120
	I0927 00:41:07.418129   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 31/120
	I0927 00:41:08.419399   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 32/120
	I0927 00:41:09.420902   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 33/120
	I0927 00:41:10.422357   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 34/120
	I0927 00:41:11.423841   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 35/120
	I0927 00:41:12.425731   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 36/120
	I0927 00:41:13.426951   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 37/120
	I0927 00:41:14.428710   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 38/120
	I0927 00:41:15.430013   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 39/120
	I0927 00:41:16.431916   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 40/120
	I0927 00:41:17.433797   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 41/120
	I0927 00:41:18.434982   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 42/120
	I0927 00:41:19.436213   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 43/120
	I0927 00:41:20.437583   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 44/120
	I0927 00:41:21.439233   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 45/120
	I0927 00:41:22.441366   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 46/120
	I0927 00:41:23.442811   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 47/120
	I0927 00:41:24.444175   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 48/120
	I0927 00:41:25.445410   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 49/120
	I0927 00:41:26.447315   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 50/120
	I0927 00:41:27.448556   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 51/120
	I0927 00:41:28.449846   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 52/120
	I0927 00:41:29.451054   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 53/120
	I0927 00:41:30.452311   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 54/120
	I0927 00:41:31.454005   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 55/120
	I0927 00:41:32.456122   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 56/120
	I0927 00:41:33.457718   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 57/120
	I0927 00:41:34.459006   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 58/120
	I0927 00:41:35.460197   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 59/120
	I0927 00:41:36.462113   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 60/120
	I0927 00:41:37.464290   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 61/120
	I0927 00:41:38.466031   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 62/120
	I0927 00:41:39.467294   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 63/120
	I0927 00:41:40.468860   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 64/120
	I0927 00:41:41.470773   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 65/120
	I0927 00:41:42.472149   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 66/120
	I0927 00:41:43.474350   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 67/120
	I0927 00:41:44.475847   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 68/120
	I0927 00:41:45.477076   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 69/120
	I0927 00:41:46.479150   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 70/120
	I0927 00:41:47.480483   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 71/120
	I0927 00:41:48.481852   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 72/120
	I0927 00:41:49.483206   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 73/120
	I0927 00:41:50.484553   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 74/120
	I0927 00:41:51.486407   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 75/120
	I0927 00:41:52.487718   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 76/120
	I0927 00:41:53.489078   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 77/120
	I0927 00:41:54.490385   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 78/120
	I0927 00:41:55.492534   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 79/120
	I0927 00:41:56.494550   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 80/120
	I0927 00:41:57.495995   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 81/120
	I0927 00:41:58.497225   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 82/120
	I0927 00:41:59.498830   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 83/120
	I0927 00:42:00.500032   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 84/120
	I0927 00:42:01.501805   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 85/120
	I0927 00:42:02.502978   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 86/120
	I0927 00:42:03.504334   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 87/120
	I0927 00:42:04.505499   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 88/120
	I0927 00:42:05.506719   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 89/120
	I0927 00:42:06.508932   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 90/120
	I0927 00:42:07.510316   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 91/120
	I0927 00:42:08.511765   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 92/120
	I0927 00:42:09.512992   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 93/120
	I0927 00:42:10.514800   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 94/120
	I0927 00:42:11.516480   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 95/120
	I0927 00:42:12.518436   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 96/120
	I0927 00:42:13.519813   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 97/120
	I0927 00:42:14.521749   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 98/120
	I0927 00:42:15.522894   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 99/120
	I0927 00:42:16.524909   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 100/120
	I0927 00:42:17.526244   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 101/120
	I0927 00:42:18.527561   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 102/120
	I0927 00:42:19.529651   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 103/120
	I0927 00:42:20.530978   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 104/120
	I0927 00:42:21.533116   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 105/120
	I0927 00:42:22.534357   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 106/120
	I0927 00:42:23.536649   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 107/120
	I0927 00:42:24.539029   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 108/120
	I0927 00:42:25.540659   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 109/120
	I0927 00:42:26.542493   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 110/120
	I0927 00:42:27.544029   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 111/120
	I0927 00:42:28.545697   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 112/120
	I0927 00:42:29.546997   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 113/120
	I0927 00:42:30.549396   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 114/120
	I0927 00:42:31.551243   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 115/120
	I0927 00:42:32.552630   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 116/120
	I0927 00:42:33.553830   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 117/120
	I0927 00:42:34.554851   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 118/120
	I0927 00:42:35.556105   38078 main.go:141] libmachine: (ha-631834-m02) Waiting for machine to stop 119/120
	I0927 00:42:36.556778   38078 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0927 00:42:36.556906   38078 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
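The stderr above shows the shape of this failure: the kvm2 driver polls the VM once per second for 120 attempts ("Waiting for machine to stop 0/120" … "119/120") and then gives up while the state is still "Running", which is what produces exit status 30. Below is a minimal illustrative sketch of such a bounded wait loop in Go; it is not minikube's libmachine code, and waitForStop/getState are hypothetical names used only for this example.

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForStop polls getState once per interval until the machine reports
// "Stopped" or the attempt budget is exhausted (120 attempts in the log above).
func waitForStop(getState func() (string, error), attempts int, interval time.Duration) error {
	for i := 0; i < attempts; i++ {
		state, err := getState()
		if err != nil {
			return err
		}
		if state == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(interval)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// A VM that never leaves "Running" exhausts the budget, mirroring the
	// "stop err" seen in the test output (attempt count shortened here).
	err := waitForStop(func() (string, error) { return "Running", nil }, 3, time.Second)
	fmt.Println("stop err:", err)
}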
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-631834 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 status -v=7 --alsologtostderr
E0927 00:42:54.346161   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/functional-774677/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:369: (dbg) Done: out/minikube-linux-amd64 -p ha-631834 status -v=7 --alsologtostderr: (18.714327318s)
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-631834 status -v=7 --alsologtostderr": 
ha_test.go:378: status says not three hosts are running: args "out/minikube-linux-amd64 -p ha-631834 status -v=7 --alsologtostderr": 
ha_test.go:381: status says not three kubelets are running: args "out/minikube-linux-amd64 -p ha-631834 status -v=7 --alsologtostderr": 
ha_test.go:384: status says not two apiservers are running: args "out/minikube-linux-amd64 -p ha-631834 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-631834 -n ha-631834
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-631834 logs -n 25: (1.388566282s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-631834 cp ha-631834-m03:/home/docker/cp-test.txt                             | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile381097914/001/cp-test_ha-631834-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n                                                                | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-631834 cp ha-631834-m03:/home/docker/cp-test.txt                             | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834:/home/docker/cp-test_ha-631834-m03_ha-631834.txt                      |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n                                                                | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n ha-631834 sudo cat                                             | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | /home/docker/cp-test_ha-631834-m03_ha-631834.txt                                |           |         |         |                     |                     |
	| cp      | ha-631834 cp ha-631834-m03:/home/docker/cp-test.txt                             | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m02:/home/docker/cp-test_ha-631834-m03_ha-631834-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n                                                                | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n ha-631834-m02 sudo cat                                         | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | /home/docker/cp-test_ha-631834-m03_ha-631834-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-631834 cp ha-631834-m03:/home/docker/cp-test.txt                             | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m04:/home/docker/cp-test_ha-631834-m03_ha-631834-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n                                                                | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n ha-631834-m04 sudo cat                                         | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | /home/docker/cp-test_ha-631834-m03_ha-631834-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-631834 cp testdata/cp-test.txt                                               | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n                                                                | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-631834 cp ha-631834-m04:/home/docker/cp-test.txt                             | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile381097914/001/cp-test_ha-631834-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n                                                                | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-631834 cp ha-631834-m04:/home/docker/cp-test.txt                             | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834:/home/docker/cp-test_ha-631834-m04_ha-631834.txt                      |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n                                                                | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n ha-631834 sudo cat                                             | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | /home/docker/cp-test_ha-631834-m04_ha-631834.txt                                |           |         |         |                     |                     |
	| cp      | ha-631834 cp ha-631834-m04:/home/docker/cp-test.txt                             | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m02:/home/docker/cp-test_ha-631834-m04_ha-631834-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n                                                                | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n ha-631834-m02 sudo cat                                         | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | /home/docker/cp-test_ha-631834-m04_ha-631834-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-631834 cp ha-631834-m04:/home/docker/cp-test.txt                             | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m03:/home/docker/cp-test_ha-631834-m04_ha-631834-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n                                                                | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n ha-631834-m03 sudo cat                                         | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | /home/docker/cp-test_ha-631834-m04_ha-631834-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-631834 node stop m02 -v=7                                                    | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 00:36:00
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 00:36:00.733270   34022 out.go:345] Setting OutFile to fd 1 ...
	I0927 00:36:00.733561   34022 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:36:00.733572   34022 out.go:358] Setting ErrFile to fd 2...
	I0927 00:36:00.733578   34022 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:36:00.733765   34022 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-14935/.minikube/bin
	I0927 00:36:00.734369   34022 out.go:352] Setting JSON to false
	I0927 00:36:00.735232   34022 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4706,"bootTime":1727392655,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 00:36:00.735334   34022 start.go:139] virtualization: kvm guest
	I0927 00:36:00.737562   34022 out.go:177] * [ha-631834] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0927 00:36:00.738940   34022 notify.go:220] Checking for updates...
	I0927 00:36:00.738971   34022 out.go:177]   - MINIKUBE_LOCATION=19711
	I0927 00:36:00.740322   34022 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 00:36:00.741556   34022 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 00:36:00.742777   34022 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-14935/.minikube
	I0927 00:36:00.744101   34022 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0927 00:36:00.745418   34022 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 00:36:00.746900   34022 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 00:36:00.781665   34022 out.go:177] * Using the kvm2 driver based on user configuration
	I0927 00:36:00.782952   34022 start.go:297] selected driver: kvm2
	I0927 00:36:00.782969   34022 start.go:901] validating driver "kvm2" against <nil>
	I0927 00:36:00.782989   34022 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 00:36:00.784037   34022 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 00:36:00.784159   34022 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19711-14935/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0927 00:36:00.799229   34022 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0927 00:36:00.799294   34022 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 00:36:00.799639   34022 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 00:36:00.799677   34022 cni.go:84] Creating CNI manager for ""
	I0927 00:36:00.799725   34022 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0927 00:36:00.799740   34022 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0927 00:36:00.799811   34022 start.go:340] cluster config:
	{Name:ha-631834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-631834 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 00:36:00.799933   34022 iso.go:125] acquiring lock: {Name:mkc202a14fbe20838e31e7efc444c4f65351f9ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 00:36:00.801666   34022 out.go:177] * Starting "ha-631834" primary control-plane node in "ha-631834" cluster
	I0927 00:36:00.802817   34022 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 00:36:00.802860   34022 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0927 00:36:00.802872   34022 cache.go:56] Caching tarball of preloaded images
	I0927 00:36:00.802951   34022 preload.go:172] Found /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0927 00:36:00.802964   34022 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0927 00:36:00.803416   34022 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/config.json ...
	I0927 00:36:00.803442   34022 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/config.json: {Name:mk6367ac20858a15eb53ac7fa5c4186f9176d965 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:36:00.803588   34022 start.go:360] acquireMachinesLock for ha-631834: {Name:mkef866a3f2eb57063758ef06140cca3a2e40b18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 00:36:00.803621   34022 start.go:364] duration metric: took 19.585µs to acquireMachinesLock for "ha-631834"
	I0927 00:36:00.803641   34022 start.go:93] Provisioning new machine with config: &{Name:ha-631834 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-631834 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 00:36:00.803696   34022 start.go:125] createHost starting for "" (driver="kvm2")
	I0927 00:36:00.805235   34022 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0927 00:36:00.805379   34022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:36:00.805413   34022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:36:00.819286   34022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35625
	I0927 00:36:00.819786   34022 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:36:00.820338   34022 main.go:141] libmachine: Using API Version  1
	I0927 00:36:00.820363   34022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:36:00.820724   34022 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:36:00.820928   34022 main.go:141] libmachine: (ha-631834) Calling .GetMachineName
	I0927 00:36:00.821048   34022 main.go:141] libmachine: (ha-631834) Calling .DriverName
	I0927 00:36:00.821188   34022 start.go:159] libmachine.API.Create for "ha-631834" (driver="kvm2")
	I0927 00:36:00.821209   34022 client.go:168] LocalClient.Create starting
	I0927 00:36:00.821241   34022 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem
	I0927 00:36:00.821269   34022 main.go:141] libmachine: Decoding PEM data...
	I0927 00:36:00.821289   34022 main.go:141] libmachine: Parsing certificate...
	I0927 00:36:00.821354   34022 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem
	I0927 00:36:00.821378   34022 main.go:141] libmachine: Decoding PEM data...
	I0927 00:36:00.821391   34022 main.go:141] libmachine: Parsing certificate...
	I0927 00:36:00.821430   34022 main.go:141] libmachine: Running pre-create checks...
	I0927 00:36:00.821441   34022 main.go:141] libmachine: (ha-631834) Calling .PreCreateCheck
	I0927 00:36:00.821748   34022 main.go:141] libmachine: (ha-631834) Calling .GetConfigRaw
	I0927 00:36:00.822055   34022 main.go:141] libmachine: Creating machine...
	I0927 00:36:00.822066   34022 main.go:141] libmachine: (ha-631834) Calling .Create
	I0927 00:36:00.822200   34022 main.go:141] libmachine: (ha-631834) Creating KVM machine...
	I0927 00:36:00.823422   34022 main.go:141] libmachine: (ha-631834) DBG | found existing default KVM network
	I0927 00:36:00.824110   34022 main.go:141] libmachine: (ha-631834) DBG | I0927 00:36:00.823958   34045 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000122e20}
	I0927 00:36:00.824171   34022 main.go:141] libmachine: (ha-631834) DBG | created network xml: 
	I0927 00:36:00.824189   34022 main.go:141] libmachine: (ha-631834) DBG | <network>
	I0927 00:36:00.824198   34022 main.go:141] libmachine: (ha-631834) DBG |   <name>mk-ha-631834</name>
	I0927 00:36:00.824206   34022 main.go:141] libmachine: (ha-631834) DBG |   <dns enable='no'/>
	I0927 00:36:00.824216   34022 main.go:141] libmachine: (ha-631834) DBG |   
	I0927 00:36:00.824223   34022 main.go:141] libmachine: (ha-631834) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0927 00:36:00.824229   34022 main.go:141] libmachine: (ha-631834) DBG |     <dhcp>
	I0927 00:36:00.824234   34022 main.go:141] libmachine: (ha-631834) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0927 00:36:00.824245   34022 main.go:141] libmachine: (ha-631834) DBG |     </dhcp>
	I0927 00:36:00.824249   34022 main.go:141] libmachine: (ha-631834) DBG |   </ip>
	I0927 00:36:00.824253   34022 main.go:141] libmachine: (ha-631834) DBG |   
	I0927 00:36:00.824262   34022 main.go:141] libmachine: (ha-631834) DBG | </network>
	I0927 00:36:00.824270   34022 main.go:141] libmachine: (ha-631834) DBG | 
	I0927 00:36:00.829058   34022 main.go:141] libmachine: (ha-631834) DBG | trying to create private KVM network mk-ha-631834 192.168.39.0/24...
	I0927 00:36:00.893473   34022 main.go:141] libmachine: (ha-631834) Setting up store path in /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834 ...
	I0927 00:36:00.893502   34022 main.go:141] libmachine: (ha-631834) DBG | private KVM network mk-ha-631834 192.168.39.0/24 created
	I0927 00:36:00.893514   34022 main.go:141] libmachine: (ha-631834) Building disk image from file:///home/jenkins/minikube-integration/19711-14935/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0927 00:36:00.893569   34022 main.go:141] libmachine: (ha-631834) DBG | I0927 00:36:00.893424   34045 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19711-14935/.minikube
	I0927 00:36:00.893608   34022 main.go:141] libmachine: (ha-631834) Downloading /home/jenkins/minikube-integration/19711-14935/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19711-14935/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0927 00:36:01.131795   34022 main.go:141] libmachine: (ha-631834) DBG | I0927 00:36:01.131690   34045 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834/id_rsa...
	I0927 00:36:01.270727   34022 main.go:141] libmachine: (ha-631834) DBG | I0927 00:36:01.270595   34045 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834/ha-631834.rawdisk...
	I0927 00:36:01.270761   34022 main.go:141] libmachine: (ha-631834) DBG | Writing magic tar header
	I0927 00:36:01.270787   34022 main.go:141] libmachine: (ha-631834) DBG | Writing SSH key tar header
	I0927 00:36:01.270801   34022 main.go:141] libmachine: (ha-631834) DBG | I0927 00:36:01.270770   34045 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834 ...
	I0927 00:36:01.270904   34022 main.go:141] libmachine: (ha-631834) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834
	I0927 00:36:01.270938   34022 main.go:141] libmachine: (ha-631834) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834 (perms=drwx------)
	I0927 00:36:01.270949   34022 main.go:141] libmachine: (ha-631834) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935/.minikube/machines
	I0927 00:36:01.270966   34022 main.go:141] libmachine: (ha-631834) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935/.minikube
	I0927 00:36:01.270976   34022 main.go:141] libmachine: (ha-631834) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935
	I0927 00:36:01.270986   34022 main.go:141] libmachine: (ha-631834) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0927 00:36:01.270995   34022 main.go:141] libmachine: (ha-631834) DBG | Checking permissions on dir: /home/jenkins
	I0927 00:36:01.271007   34022 main.go:141] libmachine: (ha-631834) DBG | Checking permissions on dir: /home
	I0927 00:36:01.271032   34022 main.go:141] libmachine: (ha-631834) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935/.minikube/machines (perms=drwxr-xr-x)
	I0927 00:36:01.271042   34022 main.go:141] libmachine: (ha-631834) DBG | Skipping /home - not owner
	I0927 00:36:01.271059   34022 main.go:141] libmachine: (ha-631834) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935/.minikube (perms=drwxr-xr-x)
	I0927 00:36:01.271072   34022 main.go:141] libmachine: (ha-631834) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935 (perms=drwxrwxr-x)
	I0927 00:36:01.271090   34022 main.go:141] libmachine: (ha-631834) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0927 00:36:01.271101   34022 main.go:141] libmachine: (ha-631834) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0927 00:36:01.271119   34022 main.go:141] libmachine: (ha-631834) Creating domain...
	I0927 00:36:01.272173   34022 main.go:141] libmachine: (ha-631834) define libvirt domain using xml: 
	I0927 00:36:01.272191   34022 main.go:141] libmachine: (ha-631834) <domain type='kvm'>
	I0927 00:36:01.272198   34022 main.go:141] libmachine: (ha-631834)   <name>ha-631834</name>
	I0927 00:36:01.272206   34022 main.go:141] libmachine: (ha-631834)   <memory unit='MiB'>2200</memory>
	I0927 00:36:01.272211   34022 main.go:141] libmachine: (ha-631834)   <vcpu>2</vcpu>
	I0927 00:36:01.272217   34022 main.go:141] libmachine: (ha-631834)   <features>
	I0927 00:36:01.272224   34022 main.go:141] libmachine: (ha-631834)     <acpi/>
	I0927 00:36:01.272235   34022 main.go:141] libmachine: (ha-631834)     <apic/>
	I0927 00:36:01.272246   34022 main.go:141] libmachine: (ha-631834)     <pae/>
	I0927 00:36:01.272256   34022 main.go:141] libmachine: (ha-631834)     
	I0927 00:36:01.272263   34022 main.go:141] libmachine: (ha-631834)   </features>
	I0927 00:36:01.272282   34022 main.go:141] libmachine: (ha-631834)   <cpu mode='host-passthrough'>
	I0927 00:36:01.272289   34022 main.go:141] libmachine: (ha-631834)   
	I0927 00:36:01.272293   34022 main.go:141] libmachine: (ha-631834)   </cpu>
	I0927 00:36:01.272297   34022 main.go:141] libmachine: (ha-631834)   <os>
	I0927 00:36:01.272301   34022 main.go:141] libmachine: (ha-631834)     <type>hvm</type>
	I0927 00:36:01.272307   34022 main.go:141] libmachine: (ha-631834)     <boot dev='cdrom'/>
	I0927 00:36:01.272319   34022 main.go:141] libmachine: (ha-631834)     <boot dev='hd'/>
	I0927 00:36:01.272332   34022 main.go:141] libmachine: (ha-631834)     <bootmenu enable='no'/>
	I0927 00:36:01.272343   34022 main.go:141] libmachine: (ha-631834)   </os>
	I0927 00:36:01.272353   34022 main.go:141] libmachine: (ha-631834)   <devices>
	I0927 00:36:01.272363   34022 main.go:141] libmachine: (ha-631834)     <disk type='file' device='cdrom'>
	I0927 00:36:01.272378   34022 main.go:141] libmachine: (ha-631834)       <source file='/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834/boot2docker.iso'/>
	I0927 00:36:01.272388   34022 main.go:141] libmachine: (ha-631834)       <target dev='hdc' bus='scsi'/>
	I0927 00:36:01.272453   34022 main.go:141] libmachine: (ha-631834)       <readonly/>
	I0927 00:36:01.272477   34022 main.go:141] libmachine: (ha-631834)     </disk>
	I0927 00:36:01.272488   34022 main.go:141] libmachine: (ha-631834)     <disk type='file' device='disk'>
	I0927 00:36:01.272497   34022 main.go:141] libmachine: (ha-631834)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0927 00:36:01.272515   34022 main.go:141] libmachine: (ha-631834)       <source file='/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834/ha-631834.rawdisk'/>
	I0927 00:36:01.272530   34022 main.go:141] libmachine: (ha-631834)       <target dev='hda' bus='virtio'/>
	I0927 00:36:01.272545   34022 main.go:141] libmachine: (ha-631834)     </disk>
	I0927 00:36:01.272560   34022 main.go:141] libmachine: (ha-631834)     <interface type='network'>
	I0927 00:36:01.272569   34022 main.go:141] libmachine: (ha-631834)       <source network='mk-ha-631834'/>
	I0927 00:36:01.272578   34022 main.go:141] libmachine: (ha-631834)       <model type='virtio'/>
	I0927 00:36:01.272589   34022 main.go:141] libmachine: (ha-631834)     </interface>
	I0927 00:36:01.272599   34022 main.go:141] libmachine: (ha-631834)     <interface type='network'>
	I0927 00:36:01.272607   34022 main.go:141] libmachine: (ha-631834)       <source network='default'/>
	I0927 00:36:01.272617   34022 main.go:141] libmachine: (ha-631834)       <model type='virtio'/>
	I0927 00:36:01.272638   34022 main.go:141] libmachine: (ha-631834)     </interface>
	I0927 00:36:01.272657   34022 main.go:141] libmachine: (ha-631834)     <serial type='pty'>
	I0927 00:36:01.272670   34022 main.go:141] libmachine: (ha-631834)       <target port='0'/>
	I0927 00:36:01.272680   34022 main.go:141] libmachine: (ha-631834)     </serial>
	I0927 00:36:01.272689   34022 main.go:141] libmachine: (ha-631834)     <console type='pty'>
	I0927 00:36:01.272711   34022 main.go:141] libmachine: (ha-631834)       <target type='serial' port='0'/>
	I0927 00:36:01.272724   34022 main.go:141] libmachine: (ha-631834)     </console>
	I0927 00:36:01.272736   34022 main.go:141] libmachine: (ha-631834)     <rng model='virtio'>
	I0927 00:36:01.272748   34022 main.go:141] libmachine: (ha-631834)       <backend model='random'>/dev/random</backend>
	I0927 00:36:01.272758   34022 main.go:141] libmachine: (ha-631834)     </rng>
	I0927 00:36:01.272767   34022 main.go:141] libmachine: (ha-631834)     
	I0927 00:36:01.272773   34022 main.go:141] libmachine: (ha-631834)     
	I0927 00:36:01.272784   34022 main.go:141] libmachine: (ha-631834)   </devices>
	I0927 00:36:01.272793   34022 main.go:141] libmachine: (ha-631834) </domain>
	I0927 00:36:01.272813   34022 main.go:141] libmachine: (ha-631834) 
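For context: the XML echoed above is what the kvm2 driver hands to libvirt before the "Creating domain..." step below. A minimal Go sketch of that define-and-boot call, assuming the libvirt.org/go/libvirt bindings (not the driver's actual code), could look like this:

    package main

    import (
        "log"

        libvirt "libvirt.org/go/libvirt"
    )

    // defineAndStart registers a persistent domain from XML like the block in
    // the log above and then boots it. Hypothetical helper; error handling is
    // trimmed to the essentials.
    func defineAndStart(domainXML string) error {
        conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the profile
        if err != nil {
            return err
        }
        defer conn.Close()

        dom, err := conn.DomainDefineXML(domainXML) // "define libvirt domain using xml"
        if err != nil {
            return err
        }
        defer dom.Free()

        return dom.Create() // "Creating domain...": actually starts the VM
    }

    func main() {
        if err := defineAndStart("<domain type='kvm'>...</domain>"); err != nil {
            log.Fatal(err)
        }
    }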
	I0927 00:36:01.276563   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:8c:cf:67 in network default
	I0927 00:36:01.277046   34022 main.go:141] libmachine: (ha-631834) Ensuring networks are active...
	I0927 00:36:01.277065   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:01.277664   34022 main.go:141] libmachine: (ha-631834) Ensuring network default is active
	I0927 00:36:01.277924   34022 main.go:141] libmachine: (ha-631834) Ensuring network mk-ha-631834 is active
	I0927 00:36:01.278421   34022 main.go:141] libmachine: (ha-631834) Getting domain xml...
	I0927 00:36:01.279045   34022 main.go:141] libmachine: (ha-631834) Creating domain...
	I0927 00:36:02.458607   34022 main.go:141] libmachine: (ha-631834) Waiting to get IP...
	I0927 00:36:02.459345   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:02.459714   34022 main.go:141] libmachine: (ha-631834) DBG | unable to find current IP address of domain ha-631834 in network mk-ha-631834
	I0927 00:36:02.459736   34022 main.go:141] libmachine: (ha-631834) DBG | I0927 00:36:02.459698   34045 retry.go:31] will retry after 212.922851ms: waiting for machine to come up
	I0927 00:36:02.674121   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:02.674559   34022 main.go:141] libmachine: (ha-631834) DBG | unable to find current IP address of domain ha-631834 in network mk-ha-631834
	I0927 00:36:02.674578   34022 main.go:141] libmachine: (ha-631834) DBG | I0927 00:36:02.674520   34045 retry.go:31] will retry after 258.802525ms: waiting for machine to come up
	I0927 00:36:02.934927   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:02.935352   34022 main.go:141] libmachine: (ha-631834) DBG | unable to find current IP address of domain ha-631834 in network mk-ha-631834
	I0927 00:36:02.935388   34022 main.go:141] libmachine: (ha-631834) DBG | I0927 00:36:02.935333   34045 retry.go:31] will retry after 385.263435ms: waiting for machine to come up
	I0927 00:36:03.321940   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:03.322382   34022 main.go:141] libmachine: (ha-631834) DBG | unable to find current IP address of domain ha-631834 in network mk-ha-631834
	I0927 00:36:03.322457   34022 main.go:141] libmachine: (ha-631834) DBG | I0927 00:36:03.322352   34045 retry.go:31] will retry after 458.033114ms: waiting for machine to come up
	I0927 00:36:03.782012   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:03.782379   34022 main.go:141] libmachine: (ha-631834) DBG | unable to find current IP address of domain ha-631834 in network mk-ha-631834
	I0927 00:36:03.782406   34022 main.go:141] libmachine: (ha-631834) DBG | I0927 00:36:03.782329   34045 retry.go:31] will retry after 619.891619ms: waiting for machine to come up
	I0927 00:36:04.404184   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:04.404742   34022 main.go:141] libmachine: (ha-631834) DBG | unable to find current IP address of domain ha-631834 in network mk-ha-631834
	I0927 00:36:04.404769   34022 main.go:141] libmachine: (ha-631834) DBG | I0927 00:36:04.404698   34045 retry.go:31] will retry after 668.661978ms: waiting for machine to come up
	I0927 00:36:05.074541   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:05.074956   34022 main.go:141] libmachine: (ha-631834) DBG | unable to find current IP address of domain ha-631834 in network mk-ha-631834
	I0927 00:36:05.074981   34022 main.go:141] libmachine: (ha-631834) DBG | I0927 00:36:05.074931   34045 retry.go:31] will retry after 1.139973505s: waiting for machine to come up
	I0927 00:36:06.216868   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:06.217267   34022 main.go:141] libmachine: (ha-631834) DBG | unable to find current IP address of domain ha-631834 in network mk-ha-631834
	I0927 00:36:06.217283   34022 main.go:141] libmachine: (ha-631834) DBG | I0927 00:36:06.217233   34045 retry.go:31] will retry after 1.161217409s: waiting for machine to come up
	I0927 00:36:07.380453   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:07.380855   34022 main.go:141] libmachine: (ha-631834) DBG | unable to find current IP address of domain ha-631834 in network mk-ha-631834
	I0927 00:36:07.380881   34022 main.go:141] libmachine: (ha-631834) DBG | I0927 00:36:07.380831   34045 retry.go:31] will retry after 1.625874527s: waiting for machine to come up
	I0927 00:36:09.008452   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:09.008818   34022 main.go:141] libmachine: (ha-631834) DBG | unable to find current IP address of domain ha-631834 in network mk-ha-631834
	I0927 00:36:09.008846   34022 main.go:141] libmachine: (ha-631834) DBG | I0927 00:36:09.008771   34045 retry.go:31] will retry after 1.776898319s: waiting for machine to come up
	I0927 00:36:10.787443   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:10.787818   34022 main.go:141] libmachine: (ha-631834) DBG | unable to find current IP address of domain ha-631834 in network mk-ha-631834
	I0927 00:36:10.787869   34022 main.go:141] libmachine: (ha-631834) DBG | I0927 00:36:10.787802   34045 retry.go:31] will retry after 2.764791752s: waiting for machine to come up
	I0927 00:36:13.556224   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:13.556671   34022 main.go:141] libmachine: (ha-631834) DBG | unable to find current IP address of domain ha-631834 in network mk-ha-631834
	I0927 00:36:13.556691   34022 main.go:141] libmachine: (ha-631834) DBG | I0927 00:36:13.556636   34045 retry.go:31] will retry after 2.903263764s: waiting for machine to come up
	I0927 00:36:16.461156   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:16.461600   34022 main.go:141] libmachine: (ha-631834) DBG | unable to find current IP address of domain ha-631834 in network mk-ha-631834
	I0927 00:36:16.461623   34022 main.go:141] libmachine: (ha-631834) DBG | I0927 00:36:16.461567   34045 retry.go:31] will retry after 4.074333009s: waiting for machine to come up
	I0927 00:36:20.540756   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:20.541254   34022 main.go:141] libmachine: (ha-631834) Found IP for machine: 192.168.39.4
	I0927 00:36:20.541349   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has current primary IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
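The "will retry after ..." lines above come from a polling loop whose delay grows, with jitter, on each attempt until the guest gets a DHCP lease. A rough Go sketch of that pattern (illustrative only, not minikube's retry.go):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // waitFor polls lookup until it succeeds or maxWait elapses, sleeping a
    // growing, jittered delay between attempts, similar in shape to the
    // ~200ms -> ~4s sequence in the log above.
    func waitFor(lookup func() (string, error), maxWait time.Duration) (string, error) {
        deadline := time.Now().Add(maxWait)
        backoff := 200 * time.Millisecond
        for {
            v, err := lookup()
            if err == nil {
                return v, nil
            }
            if time.Now().After(deadline) {
                return "", fmt.Errorf("timed out: %w", err)
            }
            // sleep the base delay plus up to 50% jitter, then grow the base
            time.Sleep(backoff + time.Duration(rand.Int63n(int64(backoff/2))))
            backoff = time.Duration(float64(backoff) * 1.5)
        }
    }

    func main() {
        ip, err := waitFor(func() (string, error) {
            return "", errors.New("no DHCP lease yet") // stand-in for the real IP lookup
        }, 2*time.Second)
        fmt.Println(ip, err)
    }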
	I0927 00:36:20.541373   34022 main.go:141] libmachine: (ha-631834) Reserving static IP address...
	I0927 00:36:20.541632   34022 main.go:141] libmachine: (ha-631834) DBG | unable to find host DHCP lease matching {name: "ha-631834", mac: "52:54:00:bc:09:a5", ip: "192.168.39.4"} in network mk-ha-631834
	I0927 00:36:20.614776   34022 main.go:141] libmachine: (ha-631834) DBG | Getting to WaitForSSH function...
	I0927 00:36:20.614808   34022 main.go:141] libmachine: (ha-631834) Reserved static IP address: 192.168.39.4
	I0927 00:36:20.614821   34022 main.go:141] libmachine: (ha-631834) Waiting for SSH to be available...
	I0927 00:36:20.617249   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:20.617621   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:minikube Clientid:01:52:54:00:bc:09:a5}
	I0927 00:36:20.617669   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:20.617792   34022 main.go:141] libmachine: (ha-631834) DBG | Using SSH client type: external
	I0927 00:36:20.617816   34022 main.go:141] libmachine: (ha-631834) DBG | Using SSH private key: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834/id_rsa (-rw-------)
	I0927 00:36:20.617844   34022 main.go:141] libmachine: (ha-631834) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.4 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 00:36:20.617868   34022 main.go:141] libmachine: (ha-631834) DBG | About to run SSH command:
	I0927 00:36:20.617881   34022 main.go:141] libmachine: (ha-631834) DBG | exit 0
	I0927 00:36:20.747285   34022 main.go:141] libmachine: (ha-631834) DBG | SSH cmd err, output: <nil>: 
	I0927 00:36:20.747567   34022 main.go:141] libmachine: (ha-631834) KVM machine creation complete!
	I0927 00:36:20.747871   34022 main.go:141] libmachine: (ha-631834) Calling .GetConfigRaw
	I0927 00:36:20.748388   34022 main.go:141] libmachine: (ha-631834) Calling .DriverName
	I0927 00:36:20.748565   34022 main.go:141] libmachine: (ha-631834) Calling .DriverName
	I0927 00:36:20.748693   34022 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0927 00:36:20.748716   34022 main.go:141] libmachine: (ha-631834) Calling .GetState
	I0927 00:36:20.749749   34022 main.go:141] libmachine: Detecting operating system of created instance...
	I0927 00:36:20.749770   34022 main.go:141] libmachine: Waiting for SSH to be available...
	I0927 00:36:20.749777   34022 main.go:141] libmachine: Getting to WaitForSSH function...
	I0927 00:36:20.749785   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:36:20.751512   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:20.751780   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:36:20.751802   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:20.751906   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:36:20.752078   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:36:20.752231   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:36:20.752323   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:36:20.752604   34022 main.go:141] libmachine: Using SSH client type: native
	I0927 00:36:20.752800   34022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0927 00:36:20.752812   34022 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0927 00:36:20.862622   34022 main.go:141] libmachine: SSH cmd err, output: <nil>: 
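The probe above simply runs "exit 0" over SSH with the machine's private key to decide when the guest is reachable. A self-contained Go sketch of the same check, assuming golang.org/x/crypto/ssh and placeholder paths (not the libmachine implementation):

    package main

    import (
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // sshReady dials the guest and runs "exit 0"; a nil error means SSH is up.
    func sshReady(addr, user, keyPath string) error {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no above
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        return sess.Run("exit 0")
    }

    func main() {
        // addr and user are taken from the log above; the key path is a placeholder
        if err := sshReady("192.168.39.4:22", "docker", "/path/to/id_rsa"); err != nil {
            log.Fatal(err)
        }
    }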
	I0927 00:36:20.862650   34022 main.go:141] libmachine: Detecting the provisioner...
	I0927 00:36:20.862657   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:36:20.865244   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:20.865552   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:36:20.865577   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:20.865716   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:36:20.865945   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:36:20.866143   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:36:20.866275   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:36:20.866412   34022 main.go:141] libmachine: Using SSH client type: native
	I0927 00:36:20.866570   34022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0927 00:36:20.866579   34022 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0927 00:36:20.980090   34022 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0927 00:36:20.980221   34022 main.go:141] libmachine: found compatible host: buildroot
	I0927 00:36:20.980236   34022 main.go:141] libmachine: Provisioning with buildroot...
	I0927 00:36:20.980246   34022 main.go:141] libmachine: (ha-631834) Calling .GetMachineName
	I0927 00:36:20.980486   34022 buildroot.go:166] provisioning hostname "ha-631834"
	I0927 00:36:20.980510   34022 main.go:141] libmachine: (ha-631834) Calling .GetMachineName
	I0927 00:36:20.980686   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:36:20.982900   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:20.983180   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:36:20.983205   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:20.983320   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:36:20.983483   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:36:20.983596   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:36:20.983828   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:36:20.983972   34022 main.go:141] libmachine: Using SSH client type: native
	I0927 00:36:20.984135   34022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0927 00:36:20.984146   34022 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-631834 && echo "ha-631834" | sudo tee /etc/hostname
	I0927 00:36:21.110505   34022 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-631834
	
	I0927 00:36:21.110541   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:36:21.113154   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:21.113483   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:36:21.113507   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:21.113696   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:36:21.113890   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:36:21.114053   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:36:21.114223   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:36:21.114372   34022 main.go:141] libmachine: Using SSH client type: native
	I0927 00:36:21.114529   34022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0927 00:36:21.114543   34022 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-631834' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-631834/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-631834' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 00:36:21.236395   34022 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 00:36:21.236427   34022 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19711-14935/.minikube CaCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19711-14935/.minikube}
	I0927 00:36:21.236467   34022 buildroot.go:174] setting up certificates
	I0927 00:36:21.236480   34022 provision.go:84] configureAuth start
	I0927 00:36:21.236491   34022 main.go:141] libmachine: (ha-631834) Calling .GetMachineName
	I0927 00:36:21.236728   34022 main.go:141] libmachine: (ha-631834) Calling .GetIP
	I0927 00:36:21.239154   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:21.239450   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:36:21.239489   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:21.239661   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:36:21.241898   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:21.242200   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:36:21.242217   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:21.242388   34022 provision.go:143] copyHostCerts
	I0927 00:36:21.242413   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem
	I0927 00:36:21.242453   34022 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem, removing ...
	I0927 00:36:21.242464   34022 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem
	I0927 00:36:21.242539   34022 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem (1078 bytes)
	I0927 00:36:21.242644   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem
	I0927 00:36:21.242668   34022 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem, removing ...
	I0927 00:36:21.242676   34022 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem
	I0927 00:36:21.242718   34022 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem (1123 bytes)
	I0927 00:36:21.242794   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem
	I0927 00:36:21.242826   34022 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem, removing ...
	I0927 00:36:21.242835   34022 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem
	I0927 00:36:21.242869   34022 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem (1675 bytes)
	I0927 00:36:21.242951   34022 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem org=jenkins.ha-631834 san=[127.0.0.1 192.168.39.4 ha-631834 localhost minikube]
	I0927 00:36:21.481677   34022 provision.go:177] copyRemoteCerts
	I0927 00:36:21.481751   34022 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 00:36:21.481779   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:36:21.484532   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:21.484907   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:36:21.484938   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:21.485150   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:36:21.485340   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:36:21.485466   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:36:21.485603   34022 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834/id_rsa Username:docker}
	I0927 00:36:21.574275   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0927 00:36:21.574368   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0927 00:36:21.598740   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0927 00:36:21.598797   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0927 00:36:21.622342   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0927 00:36:21.622427   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0927 00:36:21.646827   34022 provision.go:87] duration metric: took 410.33255ms to configureAuth
	I0927 00:36:21.646853   34022 buildroot.go:189] setting minikube options for container-runtime
	I0927 00:36:21.647098   34022 config.go:182] Loaded profile config "ha-631834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 00:36:21.647240   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:36:21.650164   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:21.650494   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:36:21.650526   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:21.650702   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:36:21.650908   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:36:21.651062   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:36:21.651244   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:36:21.651427   34022 main.go:141] libmachine: Using SSH client type: native
	I0927 00:36:21.651615   34022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0927 00:36:21.651635   34022 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 00:36:21.880863   34022 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 00:36:21.880887   34022 main.go:141] libmachine: Checking connection to Docker...
	I0927 00:36:21.880895   34022 main.go:141] libmachine: (ha-631834) Calling .GetURL
	I0927 00:36:21.882096   34022 main.go:141] libmachine: (ha-631834) DBG | Using libvirt version 6000000
	I0927 00:36:21.884523   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:21.884856   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:36:21.884898   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:21.885077   34022 main.go:141] libmachine: Docker is up and running!
	I0927 00:36:21.885091   34022 main.go:141] libmachine: Reticulating splines...
	I0927 00:36:21.885098   34022 client.go:171] duration metric: took 21.063880971s to LocalClient.Create
	I0927 00:36:21.885116   34022 start.go:167] duration metric: took 21.063936629s to libmachine.API.Create "ha-631834"
	I0927 00:36:21.885126   34022 start.go:293] postStartSetup for "ha-631834" (driver="kvm2")
	I0927 00:36:21.885144   34022 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 00:36:21.885159   34022 main.go:141] libmachine: (ha-631834) Calling .DriverName
	I0927 00:36:21.885420   34022 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 00:36:21.885488   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:36:21.887537   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:21.887790   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:36:21.887814   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:21.887928   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:36:21.888084   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:36:21.888274   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:36:21.888404   34022 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834/id_rsa Username:docker}
	I0927 00:36:21.975055   34022 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 00:36:21.979759   34022 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 00:36:21.979784   34022 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/addons for local assets ...
	I0927 00:36:21.979851   34022 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/files for local assets ...
	I0927 00:36:21.979941   34022 filesync.go:149] local asset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> 221382.pem in /etc/ssl/certs
	I0927 00:36:21.979953   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> /etc/ssl/certs/221382.pem
	I0927 00:36:21.980080   34022 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 00:36:21.990531   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /etc/ssl/certs/221382.pem (1708 bytes)
	I0927 00:36:22.014932   34022 start.go:296] duration metric: took 129.791559ms for postStartSetup
	I0927 00:36:22.015008   34022 main.go:141] libmachine: (ha-631834) Calling .GetConfigRaw
	I0927 00:36:22.015658   34022 main.go:141] libmachine: (ha-631834) Calling .GetIP
	I0927 00:36:22.018265   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:22.018611   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:36:22.018639   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:22.018899   34022 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/config.json ...
	I0927 00:36:22.019096   34022 start.go:128] duration metric: took 21.215390892s to createHost
	I0927 00:36:22.019120   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:36:22.021302   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:22.021602   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:36:22.021623   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:22.021782   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:36:22.021953   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:36:22.022148   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:36:22.022286   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:36:22.022416   34022 main.go:141] libmachine: Using SSH client type: native
	I0927 00:36:22.022581   34022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0927 00:36:22.022591   34022 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 00:36:22.136170   34022 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727397382.093993681
	
	I0927 00:36:22.136192   34022 fix.go:216] guest clock: 1727397382.093993681
	I0927 00:36:22.136202   34022 fix.go:229] Guest: 2024-09-27 00:36:22.093993681 +0000 UTC Remote: 2024-09-27 00:36:22.019107365 +0000 UTC m=+21.319607179 (delta=74.886316ms)
	I0927 00:36:22.136269   34022 fix.go:200] guest clock delta is within tolerance: 74.886316ms
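The tolerance check above compares the guest's clock (read via "date +%s.%N") against the host's and proceeds when the absolute delta is small enough. A tiny Go sketch of that comparison (names are illustrative, not minikube's fix.go):

    package main

    import (
        "fmt"
        "time"
    )

    // clockDeltaOK returns the absolute guest-vs-host offset and whether it is
    // within the allowed tolerance.
    func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= tolerance
    }

    func main() {
        guest := time.Now().Add(75 * time.Millisecond) // roughly the delta seen in the log
        delta, ok := clockDeltaOK(guest, time.Now(), time.Second)
        fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
    }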
	I0927 00:36:22.136280   34022 start.go:83] releasing machines lock for "ha-631834", held for 21.332646091s
	I0927 00:36:22.136304   34022 main.go:141] libmachine: (ha-631834) Calling .DriverName
	I0927 00:36:22.136563   34022 main.go:141] libmachine: (ha-631834) Calling .GetIP
	I0927 00:36:22.139383   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:22.139736   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:36:22.139759   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:22.139946   34022 main.go:141] libmachine: (ha-631834) Calling .DriverName
	I0927 00:36:22.140424   34022 main.go:141] libmachine: (ha-631834) Calling .DriverName
	I0927 00:36:22.140576   34022 main.go:141] libmachine: (ha-631834) Calling .DriverName
	I0927 00:36:22.140640   34022 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 00:36:22.140680   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:36:22.140773   34022 ssh_runner.go:195] Run: cat /version.json
	I0927 00:36:22.140798   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:36:22.143090   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:22.143433   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:36:22.143461   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:22.143480   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:22.143586   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:36:22.143765   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:36:22.143827   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:36:22.143847   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:22.143916   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:36:22.143997   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:36:22.144069   34022 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834/id_rsa Username:docker}
	I0927 00:36:22.144133   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:36:22.144262   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:36:22.144408   34022 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834/id_rsa Username:docker}
	I0927 00:36:22.243060   34022 ssh_runner.go:195] Run: systemctl --version
	I0927 00:36:22.259700   34022 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 00:36:22.415956   34022 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 00:36:22.422185   34022 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 00:36:22.422251   34022 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 00:36:22.438630   34022 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0927 00:36:22.438655   34022 start.go:495] detecting cgroup driver to use...
	I0927 00:36:22.438724   34022 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 00:36:22.456456   34022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 00:36:22.471488   34022 docker.go:217] disabling cri-docker service (if available) ...
	I0927 00:36:22.471543   34022 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 00:36:22.486032   34022 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 00:36:22.500571   34022 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 00:36:22.621816   34022 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 00:36:22.772846   34022 docker.go:233] disabling docker service ...
	I0927 00:36:22.772913   34022 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 00:36:22.787944   34022 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 00:36:22.801143   34022 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 00:36:22.939572   34022 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 00:36:23.057695   34022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 00:36:23.072091   34022 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 00:36:23.090934   34022 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0927 00:36:23.090997   34022 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:36:23.101768   34022 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 00:36:23.101839   34022 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:36:23.112607   34022 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:36:23.122981   34022 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:36:23.133563   34022 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 00:36:23.144443   34022 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:36:23.155241   34022 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:36:23.172932   34022 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:36:23.184071   34022 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 00:36:23.194018   34022 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0927 00:36:23.194075   34022 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0927 00:36:23.207498   34022 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
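The sequence above first tries to read the bridge-netfilter sysctl, falls back to loading the br_netfilter module when /proc/sys/net/bridge is absent, and then enables IPv4 forwarding. A hedged Go sketch of that fallback, shelling out the same way the log does (the helper name is made up):

    package main

    import (
        "log"
        "os/exec"
    )

    // ensureBridgeNetfilter probes the sysctl, loads br_netfilter if the key
    // is missing, then turns on IPv4 forwarding.
    func ensureBridgeNetfilter() error {
        if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
            // sysctl key not present yet; load the module that provides it
            if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
                return err
            }
        }
        return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
    }

    func main() {
        if err := ensureBridgeNetfilter(); err != nil {
            log.Fatal(err)
        }
    }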
	I0927 00:36:23.216852   34022 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:36:23.351326   34022 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0927 00:36:23.449204   34022 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 00:36:23.449280   34022 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 00:36:23.454200   34022 start.go:563] Will wait 60s for crictl version
	I0927 00:36:23.454262   34022 ssh_runner.go:195] Run: which crictl
	I0927 00:36:23.458028   34022 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 00:36:23.497638   34022 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0927 00:36:23.497711   34022 ssh_runner.go:195] Run: crio --version
	I0927 00:36:23.525615   34022 ssh_runner.go:195] Run: crio --version
	I0927 00:36:23.555870   34022 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0927 00:36:23.557109   34022 main.go:141] libmachine: (ha-631834) Calling .GetIP
	I0927 00:36:23.559689   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:23.559978   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:36:23.560009   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:23.560187   34022 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0927 00:36:23.564687   34022 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 00:36:23.577852   34022 kubeadm.go:883] updating cluster {Name:ha-631834 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-631834 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 00:36:23.577958   34022 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 00:36:23.578011   34022 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 00:36:23.610284   34022 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0927 00:36:23.610361   34022 ssh_runner.go:195] Run: which lz4
	I0927 00:36:23.614339   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0927 00:36:23.614430   34022 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0927 00:36:23.618714   34022 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0927 00:36:23.618740   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0927 00:36:24.972066   34022 crio.go:462] duration metric: took 1.357668477s to copy over tarball
	I0927 00:36:24.972137   34022 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0927 00:36:26.952440   34022 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.98028123s)
	I0927 00:36:26.952467   34022 crio.go:469] duration metric: took 1.9803713s to extract the tarball
	I0927 00:36:26.952477   34022 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0927 00:36:26.990046   34022 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 00:36:27.038137   34022 crio.go:514] all images are preloaded for cri-o runtime.
	I0927 00:36:27.038171   34022 cache_images.go:84] Images are preloaded, skipping loading
	I0927 00:36:27.038180   34022 kubeadm.go:934] updating node { 192.168.39.4 8443 v1.31.1 crio true true} ...
	I0927 00:36:27.038337   34022 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-631834 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-631834 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 00:36:27.038423   34022 ssh_runner.go:195] Run: crio config
	I0927 00:36:27.087406   34022 cni.go:84] Creating CNI manager for ""
	I0927 00:36:27.087427   34022 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0927 00:36:27.087436   34022 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 00:36:27.087455   34022 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.4 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-631834 NodeName:ha-631834 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0927 00:36:27.087584   34022 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.4
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-631834"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
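
	The InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration blocks above are rendered from the kubeadm options struct logged at kubeadm.go:181 and written to /var/tmp/minikube/kubeadm.yaml.new. A minimal, hypothetical sketch (not minikube's actual template) of rendering such a fragment from a parameter struct with text/template, using the values seen in this run:

	// kubeadm_config_sketch.go: hypothetical sketch of templating a kubeadm
	// config fragment; field names and values mirror the log, the template
	// itself is simplified for illustration.
	package main

	import (
		"os"
		"text/template"
	)

	type params struct {
		AdvertiseAddress  string
		APIServerPort     int
		NodeName          string
		PodSubnet         string
		ServiceSubnet     string
		KubernetesVersion string
	}

	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.APIServerPort}}
	nodeRegistration:
	  name: "{{.NodeName}}"
	  kubeletExtraArgs:
	    node-ip: {{.AdvertiseAddress}}
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	kubernetesVersion: {{.KubernetesVersion}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceSubnet}}
	`

	func main() {
		p := params{
			AdvertiseAddress:  "192.168.39.4",
			APIServerPort:     8443,
			NodeName:          "ha-631834",
			PodSubnet:         "10.244.0.0/16",
			ServiceSubnet:     "10.96.0.0/12",
			KubernetesVersion: "v1.31.1",
		}
		// Render to stdout; minikube scps the rendered result to the VM.
		template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, p)
	}
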
	
	I0927 00:36:27.087605   34022 kube-vip.go:115] generating kube-vip config ...
	I0927 00:36:27.087640   34022 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0927 00:36:27.104338   34022 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0927 00:36:27.104430   34022 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
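
	The generated kube-vip pod manifest above is later copied to /etc/kubernetes/manifests/kube-vip.yaml (see the scp line below); the kubelet watches that staticPodPath (per the KubeletConfiguration above) and runs kube-vip as a static pod that advertises the HA VIP 192.168.39.254 on eth0 with control-plane load-balancing enabled. A minimal sketch of installing such a static pod manifest, assuming the rendered YAML is already in hand:

	// static_pod_sketch.go: a minimal sketch of dropping a static pod manifest
	// into the kubelet's manifest directory. The manifest string is a
	// placeholder for the rendered kube-vip YAML logged above.
	package main

	import (
		"log"
		"os"
		"path/filepath"
	)

	func main() {
		manifest := "..." // placeholder: rendered kube-vip pod YAML

		dir := "/etc/kubernetes/manifests"
		if err := os.MkdirAll(dir, 0o755); err != nil {
			log.Fatal(err)
		}
		// 0644 so the kubelet (running as root) can read and start the pod.
		if err := os.WriteFile(filepath.Join(dir, "kube-vip.yaml"), []byte(manifest), 0o644); err != nil {
			log.Fatal(err)
		}
		log.Println("kube-vip static pod manifest installed")
	}
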
	I0927 00:36:27.104475   34022 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 00:36:27.114532   34022 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 00:36:27.114597   34022 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0927 00:36:27.125576   34022 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0927 00:36:27.143174   34022 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 00:36:27.159783   34022 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0927 00:36:27.177110   34022 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0927 00:36:27.193945   34022 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0927 00:36:27.197827   34022 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
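
	The bash one-liner above rewrites /etc/hosts: it drops any existing line ending in a tab plus control-plane.minikube.internal, appends the VIP mapping, and copies the result back with sudo. A minimal sketch of the same edit in Go, assuming the process can write /etc/hosts directly:

	// hosts_update_sketch.go: a minimal sketch of the /etc/hosts edit logged
	// above, assuming direct write access (the log does it via sudo cp).
	package main

	import (
		"log"
		"os"
		"strings"
	)

	func main() {
		const host = "control-plane.minikube.internal"
		const ip = "192.168.39.254"

		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			log.Fatal(err)
		}
		lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
		var kept []string
		for _, line := range lines {
			// Drop any previous mapping, mirroring grep -v '\t<host>$'.
			if strings.HasSuffix(line, "\t"+host) {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+host)
		out := strings.Join(kept, "\n") + "\n"
		if err := os.WriteFile("/etc/hosts", []byte(out), 0o644); err != nil {
			log.Fatal(err)
		}
	}
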
	I0927 00:36:27.210366   34022 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:36:27.336946   34022 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 00:36:27.354991   34022 certs.go:68] Setting up /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834 for IP: 192.168.39.4
	I0927 00:36:27.355012   34022 certs.go:194] generating shared ca certs ...
	I0927 00:36:27.355030   34022 certs.go:226] acquiring lock for ca certs: {Name:mkdfc5b7e93f77f5ae72cc653545624244421aa5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:36:27.355205   34022 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key
	I0927 00:36:27.355254   34022 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key
	I0927 00:36:27.355267   34022 certs.go:256] generating profile certs ...
	I0927 00:36:27.355348   34022 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/client.key
	I0927 00:36:27.355370   34022 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/client.crt with IP's: []
	I0927 00:36:27.682062   34022 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/client.crt ...
	I0927 00:36:27.682092   34022 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/client.crt: {Name:mk8f3bba10f88a791b79bb763eef9fe3f7d34390 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:36:27.682274   34022 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/client.key ...
	I0927 00:36:27.682289   34022 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/client.key: {Name:mk503d08fe6b48c31ea153960f6273dc934010ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:36:27.682389   34022 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key.1230d0d6
	I0927 00:36:27.682409   34022 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt.1230d0d6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.4 192.168.39.254]
	I0927 00:36:27.752883   34022 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt.1230d0d6 ...
	I0927 00:36:27.752911   34022 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt.1230d0d6: {Name:mka090c8b2557cb246619f729c0272d8e73ab4d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:36:27.753091   34022 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key.1230d0d6 ...
	I0927 00:36:27.753107   34022 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key.1230d0d6: {Name:mk32c435c509e1da50a9d54c9a27e1ed3da8b7fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:36:27.753219   34022 certs.go:381] copying /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt.1230d0d6 -> /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt
	I0927 00:36:27.753364   34022 certs.go:385] copying /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key.1230d0d6 -> /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key
	I0927 00:36:27.753446   34022 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.key
	I0927 00:36:27.753465   34022 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.crt with IP's: []
	I0927 00:36:27.888870   34022 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.crt ...
	I0927 00:36:27.888902   34022 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.crt: {Name:mk428f3282cdd0b71edcb5a948cacf34b7f69074 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:36:27.889093   34022 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.key ...
	I0927 00:36:27.889107   34022 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.key: {Name:mk092e7e928ba5ffe819bbe344c977ddad72812f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
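
	The cert steps above issue the profile's certificates against the already-existing minikubeCA: a client cert for minikube-user, an apiserver serving cert whose IP SANs include the service IP, localhost, the node IP 192.168.39.4 and the HA VIP 192.168.39.254, and an aggregator proxy-client cert. A minimal, hypothetical sketch of the standard-library pattern behind this (not minikube's crypto.go): issue a CA-signed serving certificate with IP SANs.

	// signed_cert_sketch.go: sketch of issuing a CA-signed cert with IP SANs.
	// A throwaway CA is generated so the sketch is self-contained; in the log
	// the CA comes from ~/.minikube/ca.crt and ca.key.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Serving cert with the IP SANs seen in the log line above.
		key, _ := rsa.GenerateKey(rand.Reader, 2048)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
				net.ParseIP("192.168.39.4"), net.ParseIP("192.168.39.254"),
			},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
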
	I0927 00:36:27.889205   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0927 00:36:27.889223   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0927 00:36:27.889233   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0927 00:36:27.889246   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0927 00:36:27.889256   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0927 00:36:27.889266   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0927 00:36:27.889278   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0927 00:36:27.889288   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0927 00:36:27.889339   34022 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem (1338 bytes)
	W0927 00:36:27.889372   34022 certs.go:480] ignoring /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138_empty.pem, impossibly tiny 0 bytes
	I0927 00:36:27.889381   34022 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem (1671 bytes)
	I0927 00:36:27.889401   34022 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem (1078 bytes)
	I0927 00:36:27.889423   34022 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem (1123 bytes)
	I0927 00:36:27.889452   34022 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem (1675 bytes)
	I0927 00:36:27.889488   34022 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem (1708 bytes)
	I0927 00:36:27.889514   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> /usr/share/ca-certificates/221382.pem
	I0927 00:36:27.889528   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:36:27.889540   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem -> /usr/share/ca-certificates/22138.pem
	I0927 00:36:27.890073   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 00:36:27.915212   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0927 00:36:27.938433   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 00:36:27.961704   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0927 00:36:27.985172   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0927 00:36:28.008248   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0927 00:36:28.031157   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 00:36:28.053875   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0927 00:36:28.077746   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /usr/share/ca-certificates/221382.pem (1708 bytes)
	I0927 00:36:28.100790   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 00:36:28.126305   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem --> /usr/share/ca-certificates/22138.pem (1338 bytes)
	I0927 00:36:28.148839   34022 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 00:36:28.165086   34022 ssh_runner.go:195] Run: openssl version
	I0927 00:36:28.171319   34022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 00:36:28.183230   34022 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:36:28.187750   34022 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:16 /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:36:28.187803   34022 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:36:28.193649   34022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 00:36:28.204802   34022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22138.pem && ln -fs /usr/share/ca-certificates/22138.pem /etc/ssl/certs/22138.pem"
	I0927 00:36:28.215518   34022 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22138.pem
	I0927 00:36:28.219871   34022 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 00:32 /usr/share/ca-certificates/22138.pem
	I0927 00:36:28.219914   34022 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22138.pem
	I0927 00:36:28.225559   34022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22138.pem /etc/ssl/certs/51391683.0"
	I0927 00:36:28.236534   34022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221382.pem && ln -fs /usr/share/ca-certificates/221382.pem /etc/ssl/certs/221382.pem"
	I0927 00:36:28.247541   34022 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221382.pem
	I0927 00:36:28.251956   34022 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 00:32 /usr/share/ca-certificates/221382.pem
	I0927 00:36:28.252002   34022 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221382.pem
	I0927 00:36:28.257569   34022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221382.pem /etc/ssl/certs/3ec20f2e.0"
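
	The openssl/ln sequence above installs each CA bundle under /etc/ssl/certs using the OpenSSL subject-name hash as the link name, which is how OpenSSL-based clients look up trusted CAs (b5213941.0 is the hash for minikubeCA.pem; 51391683.0 and 3ec20f2e.0 are for the test certs). A minimal sketch of that pattern, assuming openssl is on PATH and /etc/ssl/certs is writable:

	// hash_link_sketch.go: compute the OpenSSL subject hash of a certificate
	// and link it under /etc/ssl/certs as <hash>.0, mirroring the logged
	// "openssl x509 -hash -noout" plus "ln -fs" commands.
	package main

	import (
		"log"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func main() {
		cert := "/usr/share/ca-certificates/minikubeCA.pem"

		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			log.Fatal(err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem

		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // mirror ln -fs: replace any existing link
		if err := os.Symlink(cert, link); err != nil {
			log.Fatal(err)
		}
		log.Printf("linked %s -> %s", link, cert)
	}
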
	I0927 00:36:28.268557   34022 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 00:36:28.272624   34022 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0927 00:36:28.272681   34022 kubeadm.go:392] StartCluster: {Name:ha-631834 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-631834 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 00:36:28.272765   34022 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0927 00:36:28.272803   34022 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 00:36:28.310788   34022 cri.go:89] found id: ""
	I0927 00:36:28.310863   34022 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0927 00:36:28.321240   34022 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 00:36:28.331038   34022 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 00:36:28.340878   34022 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 00:36:28.340897   34022 kubeadm.go:157] found existing configuration files:
	
	I0927 00:36:28.340934   34022 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 00:36:28.350170   34022 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 00:36:28.350236   34022 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 00:36:28.359911   34022 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 00:36:28.369100   34022 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 00:36:28.369152   34022 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 00:36:28.378846   34022 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 00:36:28.388020   34022 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 00:36:28.388070   34022 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 00:36:28.397520   34022 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 00:36:28.406575   34022 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 00:36:28.406618   34022 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 00:36:28.415973   34022 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0927 00:36:28.517602   34022 kubeadm.go:310] W0927 00:36:28.474729     830 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 00:36:28.518499   34022 kubeadm.go:310] W0927 00:36:28.475845     830 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 00:36:28.620411   34022 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0927 00:36:39.196766   34022 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0927 00:36:39.196817   34022 kubeadm.go:310] [preflight] Running pre-flight checks
	I0927 00:36:39.196897   34022 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0927 00:36:39.197042   34022 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0927 00:36:39.197146   34022 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0927 00:36:39.197242   34022 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0927 00:36:39.198695   34022 out.go:235]   - Generating certificates and keys ...
	I0927 00:36:39.198783   34022 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0927 00:36:39.198874   34022 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0927 00:36:39.198967   34022 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0927 00:36:39.199046   34022 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0927 00:36:39.199135   34022 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0927 00:36:39.199205   34022 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0927 00:36:39.199287   34022 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0927 00:36:39.199453   34022 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-631834 localhost] and IPs [192.168.39.4 127.0.0.1 ::1]
	I0927 00:36:39.199543   34022 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0927 00:36:39.199699   34022 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-631834 localhost] and IPs [192.168.39.4 127.0.0.1 ::1]
	I0927 00:36:39.199796   34022 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0927 00:36:39.199890   34022 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0927 00:36:39.199953   34022 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0927 00:36:39.200035   34022 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0927 00:36:39.200121   34022 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0927 00:36:39.200212   34022 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0927 00:36:39.200291   34022 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0927 00:36:39.200372   34022 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0927 00:36:39.200439   34022 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0927 00:36:39.200531   34022 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0927 00:36:39.200632   34022 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0927 00:36:39.202948   34022 out.go:235]   - Booting up control plane ...
	I0927 00:36:39.203043   34022 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0927 00:36:39.203122   34022 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0927 00:36:39.203192   34022 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0927 00:36:39.203290   34022 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0927 00:36:39.203381   34022 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0927 00:36:39.203419   34022 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0927 00:36:39.203571   34022 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0927 00:36:39.203689   34022 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0927 00:36:39.203745   34022 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.136312ms
	I0927 00:36:39.203833   34022 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0927 00:36:39.203916   34022 kubeadm.go:310] [api-check] The API server is healthy after 5.885001913s
	I0927 00:36:39.204050   34022 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0927 00:36:39.204208   34022 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0927 00:36:39.204298   34022 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0927 00:36:39.204479   34022 kubeadm.go:310] [mark-control-plane] Marking the node ha-631834 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0927 00:36:39.204542   34022 kubeadm.go:310] [bootstrap-token] Using token: a2inhk.us1mqrkt01ocu6ik
	I0927 00:36:39.205835   34022 out.go:235]   - Configuring RBAC rules ...
	I0927 00:36:39.205939   34022 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0927 00:36:39.206027   34022 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0927 00:36:39.206203   34022 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0927 00:36:39.206359   34022 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0927 00:36:39.206513   34022 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0927 00:36:39.206623   34022 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0927 00:36:39.206783   34022 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0927 00:36:39.206841   34022 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0927 00:36:39.206903   34022 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0927 00:36:39.206913   34022 kubeadm.go:310] 
	I0927 00:36:39.206990   34022 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0927 00:36:39.207004   34022 kubeadm.go:310] 
	I0927 00:36:39.207128   34022 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0927 00:36:39.207138   34022 kubeadm.go:310] 
	I0927 00:36:39.207188   34022 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0927 00:36:39.207263   34022 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0927 00:36:39.207324   34022 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0927 00:36:39.207333   34022 kubeadm.go:310] 
	I0927 00:36:39.207377   34022 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0927 00:36:39.207383   34022 kubeadm.go:310] 
	I0927 00:36:39.207423   34022 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0927 00:36:39.207429   34022 kubeadm.go:310] 
	I0927 00:36:39.207471   34022 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0927 00:36:39.207543   34022 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0927 00:36:39.207603   34022 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0927 00:36:39.207611   34022 kubeadm.go:310] 
	I0927 00:36:39.207679   34022 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0927 00:36:39.207747   34022 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0927 00:36:39.207752   34022 kubeadm.go:310] 
	I0927 00:36:39.207858   34022 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token a2inhk.us1mqrkt01ocu6ik \
	I0927 00:36:39.207978   34022 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e8f2b64245b2533f2f6907f851e4fd61c8df67374e05cb5be25be73a61920f8e \
	I0927 00:36:39.208009   34022 kubeadm.go:310] 	--control-plane 
	I0927 00:36:39.208024   34022 kubeadm.go:310] 
	I0927 00:36:39.208133   34022 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0927 00:36:39.208140   34022 kubeadm.go:310] 
	I0927 00:36:39.208217   34022 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token a2inhk.us1mqrkt01ocu6ik \
	I0927 00:36:39.208329   34022 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e8f2b64245b2533f2f6907f851e4fd61c8df67374e05cb5be25be73a61920f8e 
	I0927 00:36:39.208342   34022 cni.go:84] Creating CNI manager for ""
	I0927 00:36:39.208348   34022 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0927 00:36:39.209742   34022 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0927 00:36:39.210824   34022 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0927 00:36:39.216482   34022 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0927 00:36:39.216498   34022 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0927 00:36:39.238534   34022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0927 00:36:39.596628   34022 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0927 00:36:39.596683   34022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:36:39.596724   34022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-631834 minikube.k8s.io/updated_at=2024_09_27T00_36_39_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625 minikube.k8s.io/name=ha-631834 minikube.k8s.io/primary=true
	I0927 00:36:39.626142   34022 ops.go:34] apiserver oom_adj: -16
	I0927 00:36:39.790024   34022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:36:40.291013   34022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:36:40.790408   34022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:36:41.290433   34022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:36:41.790624   34022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:36:42.290399   34022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:36:42.790081   34022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:36:43.290106   34022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:36:43.383411   34022 kubeadm.go:1113] duration metric: took 3.786772854s to wait for elevateKubeSystemPrivileges
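
	The repeated `kubectl get sa default` calls above are a poll (roughly every 500 ms) waiting for the "default" service account to exist, i.e. for the controller manager to finish bootstrapping service accounts, after the minikube-rbac cluster-admin binding was created at 00:36:39.596; the whole wait took 3.79 s here. A minimal sketch of that poll loop, with the kubectl and kubeconfig paths taken from the log:

	// elevate_sketch.go: a minimal sketch of the poll loop logged above; not
	// minikube's implementation, just the same wait-until-exists pattern.
	package main

	import (
		"log"
		"os/exec"
		"time"
	)

	func main() {
		kubectl := "/var/lib/minikube/binaries/v1.31.1/kubectl"
		kubeconfig := "--kubeconfig=/var/lib/minikube/kubeconfig"

		deadline := time.Now().Add(2 * time.Minute)
		for {
			// Succeeds once the "default" service account has been created.
			if err := exec.Command(kubectl, kubeconfig, "get", "sa", "default").Run(); err == nil {
				break
			}
			if time.Now().After(deadline) {
				log.Fatal("timed out waiting for default service account")
			}
			time.Sleep(500 * time.Millisecond)
		}
		log.Println("default service account is ready")
	}
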
	I0927 00:36:43.383449   34022 kubeadm.go:394] duration metric: took 15.110773171s to StartCluster
	I0927 00:36:43.383466   34022 settings.go:142] acquiring lock: {Name:mk5dca3ab86dd3a71947d9d84c3d32131258c6f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:36:43.383525   34022 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 00:36:43.384159   34022 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/kubeconfig: {Name:mke01ed683bdb96463571316956510763878395f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:36:43.384353   34022 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0927 00:36:43.384357   34022 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 00:36:43.384379   34022 start.go:241] waiting for startup goroutines ...
	I0927 00:36:43.384387   34022 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0927 00:36:43.384482   34022 addons.go:69] Setting storage-provisioner=true in profile "ha-631834"
	I0927 00:36:43.384503   34022 addons.go:234] Setting addon storage-provisioner=true in "ha-631834"
	I0927 00:36:43.384502   34022 addons.go:69] Setting default-storageclass=true in profile "ha-631834"
	I0927 00:36:43.384521   34022 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-631834"
	I0927 00:36:43.384535   34022 host.go:66] Checking if "ha-631834" exists ...
	I0927 00:36:43.384567   34022 config.go:182] Loaded profile config "ha-631834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 00:36:43.384839   34022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:36:43.384866   34022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:36:43.384944   34022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:36:43.384960   34022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:36:43.399817   34022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33427
	I0927 00:36:43.399897   34022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46299
	I0927 00:36:43.400293   34022 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:36:43.400363   34022 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:36:43.400865   34022 main.go:141] libmachine: Using API Version  1
	I0927 00:36:43.400886   34022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:36:43.401031   34022 main.go:141] libmachine: Using API Version  1
	I0927 00:36:43.401063   34022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:36:43.401250   34022 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:36:43.401432   34022 main.go:141] libmachine: (ha-631834) Calling .GetState
	I0927 00:36:43.401539   34022 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:36:43.402075   34022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:36:43.402108   34022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:36:43.403551   34022 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 00:36:43.403892   34022 kapi.go:59] client config for ha-631834: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/client.crt", KeyFile:"/home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/client.key", CAFile:"/home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f68560), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0927 00:36:43.404454   34022 cert_rotation.go:140] Starting client certificate rotation controller
	I0927 00:36:43.404728   34022 addons.go:234] Setting addon default-storageclass=true in "ha-631834"
	I0927 00:36:43.404772   34022 host.go:66] Checking if "ha-631834" exists ...
	I0927 00:36:43.405147   34022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:36:43.405179   34022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:36:43.417112   34022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44963
	I0927 00:36:43.417520   34022 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:36:43.418127   34022 main.go:141] libmachine: Using API Version  1
	I0927 00:36:43.418155   34022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:36:43.418477   34022 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:36:43.418681   34022 main.go:141] libmachine: (ha-631834) Calling .GetState
	I0927 00:36:43.419924   34022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46293
	I0927 00:36:43.420288   34022 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:36:43.420380   34022 main.go:141] libmachine: (ha-631834) Calling .DriverName
	I0927 00:36:43.420672   34022 main.go:141] libmachine: Using API Version  1
	I0927 00:36:43.420688   34022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:36:43.420969   34022 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:36:43.421504   34022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:36:43.421551   34022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:36:43.422256   34022 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 00:36:43.423360   34022 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 00:36:43.423375   34022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0927 00:36:43.423389   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:36:43.426316   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:43.426764   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:36:43.426778   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:43.426969   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:36:43.427109   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:36:43.427219   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:36:43.427355   34022 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834/id_rsa Username:docker}
	I0927 00:36:43.435962   34022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43071
	I0927 00:36:43.436362   34022 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:36:43.436730   34022 main.go:141] libmachine: Using API Version  1
	I0927 00:36:43.436746   34022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:36:43.437076   34022 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:36:43.437260   34022 main.go:141] libmachine: (ha-631834) Calling .GetState
	I0927 00:36:43.438594   34022 main.go:141] libmachine: (ha-631834) Calling .DriverName
	I0927 00:36:43.438749   34022 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0927 00:36:43.438763   34022 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0927 00:36:43.438784   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:36:43.441264   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:43.441750   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:36:43.441794   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:36:43.441824   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:43.441923   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:36:43.442101   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:36:43.442225   34022 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834/id_rsa Username:docker}
	I0927 00:36:43.549239   34022 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 00:36:43.572279   34022 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0927 00:36:43.662399   34022 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0927 00:36:44.397951   34022 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
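
	The pipeline at 00:36:43.572 fetches the coredns ConfigMap, uses sed to splice a hosts{} stanza (mapping host.minikube.internal to the host gateway 192.168.39.1) in front of the "forward . /etc/resolv.conf" line, and pushes the result back with `kubectl replace -f -`; the line above confirms the record was injected. A minimal sketch of the same Corefile edit as a pure string transformation, assuming the Corefile text is already fetched:

	// corefile_inject_sketch.go: insert a hosts{} block before the forward
	// plugin so host.minikube.internal resolves inside the cluster; a sketch
	// of the string edit behind the sed pipeline logged above.
	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		corefile := `.:53 {
	    errors
	    health
	    kubernetes cluster.local in-addr.arpa ip6.arpa
	    forward . /etc/resolv.conf
	    cache 30
	}`

		hostsBlock := `    hosts {
	       192.168.39.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf`

		patched := strings.Replace(corefile, "    forward . /etc/resolv.conf", hostsBlock, 1)
		fmt.Println(patched) // minikube sends this back via kubectl replace -f -
	}
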
	I0927 00:36:44.398036   34022 main.go:141] libmachine: Making call to close driver server
	I0927 00:36:44.398060   34022 main.go:141] libmachine: (ha-631834) Calling .Close
	I0927 00:36:44.398143   34022 main.go:141] libmachine: Making call to close driver server
	I0927 00:36:44.398170   34022 main.go:141] libmachine: (ha-631834) Calling .Close
	I0927 00:36:44.398344   34022 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:36:44.398359   34022 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:36:44.398368   34022 main.go:141] libmachine: Making call to close driver server
	I0927 00:36:44.398374   34022 main.go:141] libmachine: (ha-631834) Calling .Close
	I0927 00:36:44.398388   34022 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:36:44.398402   34022 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:36:44.398409   34022 main.go:141] libmachine: Making call to close driver server
	I0927 00:36:44.398416   34022 main.go:141] libmachine: (ha-631834) Calling .Close
	I0927 00:36:44.398649   34022 main.go:141] libmachine: (ha-631834) DBG | Closing plugin on server side
	I0927 00:36:44.398666   34022 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:36:44.398675   34022 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:36:44.398678   34022 main.go:141] libmachine: (ha-631834) DBG | Closing plugin on server side
	I0927 00:36:44.398694   34022 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:36:44.398708   34022 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:36:44.398760   34022 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0927 00:36:44.398784   34022 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0927 00:36:44.398889   34022 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0927 00:36:44.398901   34022 round_trippers.go:469] Request Headers:
	I0927 00:36:44.398911   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:36:44.398920   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:36:44.417589   34022 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0927 00:36:44.418067   34022 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0927 00:36:44.418079   34022 round_trippers.go:469] Request Headers:
	I0927 00:36:44.418087   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:36:44.418091   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:36:44.418095   34022 round_trippers.go:473]     Content-Type: application/json
	I0927 00:36:44.420490   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:36:44.420636   34022 main.go:141] libmachine: Making call to close driver server
	I0927 00:36:44.420647   34022 main.go:141] libmachine: (ha-631834) Calling .Close
	I0927 00:36:44.420904   34022 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:36:44.420921   34022 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:36:44.422479   34022 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0927 00:36:44.423550   34022 addons.go:510] duration metric: took 1.039159873s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0927 00:36:44.423595   34022 start.go:246] waiting for cluster config update ...
	I0927 00:36:44.423613   34022 start.go:255] writing updated cluster config ...
	I0927 00:36:44.425272   34022 out.go:201] 
	I0927 00:36:44.426803   34022 config.go:182] Loaded profile config "ha-631834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 00:36:44.426894   34022 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/config.json ...
	I0927 00:36:44.428362   34022 out.go:177] * Starting "ha-631834-m02" control-plane node in "ha-631834" cluster
	I0927 00:36:44.429446   34022 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 00:36:44.429473   34022 cache.go:56] Caching tarball of preloaded images
	I0927 00:36:44.429577   34022 preload.go:172] Found /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0927 00:36:44.429598   34022 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0927 00:36:44.429705   34022 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/config.json ...
	I0927 00:36:44.429910   34022 start.go:360] acquireMachinesLock for ha-631834-m02: {Name:mkef866a3f2eb57063758ef06140cca3a2e40b18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 00:36:44.429964   34022 start.go:364] duration metric: took 31.862µs to acquireMachinesLock for "ha-631834-m02"
	I0927 00:36:44.429988   34022 start.go:93] Provisioning new machine with config: &{Name:ha-631834 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-631834 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 00:36:44.430077   34022 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0927 00:36:44.431533   34022 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0927 00:36:44.431627   34022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:36:44.431667   34022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:36:44.446949   34022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37663
	I0927 00:36:44.447487   34022 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:36:44.447999   34022 main.go:141] libmachine: Using API Version  1
	I0927 00:36:44.448029   34022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:36:44.448325   34022 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:36:44.448539   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetMachineName
	I0927 00:36:44.448658   34022 main.go:141] libmachine: (ha-631834-m02) Calling .DriverName
	I0927 00:36:44.448816   34022 start.go:159] libmachine.API.Create for "ha-631834" (driver="kvm2")
	I0927 00:36:44.448842   34022 client.go:168] LocalClient.Create starting
	I0927 00:36:44.448876   34022 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem
	I0927 00:36:44.448913   34022 main.go:141] libmachine: Decoding PEM data...
	I0927 00:36:44.448937   34022 main.go:141] libmachine: Parsing certificate...
	I0927 00:36:44.449007   34022 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem
	I0927 00:36:44.449034   34022 main.go:141] libmachine: Decoding PEM data...
	I0927 00:36:44.449049   34022 main.go:141] libmachine: Parsing certificate...
	I0927 00:36:44.449076   34022 main.go:141] libmachine: Running pre-create checks...
	I0927 00:36:44.449088   34022 main.go:141] libmachine: (ha-631834-m02) Calling .PreCreateCheck
	I0927 00:36:44.449246   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetConfigRaw
	I0927 00:36:44.449638   34022 main.go:141] libmachine: Creating machine...
	I0927 00:36:44.449653   34022 main.go:141] libmachine: (ha-631834-m02) Calling .Create
	I0927 00:36:44.449792   34022 main.go:141] libmachine: (ha-631834-m02) Creating KVM machine...
	I0927 00:36:44.451021   34022 main.go:141] libmachine: (ha-631834-m02) DBG | found existing default KVM network
	I0927 00:36:44.451178   34022 main.go:141] libmachine: (ha-631834-m02) DBG | found existing private KVM network mk-ha-631834
	I0927 00:36:44.451353   34022 main.go:141] libmachine: (ha-631834-m02) Setting up store path in /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m02 ...
	I0927 00:36:44.451372   34022 main.go:141] libmachine: (ha-631834-m02) Building disk image from file:///home/jenkins/minikube-integration/19711-14935/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0927 00:36:44.451445   34022 main.go:141] libmachine: (ha-631834-m02) DBG | I0927 00:36:44.451350   34386 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19711-14935/.minikube
	I0927 00:36:44.451537   34022 main.go:141] libmachine: (ha-631834-m02) Downloading /home/jenkins/minikube-integration/19711-14935/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19711-14935/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0927 00:36:44.687379   34022 main.go:141] libmachine: (ha-631834-m02) DBG | I0927 00:36:44.687222   34386 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m02/id_rsa...
	I0927 00:36:44.751062   34022 main.go:141] libmachine: (ha-631834-m02) DBG | I0927 00:36:44.750967   34386 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m02/ha-631834-m02.rawdisk...
	I0927 00:36:44.751087   34022 main.go:141] libmachine: (ha-631834-m02) DBG | Writing magic tar header
	I0927 00:36:44.751100   34022 main.go:141] libmachine: (ha-631834-m02) DBG | Writing SSH key tar header
	I0927 00:36:44.751178   34022 main.go:141] libmachine: (ha-631834-m02) DBG | I0927 00:36:44.751110   34386 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m02 ...
	I0927 00:36:44.751293   34022 main.go:141] libmachine: (ha-631834-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m02
	I0927 00:36:44.751324   34022 main.go:141] libmachine: (ha-631834-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935/.minikube/machines
	I0927 00:36:44.751344   34022 main.go:141] libmachine: (ha-631834-m02) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m02 (perms=drwx------)
	I0927 00:36:44.751365   34022 main.go:141] libmachine: (ha-631834-m02) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935/.minikube/machines (perms=drwxr-xr-x)
	I0927 00:36:44.751378   34022 main.go:141] libmachine: (ha-631834-m02) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935/.minikube (perms=drwxr-xr-x)
	I0927 00:36:44.751392   34022 main.go:141] libmachine: (ha-631834-m02) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935 (perms=drwxrwxr-x)
	I0927 00:36:44.751400   34022 main.go:141] libmachine: (ha-631834-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0927 00:36:44.751408   34022 main.go:141] libmachine: (ha-631834-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935/.minikube
	I0927 00:36:44.751425   34022 main.go:141] libmachine: (ha-631834-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935
	I0927 00:36:44.751434   34022 main.go:141] libmachine: (ha-631834-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0927 00:36:44.751446   34022 main.go:141] libmachine: (ha-631834-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0927 00:36:44.751456   34022 main.go:141] libmachine: (ha-631834-m02) DBG | Checking permissions on dir: /home/jenkins
	I0927 00:36:44.751467   34022 main.go:141] libmachine: (ha-631834-m02) DBG | Checking permissions on dir: /home
	I0927 00:36:44.751479   34022 main.go:141] libmachine: (ha-631834-m02) DBG | Skipping /home - not owner
	I0927 00:36:44.751504   34022 main.go:141] libmachine: (ha-631834-m02) Creating domain...
	I0927 00:36:44.752461   34022 main.go:141] libmachine: (ha-631834-m02) define libvirt domain using xml: 
	I0927 00:36:44.752482   34022 main.go:141] libmachine: (ha-631834-m02) <domain type='kvm'>
	I0927 00:36:44.752492   34022 main.go:141] libmachine: (ha-631834-m02)   <name>ha-631834-m02</name>
	I0927 00:36:44.752511   34022 main.go:141] libmachine: (ha-631834-m02)   <memory unit='MiB'>2200</memory>
	I0927 00:36:44.752523   34022 main.go:141] libmachine: (ha-631834-m02)   <vcpu>2</vcpu>
	I0927 00:36:44.752535   34022 main.go:141] libmachine: (ha-631834-m02)   <features>
	I0927 00:36:44.752546   34022 main.go:141] libmachine: (ha-631834-m02)     <acpi/>
	I0927 00:36:44.752559   34022 main.go:141] libmachine: (ha-631834-m02)     <apic/>
	I0927 00:36:44.752569   34022 main.go:141] libmachine: (ha-631834-m02)     <pae/>
	I0927 00:36:44.752577   34022 main.go:141] libmachine: (ha-631834-m02)     
	I0927 00:36:44.752583   34022 main.go:141] libmachine: (ha-631834-m02)   </features>
	I0927 00:36:44.752589   34022 main.go:141] libmachine: (ha-631834-m02)   <cpu mode='host-passthrough'>
	I0927 00:36:44.752594   34022 main.go:141] libmachine: (ha-631834-m02)   
	I0927 00:36:44.752600   34022 main.go:141] libmachine: (ha-631834-m02)   </cpu>
	I0927 00:36:44.752605   34022 main.go:141] libmachine: (ha-631834-m02)   <os>
	I0927 00:36:44.752611   34022 main.go:141] libmachine: (ha-631834-m02)     <type>hvm</type>
	I0927 00:36:44.752616   34022 main.go:141] libmachine: (ha-631834-m02)     <boot dev='cdrom'/>
	I0927 00:36:44.752620   34022 main.go:141] libmachine: (ha-631834-m02)     <boot dev='hd'/>
	I0927 00:36:44.752628   34022 main.go:141] libmachine: (ha-631834-m02)     <bootmenu enable='no'/>
	I0927 00:36:44.752632   34022 main.go:141] libmachine: (ha-631834-m02)   </os>
	I0927 00:36:44.752654   34022 main.go:141] libmachine: (ha-631834-m02)   <devices>
	I0927 00:36:44.752673   34022 main.go:141] libmachine: (ha-631834-m02)     <disk type='file' device='cdrom'>
	I0927 00:36:44.752682   34022 main.go:141] libmachine: (ha-631834-m02)       <source file='/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m02/boot2docker.iso'/>
	I0927 00:36:44.752691   34022 main.go:141] libmachine: (ha-631834-m02)       <target dev='hdc' bus='scsi'/>
	I0927 00:36:44.752724   34022 main.go:141] libmachine: (ha-631834-m02)       <readonly/>
	I0927 00:36:44.752759   34022 main.go:141] libmachine: (ha-631834-m02)     </disk>
	I0927 00:36:44.752770   34022 main.go:141] libmachine: (ha-631834-m02)     <disk type='file' device='disk'>
	I0927 00:36:44.752786   34022 main.go:141] libmachine: (ha-631834-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0927 00:36:44.752803   34022 main.go:141] libmachine: (ha-631834-m02)       <source file='/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m02/ha-631834-m02.rawdisk'/>
	I0927 00:36:44.752813   34022 main.go:141] libmachine: (ha-631834-m02)       <target dev='hda' bus='virtio'/>
	I0927 00:36:44.752824   34022 main.go:141] libmachine: (ha-631834-m02)     </disk>
	I0927 00:36:44.752834   34022 main.go:141] libmachine: (ha-631834-m02)     <interface type='network'>
	I0927 00:36:44.752846   34022 main.go:141] libmachine: (ha-631834-m02)       <source network='mk-ha-631834'/>
	I0927 00:36:44.752860   34022 main.go:141] libmachine: (ha-631834-m02)       <model type='virtio'/>
	I0927 00:36:44.752870   34022 main.go:141] libmachine: (ha-631834-m02)     </interface>
	I0927 00:36:44.752876   34022 main.go:141] libmachine: (ha-631834-m02)     <interface type='network'>
	I0927 00:36:44.752888   34022 main.go:141] libmachine: (ha-631834-m02)       <source network='default'/>
	I0927 00:36:44.752898   34022 main.go:141] libmachine: (ha-631834-m02)       <model type='virtio'/>
	I0927 00:36:44.752907   34022 main.go:141] libmachine: (ha-631834-m02)     </interface>
	I0927 00:36:44.752917   34022 main.go:141] libmachine: (ha-631834-m02)     <serial type='pty'>
	I0927 00:36:44.752929   34022 main.go:141] libmachine: (ha-631834-m02)       <target port='0'/>
	I0927 00:36:44.752939   34022 main.go:141] libmachine: (ha-631834-m02)     </serial>
	I0927 00:36:44.752949   34022 main.go:141] libmachine: (ha-631834-m02)     <console type='pty'>
	I0927 00:36:44.752960   34022 main.go:141] libmachine: (ha-631834-m02)       <target type='serial' port='0'/>
	I0927 00:36:44.752971   34022 main.go:141] libmachine: (ha-631834-m02)     </console>
	I0927 00:36:44.752984   34022 main.go:141] libmachine: (ha-631834-m02)     <rng model='virtio'>
	I0927 00:36:44.753001   34022 main.go:141] libmachine: (ha-631834-m02)       <backend model='random'>/dev/random</backend>
	I0927 00:36:44.753018   34022 main.go:141] libmachine: (ha-631834-m02)     </rng>
	I0927 00:36:44.753035   34022 main.go:141] libmachine: (ha-631834-m02)     
	I0927 00:36:44.753047   34022 main.go:141] libmachine: (ha-631834-m02)     
	I0927 00:36:44.753059   34022 main.go:141] libmachine: (ha-631834-m02)   </devices>
	I0927 00:36:44.753068   34022 main.go:141] libmachine: (ha-631834-m02) </domain>
	I0927 00:36:44.753080   34022 main.go:141] libmachine: (ha-631834-m02) 
	I0927 00:36:44.759470   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:b2:c3:d6 in network default
	I0927 00:36:44.759943   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:36:44.759962   34022 main.go:141] libmachine: (ha-631834-m02) Ensuring networks are active...
	I0927 00:36:44.760578   34022 main.go:141] libmachine: (ha-631834-m02) Ensuring network default is active
	I0927 00:36:44.760849   34022 main.go:141] libmachine: (ha-631834-m02) Ensuring network mk-ha-631834 is active
	I0927 00:36:44.761213   34022 main.go:141] libmachine: (ha-631834-m02) Getting domain xml...
	I0927 00:36:44.761860   34022 main.go:141] libmachine: (ha-631834-m02) Creating domain...
	I0927 00:36:45.965093   34022 main.go:141] libmachine: (ha-631834-m02) Waiting to get IP...
	I0927 00:36:45.965811   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:36:45.966210   34022 main.go:141] libmachine: (ha-631834-m02) DBG | unable to find current IP address of domain ha-631834-m02 in network mk-ha-631834
	I0927 00:36:45.966250   34022 main.go:141] libmachine: (ha-631834-m02) DBG | I0927 00:36:45.966193   34386 retry.go:31] will retry after 219.366954ms: waiting for machine to come up
	I0927 00:36:46.187549   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:36:46.188001   34022 main.go:141] libmachine: (ha-631834-m02) DBG | unable to find current IP address of domain ha-631834-m02 in network mk-ha-631834
	I0927 00:36:46.188031   34022 main.go:141] libmachine: (ha-631834-m02) DBG | I0927 00:36:46.187959   34386 retry.go:31] will retry after 344.351684ms: waiting for machine to come up
	I0927 00:36:46.533384   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:36:46.533893   34022 main.go:141] libmachine: (ha-631834-m02) DBG | unable to find current IP address of domain ha-631834-m02 in network mk-ha-631834
	I0927 00:36:46.533918   34022 main.go:141] libmachine: (ha-631834-m02) DBG | I0927 00:36:46.533845   34386 retry.go:31] will retry after 436.44682ms: waiting for machine to come up
	I0927 00:36:46.971366   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:36:46.971845   34022 main.go:141] libmachine: (ha-631834-m02) DBG | unable to find current IP address of domain ha-631834-m02 in network mk-ha-631834
	I0927 00:36:46.971881   34022 main.go:141] libmachine: (ha-631834-m02) DBG | I0927 00:36:46.971792   34386 retry.go:31] will retry after 518.722723ms: waiting for machine to come up
	I0927 00:36:47.492370   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:36:47.492814   34022 main.go:141] libmachine: (ha-631834-m02) DBG | unable to find current IP address of domain ha-631834-m02 in network mk-ha-631834
	I0927 00:36:47.492836   34022 main.go:141] libmachine: (ha-631834-m02) DBG | I0927 00:36:47.492761   34386 retry.go:31] will retry after 458.476026ms: waiting for machine to come up
	I0927 00:36:47.952367   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:36:47.952947   34022 main.go:141] libmachine: (ha-631834-m02) DBG | unable to find current IP address of domain ha-631834-m02 in network mk-ha-631834
	I0927 00:36:47.952968   34022 main.go:141] libmachine: (ha-631834-m02) DBG | I0927 00:36:47.952905   34386 retry.go:31] will retry after 873.835695ms: waiting for machine to come up
	I0927 00:36:48.827782   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:36:48.828192   34022 main.go:141] libmachine: (ha-631834-m02) DBG | unable to find current IP address of domain ha-631834-m02 in network mk-ha-631834
	I0927 00:36:48.828221   34022 main.go:141] libmachine: (ha-631834-m02) DBG | I0927 00:36:48.828139   34386 retry.go:31] will retry after 1.00855597s: waiting for machine to come up
	I0927 00:36:49.838599   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:36:49.838959   34022 main.go:141] libmachine: (ha-631834-m02) DBG | unable to find current IP address of domain ha-631834-m02 in network mk-ha-631834
	I0927 00:36:49.838982   34022 main.go:141] libmachine: (ha-631834-m02) DBG | I0927 00:36:49.838927   34386 retry.go:31] will retry after 1.38923332s: waiting for machine to come up
	I0927 00:36:51.230578   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:36:51.231036   34022 main.go:141] libmachine: (ha-631834-m02) DBG | unable to find current IP address of domain ha-631834-m02 in network mk-ha-631834
	I0927 00:36:51.231061   34022 main.go:141] libmachine: (ha-631834-m02) DBG | I0927 00:36:51.231006   34386 retry.go:31] will retry after 1.140830763s: waiting for machine to come up
	I0927 00:36:52.373231   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:36:52.373666   34022 main.go:141] libmachine: (ha-631834-m02) DBG | unable to find current IP address of domain ha-631834-m02 in network mk-ha-631834
	I0927 00:36:52.373692   34022 main.go:141] libmachine: (ha-631834-m02) DBG | I0927 00:36:52.373621   34386 retry.go:31] will retry after 2.064225387s: waiting for machine to come up
	I0927 00:36:54.440421   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:36:54.440877   34022 main.go:141] libmachine: (ha-631834-m02) DBG | unable to find current IP address of domain ha-631834-m02 in network mk-ha-631834
	I0927 00:36:54.440901   34022 main.go:141] libmachine: (ha-631834-m02) DBG | I0927 00:36:54.440817   34386 retry.go:31] will retry after 2.699234582s: waiting for machine to come up
	I0927 00:36:57.141531   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:36:57.141923   34022 main.go:141] libmachine: (ha-631834-m02) DBG | unable to find current IP address of domain ha-631834-m02 in network mk-ha-631834
	I0927 00:36:57.141944   34022 main.go:141] libmachine: (ha-631834-m02) DBG | I0927 00:36:57.141879   34386 retry.go:31] will retry after 2.876736711s: waiting for machine to come up
	I0927 00:37:00.019979   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:00.020397   34022 main.go:141] libmachine: (ha-631834-m02) DBG | unable to find current IP address of domain ha-631834-m02 in network mk-ha-631834
	I0927 00:37:00.020415   34022 main.go:141] libmachine: (ha-631834-m02) DBG | I0927 00:37:00.020358   34386 retry.go:31] will retry after 2.739686124s: waiting for machine to come up
	I0927 00:37:02.761974   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:02.762423   34022 main.go:141] libmachine: (ha-631834-m02) DBG | unable to find current IP address of domain ha-631834-m02 in network mk-ha-631834
	I0927 00:37:02.762478   34022 main.go:141] libmachine: (ha-631834-m02) DBG | I0927 00:37:02.762348   34386 retry.go:31] will retry after 3.780270458s: waiting for machine to come up
	I0927 00:37:06.544970   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:06.545486   34022 main.go:141] libmachine: (ha-631834-m02) Found IP for machine: 192.168.39.184
	I0927 00:37:06.545515   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has current primary IP address 192.168.39.184 and MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:06.545524   34022 main.go:141] libmachine: (ha-631834-m02) Reserving static IP address...
	I0927 00:37:06.545889   34022 main.go:141] libmachine: (ha-631834-m02) DBG | unable to find host DHCP lease matching {name: "ha-631834-m02", mac: "52:54:00:f9:6f:a2", ip: "192.168.39.184"} in network mk-ha-631834
	I0927 00:37:06.617028   34022 main.go:141] libmachine: (ha-631834-m02) DBG | Getting to WaitForSSH function...
	I0927 00:37:06.617058   34022 main.go:141] libmachine: (ha-631834-m02) Reserved static IP address: 192.168.39.184
	I0927 00:37:06.617127   34022 main.go:141] libmachine: (ha-631834-m02) Waiting for SSH to be available...
	I0927 00:37:06.619198   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:06.619549   34022 main.go:141] libmachine: (ha-631834-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:f9:6f:a2", ip: ""} in network mk-ha-631834
	I0927 00:37:06.619573   34022 main.go:141] libmachine: (ha-631834-m02) DBG | unable to find defined IP address of network mk-ha-631834 interface with MAC address 52:54:00:f9:6f:a2
	I0927 00:37:06.619711   34022 main.go:141] libmachine: (ha-631834-m02) DBG | Using SSH client type: external
	I0927 00:37:06.619738   34022 main.go:141] libmachine: (ha-631834-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m02/id_rsa (-rw-------)
	I0927 00:37:06.619767   34022 main.go:141] libmachine: (ha-631834-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 00:37:06.619784   34022 main.go:141] libmachine: (ha-631834-m02) DBG | About to run SSH command:
	I0927 00:37:06.619798   34022 main.go:141] libmachine: (ha-631834-m02) DBG | exit 0
	I0927 00:37:06.623260   34022 main.go:141] libmachine: (ha-631834-m02) DBG | SSH cmd err, output: exit status 255: 
	I0927 00:37:06.623273   34022 main.go:141] libmachine: (ha-631834-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0927 00:37:06.623281   34022 main.go:141] libmachine: (ha-631834-m02) DBG | command : exit 0
	I0927 00:37:06.623290   34022 main.go:141] libmachine: (ha-631834-m02) DBG | err     : exit status 255
	I0927 00:37:06.623297   34022 main.go:141] libmachine: (ha-631834-m02) DBG | output  : 
	I0927 00:37:09.623967   34022 main.go:141] libmachine: (ha-631834-m02) DBG | Getting to WaitForSSH function...
	I0927 00:37:09.626758   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:09.627251   34022 main.go:141] libmachine: (ha-631834-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:6f:a2", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:59 +0000 UTC Type:0 Mac:52:54:00:f9:6f:a2 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-631834-m02 Clientid:01:52:54:00:f9:6f:a2}
	I0927 00:37:09.627285   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:09.627413   34022 main.go:141] libmachine: (ha-631834-m02) DBG | Using SSH client type: external
	I0927 00:37:09.627435   34022 main.go:141] libmachine: (ha-631834-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m02/id_rsa (-rw-------)
	I0927 00:37:09.627472   34022 main.go:141] libmachine: (ha-631834-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.184 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 00:37:09.627484   34022 main.go:141] libmachine: (ha-631834-m02) DBG | About to run SSH command:
	I0927 00:37:09.627495   34022 main.go:141] libmachine: (ha-631834-m02) DBG | exit 0
	I0927 00:37:09.751226   34022 main.go:141] libmachine: (ha-631834-m02) DBG | SSH cmd err, output: <nil>: 
	I0927 00:37:09.751504   34022 main.go:141] libmachine: (ha-631834-m02) KVM machine creation complete!
	I0927 00:37:09.751804   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetConfigRaw
	I0927 00:37:09.752329   34022 main.go:141] libmachine: (ha-631834-m02) Calling .DriverName
	I0927 00:37:09.752502   34022 main.go:141] libmachine: (ha-631834-m02) Calling .DriverName
	I0927 00:37:09.752645   34022 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0927 00:37:09.752657   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetState
	I0927 00:37:09.753685   34022 main.go:141] libmachine: Detecting operating system of created instance...
	I0927 00:37:09.753695   34022 main.go:141] libmachine: Waiting for SSH to be available...
	I0927 00:37:09.753702   34022 main.go:141] libmachine: Getting to WaitForSSH function...
	I0927 00:37:09.753707   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHHostname
	I0927 00:37:09.755579   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:09.755850   34022 main.go:141] libmachine: (ha-631834-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:6f:a2", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:59 +0000 UTC Type:0 Mac:52:54:00:f9:6f:a2 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-631834-m02 Clientid:01:52:54:00:f9:6f:a2}
	I0927 00:37:09.755881   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:09.755998   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHPort
	I0927 00:37:09.756145   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHKeyPath
	I0927 00:37:09.756274   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHKeyPath
	I0927 00:37:09.756413   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHUsername
	I0927 00:37:09.756589   34022 main.go:141] libmachine: Using SSH client type: native
	I0927 00:37:09.756825   34022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I0927 00:37:09.756839   34022 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0927 00:37:09.854682   34022 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 00:37:09.854708   34022 main.go:141] libmachine: Detecting the provisioner...
	I0927 00:37:09.854718   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHHostname
	I0927 00:37:09.857509   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:09.857847   34022 main.go:141] libmachine: (ha-631834-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:6f:a2", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:59 +0000 UTC Type:0 Mac:52:54:00:f9:6f:a2 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-631834-m02 Clientid:01:52:54:00:f9:6f:a2}
	I0927 00:37:09.857874   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:09.857977   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHPort
	I0927 00:37:09.858161   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHKeyPath
	I0927 00:37:09.858335   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHKeyPath
	I0927 00:37:09.858490   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHUsername
	I0927 00:37:09.858645   34022 main.go:141] libmachine: Using SSH client type: native
	I0927 00:37:09.858795   34022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I0927 00:37:09.858806   34022 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0927 00:37:09.960162   34022 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0927 00:37:09.960233   34022 main.go:141] libmachine: found compatible host: buildroot
	I0927 00:37:09.960242   34022 main.go:141] libmachine: Provisioning with buildroot...
	I0927 00:37:09.960250   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetMachineName
	I0927 00:37:09.960507   34022 buildroot.go:166] provisioning hostname "ha-631834-m02"
	I0927 00:37:09.960550   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetMachineName
	I0927 00:37:09.960744   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHHostname
	I0927 00:37:09.963548   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:09.963921   34022 main.go:141] libmachine: (ha-631834-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:6f:a2", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:59 +0000 UTC Type:0 Mac:52:54:00:f9:6f:a2 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-631834-m02 Clientid:01:52:54:00:f9:6f:a2}
	I0927 00:37:09.963943   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:09.964085   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHPort
	I0927 00:37:09.964256   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHKeyPath
	I0927 00:37:09.964403   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHKeyPath
	I0927 00:37:09.964542   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHUsername
	I0927 00:37:09.964683   34022 main.go:141] libmachine: Using SSH client type: native
	I0927 00:37:09.964874   34022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I0927 00:37:09.964887   34022 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-631834-m02 && echo "ha-631834-m02" | sudo tee /etc/hostname
	I0927 00:37:10.077518   34022 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-631834-m02
	
	I0927 00:37:10.077550   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHHostname
	I0927 00:37:10.080178   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.080540   34022 main.go:141] libmachine: (ha-631834-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:6f:a2", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:59 +0000 UTC Type:0 Mac:52:54:00:f9:6f:a2 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-631834-m02 Clientid:01:52:54:00:f9:6f:a2}
	I0927 00:37:10.080573   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.080695   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHPort
	I0927 00:37:10.080848   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHKeyPath
	I0927 00:37:10.080953   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHKeyPath
	I0927 00:37:10.081049   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHUsername
	I0927 00:37:10.081209   34022 main.go:141] libmachine: Using SSH client type: native
	I0927 00:37:10.081417   34022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I0927 00:37:10.081444   34022 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-631834-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-631834-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-631834-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 00:37:10.188307   34022 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 00:37:10.188350   34022 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19711-14935/.minikube CaCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19711-14935/.minikube}
	I0927 00:37:10.188371   34022 buildroot.go:174] setting up certificates
	I0927 00:37:10.188381   34022 provision.go:84] configureAuth start
	I0927 00:37:10.188395   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetMachineName
	I0927 00:37:10.188651   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetIP
	I0927 00:37:10.191227   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.191601   34022 main.go:141] libmachine: (ha-631834-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:6f:a2", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:59 +0000 UTC Type:0 Mac:52:54:00:f9:6f:a2 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-631834-m02 Clientid:01:52:54:00:f9:6f:a2}
	I0927 00:37:10.191637   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.191838   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHHostname
	I0927 00:37:10.194575   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.195339   34022 main.go:141] libmachine: (ha-631834-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:6f:a2", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:59 +0000 UTC Type:0 Mac:52:54:00:f9:6f:a2 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-631834-m02 Clientid:01:52:54:00:f9:6f:a2}
	I0927 00:37:10.195365   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.195518   34022 provision.go:143] copyHostCerts
	I0927 00:37:10.195546   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem
	I0927 00:37:10.195575   34022 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem, removing ...
	I0927 00:37:10.195584   34022 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem
	I0927 00:37:10.195648   34022 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem (1123 bytes)
	I0927 00:37:10.195719   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem
	I0927 00:37:10.195736   34022 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem, removing ...
	I0927 00:37:10.195740   34022 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem
	I0927 00:37:10.195763   34022 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem (1675 bytes)
	I0927 00:37:10.195803   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem
	I0927 00:37:10.195819   34022 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem, removing ...
	I0927 00:37:10.195824   34022 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem
	I0927 00:37:10.195844   34022 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem (1078 bytes)
	I0927 00:37:10.195907   34022 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem org=jenkins.ha-631834-m02 san=[127.0.0.1 192.168.39.184 ha-631834-m02 localhost minikube]
	I0927 00:37:10.245727   34022 provision.go:177] copyRemoteCerts
	I0927 00:37:10.245778   34022 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 00:37:10.245798   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHHostname
	I0927 00:37:10.248269   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.248597   34022 main.go:141] libmachine: (ha-631834-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:6f:a2", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:59 +0000 UTC Type:0 Mac:52:54:00:f9:6f:a2 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-631834-m02 Clientid:01:52:54:00:f9:6f:a2}
	I0927 00:37:10.248623   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.248784   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHPort
	I0927 00:37:10.248960   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHKeyPath
	I0927 00:37:10.249076   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHUsername
	I0927 00:37:10.249199   34022 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m02/id_rsa Username:docker}
	I0927 00:37:10.331285   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0927 00:37:10.331361   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0927 00:37:10.357400   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0927 00:37:10.357470   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0927 00:37:10.381613   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0927 00:37:10.381680   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0927 00:37:10.404641   34022 provision.go:87] duration metric: took 216.247596ms to configureAuth
	I0927 00:37:10.404666   34022 buildroot.go:189] setting minikube options for container-runtime
	I0927 00:37:10.404826   34022 config.go:182] Loaded profile config "ha-631834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 00:37:10.404895   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHHostname
	I0927 00:37:10.407260   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.407584   34022 main.go:141] libmachine: (ha-631834-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:6f:a2", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:59 +0000 UTC Type:0 Mac:52:54:00:f9:6f:a2 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-631834-m02 Clientid:01:52:54:00:f9:6f:a2}
	I0927 00:37:10.407606   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.407813   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHPort
	I0927 00:37:10.407999   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHKeyPath
	I0927 00:37:10.408158   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHKeyPath
	I0927 00:37:10.408283   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHUsername
	I0927 00:37:10.408456   34022 main.go:141] libmachine: Using SSH client type: native
	I0927 00:37:10.408663   34022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I0927 00:37:10.408684   34022 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 00:37:10.641711   34022 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 00:37:10.641732   34022 main.go:141] libmachine: Checking connection to Docker...
	I0927 00:37:10.641740   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetURL
	I0927 00:37:10.642949   34022 main.go:141] libmachine: (ha-631834-m02) DBG | Using libvirt version 6000000
	I0927 00:37:10.645171   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.645559   34022 main.go:141] libmachine: (ha-631834-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:6f:a2", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:59 +0000 UTC Type:0 Mac:52:54:00:f9:6f:a2 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-631834-m02 Clientid:01:52:54:00:f9:6f:a2}
	I0927 00:37:10.645584   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.645775   34022 main.go:141] libmachine: Docker is up and running!
	I0927 00:37:10.645789   34022 main.go:141] libmachine: Reticulating splines...
	I0927 00:37:10.645796   34022 client.go:171] duration metric: took 26.196945191s to LocalClient.Create
	I0927 00:37:10.645815   34022 start.go:167] duration metric: took 26.197002465s to libmachine.API.Create "ha-631834"
	I0927 00:37:10.645824   34022 start.go:293] postStartSetup for "ha-631834-m02" (driver="kvm2")
	I0927 00:37:10.645834   34022 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 00:37:10.645850   34022 main.go:141] libmachine: (ha-631834-m02) Calling .DriverName
	I0927 00:37:10.646066   34022 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 00:37:10.646101   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHHostname
	I0927 00:37:10.648185   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.648596   34022 main.go:141] libmachine: (ha-631834-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:6f:a2", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:59 +0000 UTC Type:0 Mac:52:54:00:f9:6f:a2 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-631834-m02 Clientid:01:52:54:00:f9:6f:a2}
	I0927 00:37:10.648623   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.648794   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHPort
	I0927 00:37:10.648930   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHKeyPath
	I0927 00:37:10.649065   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHUsername
	I0927 00:37:10.649169   34022 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m02/id_rsa Username:docker}
	I0927 00:37:10.730488   34022 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 00:37:10.734725   34022 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 00:37:10.734745   34022 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/addons for local assets ...
	I0927 00:37:10.734795   34022 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/files for local assets ...
	I0927 00:37:10.734865   34022 filesync.go:149] local asset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> 221382.pem in /etc/ssl/certs
	I0927 00:37:10.734874   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> /etc/ssl/certs/221382.pem
	I0927 00:37:10.734948   34022 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 00:37:10.746203   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /etc/ssl/certs/221382.pem (1708 bytes)
	I0927 00:37:10.770218   34022 start.go:296] duration metric: took 124.382795ms for postStartSetup
	I0927 00:37:10.770261   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetConfigRaw
	I0927 00:37:10.770829   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetIP
	I0927 00:37:10.773277   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.773651   34022 main.go:141] libmachine: (ha-631834-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:6f:a2", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:59 +0000 UTC Type:0 Mac:52:54:00:f9:6f:a2 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-631834-m02 Clientid:01:52:54:00:f9:6f:a2}
	I0927 00:37:10.773680   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.773884   34022 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/config.json ...
	I0927 00:37:10.774086   34022 start.go:128] duration metric: took 26.343999443s to createHost
	I0927 00:37:10.774110   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHHostname
	I0927 00:37:10.775957   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.776258   34022 main.go:141] libmachine: (ha-631834-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:6f:a2", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:59 +0000 UTC Type:0 Mac:52:54:00:f9:6f:a2 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-631834-m02 Clientid:01:52:54:00:f9:6f:a2}
	I0927 00:37:10.776284   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.776391   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHPort
	I0927 00:37:10.776554   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHKeyPath
	I0927 00:37:10.776671   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHKeyPath
	I0927 00:37:10.776790   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHUsername
	I0927 00:37:10.776904   34022 main.go:141] libmachine: Using SSH client type: native
	I0927 00:37:10.777080   34022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I0927 00:37:10.777095   34022 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 00:37:10.876642   34022 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727397430.856709211
	
	I0927 00:37:10.876668   34022 fix.go:216] guest clock: 1727397430.856709211
	I0927 00:37:10.876675   34022 fix.go:229] Guest: 2024-09-27 00:37:10.856709211 +0000 UTC Remote: 2024-09-27 00:37:10.774098108 +0000 UTC m=+70.074597703 (delta=82.611103ms)
	I0927 00:37:10.876688   34022 fix.go:200] guest clock delta is within tolerance: 82.611103ms
	I0927 00:37:10.876693   34022 start.go:83] releasing machines lock for "ha-631834-m02", held for 26.446717018s
	I0927 00:37:10.876711   34022 main.go:141] libmachine: (ha-631834-m02) Calling .DriverName
	I0927 00:37:10.876935   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetIP
	I0927 00:37:10.879789   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.880133   34022 main.go:141] libmachine: (ha-631834-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:6f:a2", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:59 +0000 UTC Type:0 Mac:52:54:00:f9:6f:a2 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-631834-m02 Clientid:01:52:54:00:f9:6f:a2}
	I0927 00:37:10.880157   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.882420   34022 out.go:177] * Found network options:
	I0927 00:37:10.883855   34022 out.go:177]   - NO_PROXY=192.168.39.4
	W0927 00:37:10.885148   34022 proxy.go:119] fail to check proxy env: Error ip not in block
	I0927 00:37:10.885174   34022 main.go:141] libmachine: (ha-631834-m02) Calling .DriverName
	I0927 00:37:10.885627   34022 main.go:141] libmachine: (ha-631834-m02) Calling .DriverName
	I0927 00:37:10.885793   34022 main.go:141] libmachine: (ha-631834-m02) Calling .DriverName
	I0927 00:37:10.885874   34022 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 00:37:10.885914   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHHostname
	W0927 00:37:10.885995   34022 proxy.go:119] fail to check proxy env: Error ip not in block
	I0927 00:37:10.886064   34022 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 00:37:10.886085   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHHostname
	I0927 00:37:10.888528   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.888647   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.888905   34022 main.go:141] libmachine: (ha-631834-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:6f:a2", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:59 +0000 UTC Type:0 Mac:52:54:00:f9:6f:a2 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-631834-m02 Clientid:01:52:54:00:f9:6f:a2}
	I0927 00:37:10.888931   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.888961   34022 main.go:141] libmachine: (ha-631834-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:6f:a2", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:59 +0000 UTC Type:0 Mac:52:54:00:f9:6f:a2 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-631834-m02 Clientid:01:52:54:00:f9:6f:a2}
	I0927 00:37:10.888976   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.889083   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHPort
	I0927 00:37:10.889235   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHPort
	I0927 00:37:10.889256   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHKeyPath
	I0927 00:37:10.889362   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHKeyPath
	I0927 00:37:10.889427   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHUsername
	I0927 00:37:10.889490   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHUsername
	I0927 00:37:10.889571   34022 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m02/id_rsa Username:docker}
	I0927 00:37:10.889594   34022 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m02/id_rsa Username:docker}
	I0927 00:37:11.136304   34022 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 00:37:11.142079   34022 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 00:37:11.142147   34022 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 00:37:11.158578   34022 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0927 00:37:11.158606   34022 start.go:495] detecting cgroup driver to use...
	I0927 00:37:11.158676   34022 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 00:37:11.174779   34022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 00:37:11.188680   34022 docker.go:217] disabling cri-docker service (if available) ...
	I0927 00:37:11.188733   34022 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 00:37:11.201858   34022 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 00:37:11.214760   34022 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 00:37:11.327367   34022 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 00:37:11.490795   34022 docker.go:233] disabling docker service ...
	I0927 00:37:11.490853   34022 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 00:37:11.505571   34022 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 00:37:11.518373   34022 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 00:37:11.629152   34022 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 00:37:11.740768   34022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
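The block above switches the node from Docker/cri-dockerd to CRI-O by stopping, disabling, and masking the Docker-related units. A minimal sketch of that sequence, assuming the same systemd unit names and run directly on the guest VM (minikube issues these over SSH):

```bash
# Stop and mask cri-dockerd, then Docker itself, so CRI-O is the only runtime.
# Stops are best-effort, as in the log, since the units may not be running.
sudo systemctl stop -f cri-docker.socket cri-docker.service || true
sudo systemctl disable cri-docker.socket
sudo systemctl mask cri-docker.service

sudo systemctl stop -f docker.socket docker.service || true
sudo systemctl disable docker.socket
sudo systemctl mask docker.service
systemctl is-active --quiet docker || echo "docker is inactive"
```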
	I0927 00:37:11.754787   34022 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 00:37:11.773038   34022 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0927 00:37:11.773110   34022 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:37:11.783470   34022 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 00:37:11.783521   34022 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:37:11.793940   34022 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:37:11.804039   34022 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:37:11.814196   34022 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 00:37:11.824547   34022 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:37:11.834569   34022 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:37:11.850743   34022 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
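Consolidated, the CRI-O edits applied above look like the sketch below; the sed expressions and target file are taken from the log, with the config path factored into a variable for readability.

```bash
CONF=/etc/crio/crio.conf.d/02-crio.conf
# Pin the pause image and switch CRI-O to the cgroupfs cgroup manager.
sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$CONF"
sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
# Let pods bind low ports by adding net.ipv4.ip_unprivileged_port_start=0 to default_sysctls.
sudo grep -q '^ *default_sysctls' "$CONF" || \
  sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
```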
	I0927 00:37:11.861436   34022 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 00:37:11.870606   34022 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0927 00:37:11.870649   34022 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0927 00:37:11.885756   34022 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 00:37:11.897194   34022 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:37:12.020445   34022 ssh_runner.go:195] Run: sudo systemctl restart crio
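The netfilter probe above fails because br_netfilter is not loaded on the fresh VM, which the log treats as non-fatal; the recovery it then performs is roughly:

```bash
# Load br_netfilter if the bridge sysctl is missing, enable IPv4 forwarding,
# then restart CRI-O with the new configuration (same order as in the log).
sudo sysctl net.bridge.bridge-nf-call-iptables || sudo modprobe br_netfilter
sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
sudo systemctl daemon-reload
sudo systemctl restart crio
```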
	I0927 00:37:12.107882   34022 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 00:37:12.107937   34022 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 00:37:12.113014   34022 start.go:563] Will wait 60s for crictl version
	I0927 00:37:12.113056   34022 ssh_runner.go:195] Run: which crictl
	I0927 00:37:12.116696   34022 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 00:37:12.156627   34022 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0927 00:37:12.156716   34022 ssh_runner.go:195] Run: crio --version
	I0927 00:37:12.184776   34022 ssh_runner.go:195] Run: crio --version
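Both version probes above are cheap sanity checks that the runtime just configured is the one kubeadm will talk to; run by hand they would be:

```bash
sudo crictl version   # expects RuntimeName: cri-o, RuntimeApiVersion: v1 (per the log)
crio --version        # expects 1.29.1 on this ISO
```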
	I0927 00:37:12.214285   34022 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0927 00:37:12.215642   34022 out.go:177]   - env NO_PROXY=192.168.39.4
	I0927 00:37:12.216858   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetIP
	I0927 00:37:12.219534   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:12.219884   34022 main.go:141] libmachine: (ha-631834-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:6f:a2", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:59 +0000 UTC Type:0 Mac:52:54:00:f9:6f:a2 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-631834-m02 Clientid:01:52:54:00:f9:6f:a2}
	I0927 00:37:12.219910   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:12.220066   34022 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0927 00:37:12.224146   34022 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
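The grep/rewrite pair above makes the /etc/hosts entry idempotent: any existing line for the name is filtered out before the fresh mapping is written. A simplified sketch that only appends when the name is absent, using host.minikube.internal as the example (the same pattern is reused later for control-plane.minikube.internal):

```bash
NAME=host.minikube.internal
IP=192.168.39.1
# Append the mapping only if it is not already present; the log's variant additionally
# strips any stale line for NAME before rewriting the file.
grep -q "$NAME" /etc/hosts || \
  sudo sh -c "printf '%s\t%s\n' '$IP' '$NAME' >> /etc/hosts"
```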
	I0927 00:37:12.236530   34022 mustload.go:65] Loading cluster: ha-631834
	I0927 00:37:12.236743   34022 config.go:182] Loaded profile config "ha-631834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 00:37:12.236988   34022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:37:12.237013   34022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:37:12.251316   34022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45319
	I0927 00:37:12.251795   34022 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:37:12.252245   34022 main.go:141] libmachine: Using API Version  1
	I0927 00:37:12.252265   34022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:37:12.252568   34022 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:37:12.252747   34022 main.go:141] libmachine: (ha-631834) Calling .GetState
	I0927 00:37:12.254195   34022 host.go:66] Checking if "ha-631834" exists ...
	I0927 00:37:12.254474   34022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:37:12.254499   34022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:37:12.268676   34022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45197
	I0927 00:37:12.269168   34022 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:37:12.269589   34022 main.go:141] libmachine: Using API Version  1
	I0927 00:37:12.269610   34022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:37:12.269894   34022 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:37:12.270042   34022 main.go:141] libmachine: (ha-631834) Calling .DriverName
	I0927 00:37:12.270195   34022 certs.go:68] Setting up /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834 for IP: 192.168.39.184
	I0927 00:37:12.270209   34022 certs.go:194] generating shared ca certs ...
	I0927 00:37:12.270227   34022 certs.go:226] acquiring lock for ca certs: {Name:mkdfc5b7e93f77f5ae72cc653545624244421aa5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:37:12.270367   34022 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key
	I0927 00:37:12.270424   34022 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key
	I0927 00:37:12.270437   34022 certs.go:256] generating profile certs ...
	I0927 00:37:12.270535   34022 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/client.key
	I0927 00:37:12.270563   34022 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key.2787ab8f
	I0927 00:37:12.270582   34022 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt.2787ab8f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.4 192.168.39.184 192.168.39.254]
	I0927 00:37:12.380622   34022 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt.2787ab8f ...
	I0927 00:37:12.380651   34022 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt.2787ab8f: {Name:mkabbfeb402264582fd8eeda0c7047e582633f2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:37:12.380811   34022 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key.2787ab8f ...
	I0927 00:37:12.380824   34022 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key.2787ab8f: {Name:mkfa43c1b86669a0c9318db325b03ab1136e574e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:37:12.380891   34022 certs.go:381] copying /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt.2787ab8f -> /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt
	I0927 00:37:12.381022   34022 certs.go:385] copying /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key.2787ab8f -> /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key
	I0927 00:37:12.381184   34022 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.key
	I0927 00:37:12.381199   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0927 00:37:12.381212   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0927 00:37:12.381225   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0927 00:37:12.381237   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0927 00:37:12.381255   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0927 00:37:12.381268   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0927 00:37:12.381280   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0927 00:37:12.381292   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0927 00:37:12.381342   34022 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem (1338 bytes)
	W0927 00:37:12.381368   34022 certs.go:480] ignoring /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138_empty.pem, impossibly tiny 0 bytes
	I0927 00:37:12.381377   34022 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem (1671 bytes)
	I0927 00:37:12.381397   34022 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem (1078 bytes)
	I0927 00:37:12.381429   34022 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem (1123 bytes)
	I0927 00:37:12.381449   34022 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem (1675 bytes)
	I0927 00:37:12.381485   34022 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem (1708 bytes)
	I0927 00:37:12.381525   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:37:12.381538   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem -> /usr/share/ca-certificates/22138.pem
	I0927 00:37:12.381559   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> /usr/share/ca-certificates/221382.pem
	I0927 00:37:12.381589   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:37:12.384914   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:37:12.385337   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:37:12.385363   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:37:12.385520   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:37:12.385695   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:37:12.385849   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:37:12.385970   34022 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834/id_rsa Username:docker}
	I0927 00:37:12.463600   34022 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0927 00:37:12.469050   34022 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0927 00:37:12.480901   34022 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0927 00:37:12.485274   34022 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0927 00:37:12.495588   34022 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0927 00:37:12.499742   34022 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0927 00:37:12.511921   34022 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0927 00:37:12.515813   34022 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0927 00:37:12.525592   34022 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0927 00:37:12.529819   34022 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0927 00:37:12.540367   34022 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0927 00:37:12.544115   34022 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0927 00:37:12.559955   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 00:37:12.585679   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0927 00:37:12.608898   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 00:37:12.631565   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0927 00:37:12.654159   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0927 00:37:12.677901   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0927 00:37:12.701023   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 00:37:12.723805   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0927 00:37:12.746428   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 00:37:12.770481   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem --> /usr/share/ca-certificates/22138.pem (1338 bytes)
	I0927 00:37:12.794514   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /usr/share/ca-certificates/221382.pem (1708 bytes)
	I0927 00:37:12.817381   34022 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0927 00:37:12.833441   34022 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0927 00:37:12.849543   34022 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0927 00:37:12.866255   34022 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0927 00:37:12.882530   34022 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0927 00:37:12.898460   34022 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0927 00:37:12.914236   34022 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
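All of the shared control-plane key material copied above (service-account keys, front-proxy CA, etcd CA, plus the cluster CA and profile certs) has to be byte-identical on every control-plane node. A quick cross-check, run on both ha-631834 and ha-631834-m02 after the sync (a verification sketch, not something the log itself runs):

```bash
# Matching hashes on both nodes confirm the certificate sync succeeded.
for f in sa.pub sa.key front-proxy-ca.crt front-proxy-ca.key etcd/ca.crt etcd/ca.key; do
  sudo sha256sum "/var/lib/minikube/certs/$f"
done
```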
	I0927 00:37:12.929892   34022 ssh_runner.go:195] Run: openssl version
	I0927 00:37:12.935443   34022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 00:37:12.945938   34022 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:37:12.950422   34022 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:16 /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:37:12.950473   34022 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:37:12.956276   34022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 00:37:12.967207   34022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22138.pem && ln -fs /usr/share/ca-certificates/22138.pem /etc/ssl/certs/22138.pem"
	I0927 00:37:12.978472   34022 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22138.pem
	I0927 00:37:12.982807   34022 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 00:32 /usr/share/ca-certificates/22138.pem
	I0927 00:37:12.982859   34022 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22138.pem
	I0927 00:37:12.988439   34022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22138.pem /etc/ssl/certs/51391683.0"
	I0927 00:37:12.999183   34022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221382.pem && ln -fs /usr/share/ca-certificates/221382.pem /etc/ssl/certs/221382.pem"
	I0927 00:37:13.010278   34022 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221382.pem
	I0927 00:37:13.014700   34022 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 00:32 /usr/share/ca-certificates/221382.pem
	I0927 00:37:13.014750   34022 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221382.pem
	I0927 00:37:13.020522   34022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221382.pem /etc/ssl/certs/3ec20f2e.0"
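The openssl/ln sequence above is the standard way to install a CA into the system trust store: link the PEM into /etc/ssl/certs, then add a <subject-hash>.0 symlink so OpenSSL's hash lookup finds it. Condensed for the minikubeCA certificate (hash value taken from the log):

```bash
PEM=/usr/share/ca-certificates/minikubeCA.pem
sudo ln -fs "$PEM" /etc/ssl/certs/minikubeCA.pem
HASH=$(openssl x509 -hash -noout -in "$PEM")        # b5213941 for minikubeCA, per the log
sudo test -L "/etc/ssl/certs/$HASH.0" || sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$HASH.0"
```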
	I0927 00:37:13.032168   34022 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 00:37:13.036252   34022 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0927 00:37:13.036310   34022 kubeadm.go:934] updating node {m02 192.168.39.184 8443 v1.31.1 crio true true} ...
	I0927 00:37:13.036391   34022 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-631834-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.184
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-631834 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 00:37:13.036418   34022 kube-vip.go:115] generating kube-vip config ...
	I0927 00:37:13.036450   34022 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0927 00:37:13.053748   34022 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0927 00:37:13.053813   34022 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0927 00:37:13.053866   34022 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 00:37:13.063832   34022 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0927 00:37:13.063894   34022 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0927 00:37:13.073341   34022 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0927 00:37:13.073367   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0927 00:37:13.073425   34022 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19711-14935/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0927 00:37:13.073468   34022 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19711-14935/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0927 00:37:13.073430   34022 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0927 00:37:13.077722   34022 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0927 00:37:13.077745   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0927 00:37:14.061924   34022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 00:37:14.080321   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0927 00:37:14.080396   34022 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0927 00:37:14.084997   34022 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0927 00:37:14.085031   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0927 00:37:14.368132   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0927 00:37:14.368235   34022 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0927 00:37:14.380382   34022 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0927 00:37:14.380424   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
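Because /var/lib/minikube/binaries/v1.31.1 is empty on the new machine, the three binaries are downloaded through the host cache and pushed over SSH. Fetching them directly on the node would look roughly like this, using the same dl.k8s.io URLs and checksum files referenced in the log (a sketch; minikube itself caches on the Jenkins host and scp's the files in):

```bash
VER=v1.31.1
DEST=/var/lib/minikube/binaries/$VER
sudo mkdir -p "$DEST"
for bin in kubectl kubeadm kubelet; do
  curl -fsSLo "/tmp/$bin" "https://dl.k8s.io/release/$VER/bin/linux/amd64/$bin"
  # Verify against the published .sha256 before installing.
  echo "$(curl -fsSL "https://dl.k8s.io/release/$VER/bin/linux/amd64/$bin.sha256")  /tmp/$bin" | sha256sum --check
  sudo install -m 0755 "/tmp/$bin" "$DEST/$bin"
done
```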
	I0927 00:37:14.663959   34022 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0927 00:37:14.673981   34022 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0927 00:37:14.690872   34022 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 00:37:14.708362   34022 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0927 00:37:14.725181   34022 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0927 00:37:14.729204   34022 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 00:37:14.741822   34022 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:37:14.857927   34022 ssh_runner.go:195] Run: sudo systemctl start kubelet
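With the kubeadm drop-in, the kubelet unit, and the kube-vip manifest in place, the kubelet is started ahead of the join. If it failed to come up, the usual follow-up (not part of the log) would be:

```bash
systemctl is-active kubelet || sudo journalctl -u kubelet --no-pager -n 50
```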
	I0927 00:37:14.875145   34022 host.go:66] Checking if "ha-631834" exists ...
	I0927 00:37:14.875529   34022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:37:14.875570   34022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:37:14.890402   34022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46081
	I0927 00:37:14.890838   34022 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:37:14.891373   34022 main.go:141] libmachine: Using API Version  1
	I0927 00:37:14.891394   34022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:37:14.891729   34022 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:37:14.891911   34022 main.go:141] libmachine: (ha-631834) Calling .DriverName
	I0927 00:37:14.892044   34022 start.go:317] joinCluster: &{Name:ha-631834 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cluster
Name:ha-631834 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.184 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 00:37:14.892172   34022 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0927 00:37:14.892194   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:37:14.894983   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:37:14.895381   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:37:14.895416   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:37:14.895524   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:37:14.895647   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:37:14.895747   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:37:14.895865   34022 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834/id_rsa Username:docker}
	I0927 00:37:15.056944   34022 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.184 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 00:37:15.056990   34022 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mlxu9z.6ua5c3whncxwr8h0 --discovery-token-ca-cert-hash sha256:e8f2b64245b2533f2f6907f851e4fd61c8df67374e05cb5be25be73a61920f8e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-631834-m02 --control-plane --apiserver-advertise-address=192.168.39.184 --apiserver-bind-port=8443"
	I0927 00:37:37.826684   34022 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mlxu9z.6ua5c3whncxwr8h0 --discovery-token-ca-cert-hash sha256:e8f2b64245b2533f2f6907f851e4fd61c8df67374e05cb5be25be73a61920f8e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-631834-m02 --control-plane --apiserver-advertise-address=192.168.39.184 --apiserver-bind-port=8443": (22.769665782s)
	I0927 00:37:37.826721   34022 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0927 00:37:38.375369   34022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-631834-m02 minikube.k8s.io/updated_at=2024_09_27T00_37_38_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625 minikube.k8s.io/name=ha-631834 minikube.k8s.io/primary=false
	I0927 00:37:38.497089   34022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-631834-m02 node-role.kubernetes.io/control-plane:NoSchedule-
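The join of the second control plane condenses to three steps, all visible above: mint a join command on the primary, run it on the new node with control-plane flags, then label and untaint the node. Restated with placeholders for the token and CA hash (the real values, the extra minikube.k8s.io labels, and --ignore-preflight-errors=all appear in the log but are trimmed here):

```bash
# 1) On the primary (ha-631834): print a reusable join command.
sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" \
  kubeadm token create --print-join-command --ttl=0

# 2) On ha-631834-m02: join as an additional control-plane node.
#    sudo kubeadm join control-plane.minikube.internal:8443 \
#      --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
#      --control-plane --apiserver-advertise-address=192.168.39.184 --apiserver-bind-port=8443 \
#      --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-631834-m02

# 3) Mark the node as a secondary and allow workloads to schedule on it.
kubectl label --overwrite nodes ha-631834-m02 minikube.k8s.io/primary=false
kubectl taint nodes ha-631834-m02 node-role.kubernetes.io/control-plane:NoSchedule-
```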
	I0927 00:37:38.638589   34022 start.go:319] duration metric: took 23.746539088s to joinCluster
	I0927 00:37:38.638713   34022 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.184 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 00:37:38.638954   34022 config.go:182] Loaded profile config "ha-631834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 00:37:38.640009   34022 out.go:177] * Verifying Kubernetes components...
	I0927 00:37:38.641589   34022 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:37:38.888956   34022 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 00:37:38.910605   34022 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 00:37:38.910930   34022 kapi.go:59] client config for ha-631834: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/client.crt", KeyFile:"/home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/client.key", CAFile:"/home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f68560), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0927 00:37:38.911023   34022 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.4:8443
	I0927 00:37:38.911358   34022 node_ready.go:35] waiting up to 6m0s for node "ha-631834-m02" to be "Ready" ...
	I0927 00:37:38.911504   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:38.911518   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:38.911531   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:38.911540   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:38.925042   34022 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0927 00:37:39.412340   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:39.412364   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:39.412376   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:39.412382   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:39.415703   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:39.912301   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:39.912323   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:39.912335   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:39.912340   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:39.917016   34022 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 00:37:40.411994   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:40.412018   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:40.412030   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:40.412034   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:40.415279   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:40.912076   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:40.912093   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:40.912101   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:40.912106   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:40.915241   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:40.915920   34022 node_ready.go:53] node "ha-631834-m02" has status "Ready":"False"
	I0927 00:37:41.412300   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:41.412322   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:41.412334   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:41.412339   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:41.416161   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:41.912228   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:41.912252   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:41.912262   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:41.912271   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:41.915784   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:42.411624   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:42.411645   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:42.411652   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:42.411658   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:42.415042   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:42.911632   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:42.911657   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:42.911669   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:42.911673   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:42.915043   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:43.412494   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:43.412511   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:43.412518   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:43.412521   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:43.416206   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:43.417057   34022 node_ready.go:53] node "ha-631834-m02" has status "Ready":"False"
	I0927 00:37:43.912499   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:43.912518   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:43.912526   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:43.912531   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:43.916624   34022 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 00:37:44.412544   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:44.412562   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:44.412569   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:44.412573   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:44.416020   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:44.912402   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:44.912423   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:44.912433   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:44.912437   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:45.001404   34022 round_trippers.go:574] Response Status: 200 OK in 88 milliseconds
	I0927 00:37:45.412218   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:45.412235   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:45.412242   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:45.412246   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:45.415114   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:37:45.911872   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:45.911892   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:45.911899   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:45.911903   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:45.915117   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:45.915711   34022 node_ready.go:53] node "ha-631834-m02" has status "Ready":"False"
	I0927 00:37:46.412115   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:46.412135   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:46.412142   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:46.412147   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:46.415578   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:46.911759   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:46.911782   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:46.911789   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:46.911795   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:46.914976   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:47.411947   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:47.411969   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:47.411976   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:47.411981   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:47.415038   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:47.911959   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:47.911982   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:47.911994   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:47.911999   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:47.915156   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:47.915877   34022 node_ready.go:53] node "ha-631834-m02" has status "Ready":"False"
	I0927 00:37:48.411937   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:48.411963   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:48.411972   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:48.411983   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:48.414801   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:37:48.911631   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:48.911652   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:48.911660   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:48.911665   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:48.914737   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:49.411675   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:49.411696   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:49.411704   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:49.411709   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:49.414697   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:37:49.911696   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:49.911715   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:49.911725   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:49.911731   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:49.914887   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:50.411769   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:50.411790   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:50.411797   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:50.411800   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:50.415046   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:50.415915   34022 node_ready.go:53] node "ha-631834-m02" has status "Ready":"False"
	I0927 00:37:50.912247   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:50.912268   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:50.912275   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:50.912279   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:50.915493   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:51.412530   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:51.412551   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:51.412559   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:51.412562   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:51.415870   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:51.911834   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:51.911856   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:51.911863   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:51.911868   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:51.914920   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:52.411866   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:52.411886   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:52.411894   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:52.411897   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:52.415280   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:52.912337   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:52.912367   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:52.912379   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:52.912391   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:52.915440   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:52.916052   34022 node_ready.go:53] node "ha-631834-m02" has status "Ready":"False"
	I0927 00:37:53.411693   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:53.411714   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:53.411722   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:53.411726   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:53.415015   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:53.912191   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:53.912210   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:53.912218   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:53.912222   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:53.914959   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:37:54.412320   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:54.412340   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:54.412348   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:54.412351   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:54.415317   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:37:54.911810   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:54.911833   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:54.911841   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:54.911844   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:54.914791   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:37:55.411928   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:55.411949   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:55.411957   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:55.411960   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:55.414926   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:37:55.415763   34022 node_ready.go:53] node "ha-631834-m02" has status "Ready":"False"
	I0927 00:37:55.911749   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:55.911770   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:55.911777   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:55.911781   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:55.915450   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:56.412537   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:56.412558   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:56.412566   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:56.412569   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:56.416170   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:56.911854   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:56.911874   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:56.911883   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:56.911887   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:56.914948   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:56.915561   34022 node_ready.go:49] node "ha-631834-m02" has status "Ready":"True"
	I0927 00:37:56.915579   34022 node_ready.go:38] duration metric: took 18.004197532s for node "ha-631834-m02" to be "Ready" ...
	I0927 00:37:56.915587   34022 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 00:37:56.915672   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods
	I0927 00:37:56.915682   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:56.915688   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:56.915691   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:56.928535   34022 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0927 00:37:56.934559   34022 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-479dv" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:56.934630   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-479dv
	I0927 00:37:56.934641   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:56.934652   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:56.934657   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:56.938001   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:56.940808   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:37:56.940821   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:56.940828   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:56.940832   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:56.943740   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:37:56.944239   34022 pod_ready.go:93] pod "coredns-7c65d6cfc9-479dv" in "kube-system" namespace has status "Ready":"True"
	I0927 00:37:56.944253   34022 pod_ready.go:82] duration metric: took 9.674838ms for pod "coredns-7c65d6cfc9-479dv" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:56.944261   34022 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kg8kf" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:56.944310   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-kg8kf
	I0927 00:37:56.944318   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:56.944324   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:56.944332   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:56.946515   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:37:56.947127   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:37:56.947143   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:56.947150   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:56.947157   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:56.949055   34022 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0927 00:37:56.949993   34022 pod_ready.go:93] pod "coredns-7c65d6cfc9-kg8kf" in "kube-system" namespace has status "Ready":"True"
	I0927 00:37:56.950013   34022 pod_ready.go:82] duration metric: took 5.744559ms for pod "coredns-7c65d6cfc9-kg8kf" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:56.950024   34022 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-631834" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:56.950083   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/etcd-ha-631834
	I0927 00:37:56.950095   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:56.950105   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:56.950113   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:56.952861   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:37:56.953382   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:37:56.953398   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:56.953408   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:56.953415   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:56.955580   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:37:56.955956   34022 pod_ready.go:93] pod "etcd-ha-631834" in "kube-system" namespace has status "Ready":"True"
	I0927 00:37:56.955972   34022 pod_ready.go:82] duration metric: took 5.938111ms for pod "etcd-ha-631834" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:56.955979   34022 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-631834-m02" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:56.956028   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/etcd-ha-631834-m02
	I0927 00:37:56.956037   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:56.956044   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:56.956048   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:56.958144   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:37:56.958682   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:56.958694   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:56.958702   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:56.958707   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:56.960779   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:37:56.961169   34022 pod_ready.go:93] pod "etcd-ha-631834-m02" in "kube-system" namespace has status "Ready":"True"
	I0927 00:37:56.961183   34022 pod_ready.go:82] duration metric: took 5.19893ms for pod "etcd-ha-631834-m02" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:56.961195   34022 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-631834" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:57.112502   34022 request.go:632] Waited for 151.252386ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-631834
	I0927 00:37:57.112559   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-631834
	I0927 00:37:57.112565   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:57.112572   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:57.112576   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:57.115770   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:57.312171   34022 request.go:632] Waited for 195.713659ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:37:57.312216   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:37:57.312221   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:57.312229   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:57.312232   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:57.315816   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:57.316859   34022 pod_ready.go:93] pod "kube-apiserver-ha-631834" in "kube-system" namespace has status "Ready":"True"
	I0927 00:37:57.316874   34022 pod_ready.go:82] duration metric: took 355.673456ms for pod "kube-apiserver-ha-631834" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:57.316882   34022 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-631834-m02" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:57.511936   34022 request.go:632] Waited for 194.980446ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-631834-m02
	I0927 00:37:57.512026   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-631834-m02
	I0927 00:37:57.512043   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:57.512054   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:57.512063   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:57.515153   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:57.712254   34022 request.go:632] Waited for 196.382367ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:57.712356   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:57.712368   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:57.712378   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:57.712386   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:57.716196   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:57.716807   34022 pod_ready.go:93] pod "kube-apiserver-ha-631834-m02" in "kube-system" namespace has status "Ready":"True"
	I0927 00:37:57.716829   34022 pod_ready.go:82] duration metric: took 399.939153ms for pod "kube-apiserver-ha-631834-m02" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:57.716844   34022 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-631834" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:57.912822   34022 request.go:632] Waited for 195.90758ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-631834
	I0927 00:37:57.912904   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-631834
	I0927 00:37:57.912912   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:57.912922   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:57.912933   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:57.916051   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:58.112039   34022 request.go:632] Waited for 195.329642ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:37:58.112122   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:37:58.112127   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:58.112136   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:58.112143   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:58.115508   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:58.115975   34022 pod_ready.go:93] pod "kube-controller-manager-ha-631834" in "kube-system" namespace has status "Ready":"True"
	I0927 00:37:58.115994   34022 pod_ready.go:82] duration metric: took 399.142534ms for pod "kube-controller-manager-ha-631834" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:58.116003   34022 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-631834-m02" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:58.312103   34022 request.go:632] Waited for 196.038569ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-631834-m02
	I0927 00:37:58.312152   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-631834-m02
	I0927 00:37:58.312162   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:58.312170   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:58.312174   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:58.314795   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:37:58.511939   34022 request.go:632] Waited for 196.327635ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:58.511988   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:58.511994   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:58.512003   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:58.512010   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:58.515560   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:58.516257   34022 pod_ready.go:93] pod "kube-controller-manager-ha-631834-m02" in "kube-system" namespace has status "Ready":"True"
	I0927 00:37:58.516284   34022 pod_ready.go:82] duration metric: took 400.272757ms for pod "kube-controller-manager-ha-631834-m02" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:58.516296   34022 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7n244" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:58.712241   34022 request.go:632] Waited for 195.877878ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7n244
	I0927 00:37:58.712303   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7n244
	I0927 00:37:58.712310   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:58.712331   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:58.712385   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:58.715681   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:58.911944   34022 request.go:632] Waited for 195.32001ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:37:58.912017   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:37:58.912022   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:58.912029   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:58.912033   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:58.914780   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:37:58.915682   34022 pod_ready.go:93] pod "kube-proxy-7n244" in "kube-system" namespace has status "Ready":"True"
	I0927 00:37:58.915708   34022 pod_ready.go:82] duration metric: took 399.399725ms for pod "kube-proxy-7n244" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:58.915722   34022 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-x2hvh" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:59.112621   34022 request.go:632] Waited for 196.830611ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x2hvh
	I0927 00:37:59.112695   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x2hvh
	I0927 00:37:59.112702   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:59.112711   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:59.112717   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:59.116056   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:59.312264   34022 request.go:632] Waited for 195.403458ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:59.312315   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:59.312320   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:59.312371   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:59.312391   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:59.315926   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:59.316477   34022 pod_ready.go:93] pod "kube-proxy-x2hvh" in "kube-system" namespace has status "Ready":"True"
	I0927 00:37:59.316499   34022 pod_ready.go:82] duration metric: took 400.770291ms for pod "kube-proxy-x2hvh" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:59.316508   34022 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-631834" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:59.511836   34022 request.go:632] Waited for 195.271471ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-631834
	I0927 00:37:59.511920   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-631834
	I0927 00:37:59.511931   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:59.511939   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:59.511948   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:59.515136   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:59.712221   34022 request.go:632] Waited for 196.384821ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:37:59.712289   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:37:59.712294   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:59.712302   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:59.712309   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:59.715391   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:59.716333   34022 pod_ready.go:93] pod "kube-scheduler-ha-631834" in "kube-system" namespace has status "Ready":"True"
	I0927 00:37:59.716356   34022 pod_ready.go:82] duration metric: took 399.841544ms for pod "kube-scheduler-ha-631834" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:59.716375   34022 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-631834-m02" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:59.912751   34022 request.go:632] Waited for 196.300793ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-631834-m02
	I0927 00:37:59.912870   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-631834-m02
	I0927 00:37:59.912884   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:59.912894   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:59.912902   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:59.916551   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:38:00.112471   34022 request.go:632] Waited for 195.315992ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:38:00.112520   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:38:00.112525   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:00.112532   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:00.112535   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:00.115509   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:38:00.116194   34022 pod_ready.go:93] pod "kube-scheduler-ha-631834-m02" in "kube-system" namespace has status "Ready":"True"
	I0927 00:38:00.116211   34022 pod_ready.go:82] duration metric: took 399.824793ms for pod "kube-scheduler-ha-631834-m02" in "kube-system" namespace to be "Ready" ...
	I0927 00:38:00.116221   34022 pod_ready.go:39] duration metric: took 3.200608197s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 00:38:00.116243   34022 api_server.go:52] waiting for apiserver process to appear ...
	I0927 00:38:00.116294   34022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 00:38:00.135868   34022 api_server.go:72] duration metric: took 21.497115723s to wait for apiserver process to appear ...
	I0927 00:38:00.135895   34022 api_server.go:88] waiting for apiserver healthz status ...
	I0927 00:38:00.135917   34022 api_server.go:253] Checking apiserver healthz at https://192.168.39.4:8443/healthz ...
	I0927 00:38:00.140183   34022 api_server.go:279] https://192.168.39.4:8443/healthz returned 200:
	ok
	I0927 00:38:00.140253   34022 round_trippers.go:463] GET https://192.168.39.4:8443/version
	I0927 00:38:00.140266   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:00.140276   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:00.140279   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:00.141056   34022 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0927 00:38:00.141139   34022 api_server.go:141] control plane version: v1.31.1
	I0927 00:38:00.141154   34022 api_server.go:131] duration metric: took 5.252594ms to wait for apiserver health ...
	I0927 00:38:00.141160   34022 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 00:38:00.312479   34022 request.go:632] Waited for 171.239847ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods
	I0927 00:38:00.312534   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods
	I0927 00:38:00.312539   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:00.312546   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:00.312551   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:00.317803   34022 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0927 00:38:00.322748   34022 system_pods.go:59] 17 kube-system pods found
	I0927 00:38:00.322780   34022 system_pods.go:61] "coredns-7c65d6cfc9-479dv" [ee318b64-2274-4106-93ed-9f62151107f1] Running
	I0927 00:38:00.322785   34022 system_pods.go:61] "coredns-7c65d6cfc9-kg8kf" [ee98faac-e03c-427f-9a78-2cf06d2f85cf] Running
	I0927 00:38:00.322788   34022 system_pods.go:61] "etcd-ha-631834" [b8f1f451-d21c-4424-876e-7bd03381c7be] Running
	I0927 00:38:00.322791   34022 system_pods.go:61] "etcd-ha-631834-m02" [940292d8-f09a-4baa-9689-2099794ed736] Running
	I0927 00:38:00.322794   34022 system_pods.go:61] "kindnet-l6ncl" [3861149b-7c67-4d48-9d24-8fa08aefda61] Running
	I0927 00:38:00.322797   34022 system_pods.go:61] "kindnet-x7kr9" [a4f57dcf-a410-46e7-a539-0ad5f9fb2baf] Running
	I0927 00:38:00.322800   34022 system_pods.go:61] "kube-apiserver-ha-631834" [365182f9-e6fd-40f4-8f9f-a46de26a61d8] Running
	I0927 00:38:00.322804   34022 system_pods.go:61] "kube-apiserver-ha-631834-m02" [bc22191d-9799-4639-8ff2-3fdb3ae97be3] Running
	I0927 00:38:00.322807   34022 system_pods.go:61] "kube-controller-manager-ha-631834" [4b0a02b1-60a5-45bc-b9a0-dd5a0346da3d] Running
	I0927 00:38:00.322811   34022 system_pods.go:61] "kube-controller-manager-ha-631834-m02" [22f26e4f-f220-4682-ba5c-e3131880aab4] Running
	I0927 00:38:00.322814   34022 system_pods.go:61] "kube-proxy-7n244" [d9fac118-1b31-4cf3-bc21-a4536e45a511] Running
	I0927 00:38:00.322817   34022 system_pods.go:61] "kube-proxy-x2hvh" [81ada94c-89b8-4815-92e9-58edd00ef64f] Running
	I0927 00:38:00.322819   34022 system_pods.go:61] "kube-scheduler-ha-631834" [9e0b9052-8574-406b-987f-2ef799f40533] Running
	I0927 00:38:00.322822   34022 system_pods.go:61] "kube-scheduler-ha-631834-m02" [7952ee5f-18be-4863-a13a-39c4ee7acf29] Running
	I0927 00:38:00.322826   34022 system_pods.go:61] "kube-vip-ha-631834" [58aa0bcf-1f78-4ee9-8a7b-18afaf6a634c] Running
	I0927 00:38:00.322829   34022 system_pods.go:61] "kube-vip-ha-631834-m02" [75b23ac9-b5e5-4a90-b5ef-951dd52c1752] Running
	I0927 00:38:00.322832   34022 system_pods.go:61] "storage-provisioner" [dbafe551-2645-4016-83f6-1133824d926d] Running
	I0927 00:38:00.322837   34022 system_pods.go:74] duration metric: took 181.672494ms to wait for pod list to return data ...
	I0927 00:38:00.322843   34022 default_sa.go:34] waiting for default service account to be created ...
	I0927 00:38:00.512235   34022 request.go:632] Waited for 189.330159ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/default/serviceaccounts
	I0927 00:38:00.512297   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/default/serviceaccounts
	I0927 00:38:00.512302   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:00.512309   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:00.512313   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:00.517819   34022 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0927 00:38:00.518071   34022 default_sa.go:45] found service account: "default"
	I0927 00:38:00.518095   34022 default_sa.go:55] duration metric: took 195.245876ms for default service account to be created ...
	I0927 00:38:00.518107   34022 system_pods.go:116] waiting for k8s-apps to be running ...
	I0927 00:38:00.712113   34022 request.go:632] Waited for 193.916786ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods
	I0927 00:38:00.712176   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods
	I0927 00:38:00.712183   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:00.712193   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:00.712199   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:00.716946   34022 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 00:38:00.721442   34022 system_pods.go:86] 17 kube-system pods found
	I0927 00:38:00.721467   34022 system_pods.go:89] "coredns-7c65d6cfc9-479dv" [ee318b64-2274-4106-93ed-9f62151107f1] Running
	I0927 00:38:00.721472   34022 system_pods.go:89] "coredns-7c65d6cfc9-kg8kf" [ee98faac-e03c-427f-9a78-2cf06d2f85cf] Running
	I0927 00:38:00.721476   34022 system_pods.go:89] "etcd-ha-631834" [b8f1f451-d21c-4424-876e-7bd03381c7be] Running
	I0927 00:38:00.721479   34022 system_pods.go:89] "etcd-ha-631834-m02" [940292d8-f09a-4baa-9689-2099794ed736] Running
	I0927 00:38:00.721482   34022 system_pods.go:89] "kindnet-l6ncl" [3861149b-7c67-4d48-9d24-8fa08aefda61] Running
	I0927 00:38:00.721486   34022 system_pods.go:89] "kindnet-x7kr9" [a4f57dcf-a410-46e7-a539-0ad5f9fb2baf] Running
	I0927 00:38:00.721489   34022 system_pods.go:89] "kube-apiserver-ha-631834" [365182f9-e6fd-40f4-8f9f-a46de26a61d8] Running
	I0927 00:38:00.721493   34022 system_pods.go:89] "kube-apiserver-ha-631834-m02" [bc22191d-9799-4639-8ff2-3fdb3ae97be3] Running
	I0927 00:38:00.721496   34022 system_pods.go:89] "kube-controller-manager-ha-631834" [4b0a02b1-60a5-45bc-b9a0-dd5a0346da3d] Running
	I0927 00:38:00.721500   34022 system_pods.go:89] "kube-controller-manager-ha-631834-m02" [22f26e4f-f220-4682-ba5c-e3131880aab4] Running
	I0927 00:38:00.721503   34022 system_pods.go:89] "kube-proxy-7n244" [d9fac118-1b31-4cf3-bc21-a4536e45a511] Running
	I0927 00:38:00.721506   34022 system_pods.go:89] "kube-proxy-x2hvh" [81ada94c-89b8-4815-92e9-58edd00ef64f] Running
	I0927 00:38:00.721510   34022 system_pods.go:89] "kube-scheduler-ha-631834" [9e0b9052-8574-406b-987f-2ef799f40533] Running
	I0927 00:38:00.721512   34022 system_pods.go:89] "kube-scheduler-ha-631834-m02" [7952ee5f-18be-4863-a13a-39c4ee7acf29] Running
	I0927 00:38:00.721515   34022 system_pods.go:89] "kube-vip-ha-631834" [58aa0bcf-1f78-4ee9-8a7b-18afaf6a634c] Running
	I0927 00:38:00.721518   34022 system_pods.go:89] "kube-vip-ha-631834-m02" [75b23ac9-b5e5-4a90-b5ef-951dd52c1752] Running
	I0927 00:38:00.721520   34022 system_pods.go:89] "storage-provisioner" [dbafe551-2645-4016-83f6-1133824d926d] Running
	I0927 00:38:00.721525   34022 system_pods.go:126] duration metric: took 203.413353ms to wait for k8s-apps to be running ...
	I0927 00:38:00.721531   34022 system_svc.go:44] waiting for kubelet service to be running ....
	I0927 00:38:00.721569   34022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 00:38:00.736846   34022 system_svc.go:56] duration metric: took 15.307058ms WaitForService to wait for kubelet
	I0927 00:38:00.736868   34022 kubeadm.go:582] duration metric: took 22.09812477s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 00:38:00.736883   34022 node_conditions.go:102] verifying NodePressure condition ...
	I0927 00:38:00.912548   34022 request.go:632] Waited for 175.604909ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes
	I0927 00:38:00.912614   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes
	I0927 00:38:00.912620   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:00.912629   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:00.912637   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:00.916934   34022 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 00:38:00.918457   34022 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 00:38:00.918481   34022 node_conditions.go:123] node cpu capacity is 2
	I0927 00:38:00.918495   34022 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 00:38:00.918500   34022 node_conditions.go:123] node cpu capacity is 2
	I0927 00:38:00.918505   34022 node_conditions.go:105] duration metric: took 181.617208ms to run NodePressure ...
	I0927 00:38:00.918514   34022 start.go:241] waiting for startup goroutines ...
	I0927 00:38:00.918536   34022 start.go:255] writing updated cluster config ...
	I0927 00:38:00.920669   34022 out.go:201] 
	I0927 00:38:00.922354   34022 config.go:182] Loaded profile config "ha-631834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 00:38:00.922437   34022 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/config.json ...
	I0927 00:38:00.924101   34022 out.go:177] * Starting "ha-631834-m03" control-plane node in "ha-631834" cluster
	I0927 00:38:00.925280   34022 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 00:38:00.925296   34022 cache.go:56] Caching tarball of preloaded images
	I0927 00:38:00.925400   34022 preload.go:172] Found /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0927 00:38:00.925413   34022 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0927 00:38:00.925494   34022 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/config.json ...
	I0927 00:38:00.925653   34022 start.go:360] acquireMachinesLock for ha-631834-m03: {Name:mkef866a3f2eb57063758ef06140cca3a2e40b18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 00:38:00.925710   34022 start.go:364] duration metric: took 40.934µs to acquireMachinesLock for "ha-631834-m03"
	I0927 00:38:00.925731   34022 start.go:93] Provisioning new machine with config: &{Name:ha-631834 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-631834 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.184 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 00:38:00.925834   34022 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0927 00:38:00.927492   34022 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0927 00:38:00.927590   34022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:38:00.927628   34022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:38:00.942435   34022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46221
	I0927 00:38:00.942900   34022 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:38:00.943351   34022 main.go:141] libmachine: Using API Version  1
	I0927 00:38:00.943370   34022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:38:00.943711   34022 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:38:00.943853   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetMachineName
	I0927 00:38:00.943978   34022 main.go:141] libmachine: (ha-631834-m03) Calling .DriverName
	I0927 00:38:00.944142   34022 start.go:159] libmachine.API.Create for "ha-631834" (driver="kvm2")
	I0927 00:38:00.944167   34022 client.go:168] LocalClient.Create starting
	I0927 00:38:00.944197   34022 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem
	I0927 00:38:00.944234   34022 main.go:141] libmachine: Decoding PEM data...
	I0927 00:38:00.944249   34022 main.go:141] libmachine: Parsing certificate...
	I0927 00:38:00.944293   34022 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem
	I0927 00:38:00.944314   34022 main.go:141] libmachine: Decoding PEM data...
	I0927 00:38:00.944324   34022 main.go:141] libmachine: Parsing certificate...
	I0927 00:38:00.944337   34022 main.go:141] libmachine: Running pre-create checks...
	I0927 00:38:00.944345   34022 main.go:141] libmachine: (ha-631834-m03) Calling .PreCreateCheck
	I0927 00:38:00.944509   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetConfigRaw
	I0927 00:38:00.944854   34022 main.go:141] libmachine: Creating machine...
	I0927 00:38:00.944866   34022 main.go:141] libmachine: (ha-631834-m03) Calling .Create
	I0927 00:38:00.945006   34022 main.go:141] libmachine: (ha-631834-m03) Creating KVM machine...
	I0927 00:38:00.946130   34022 main.go:141] libmachine: (ha-631834-m03) DBG | found existing default KVM network
	I0927 00:38:00.946246   34022 main.go:141] libmachine: (ha-631834-m03) DBG | found existing private KVM network mk-ha-631834
	I0927 00:38:00.946370   34022 main.go:141] libmachine: (ha-631834-m03) Setting up store path in /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m03 ...
	I0927 00:38:00.946396   34022 main.go:141] libmachine: (ha-631834-m03) Building disk image from file:///home/jenkins/minikube-integration/19711-14935/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0927 00:38:00.946450   34022 main.go:141] libmachine: (ha-631834-m03) DBG | I0927 00:38:00.946342   34779 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19711-14935/.minikube
	I0927 00:38:00.946538   34022 main.go:141] libmachine: (ha-631834-m03) Downloading /home/jenkins/minikube-integration/19711-14935/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19711-14935/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0927 00:38:01.172256   34022 main.go:141] libmachine: (ha-631834-m03) DBG | I0927 00:38:01.172126   34779 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m03/id_rsa...
	I0927 00:38:01.300878   34022 main.go:141] libmachine: (ha-631834-m03) DBG | I0927 00:38:01.300754   34779 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m03/ha-631834-m03.rawdisk...
	I0927 00:38:01.300913   34022 main.go:141] libmachine: (ha-631834-m03) DBG | Writing magic tar header
	I0927 00:38:01.300930   34022 main.go:141] libmachine: (ha-631834-m03) DBG | Writing SSH key tar header
	I0927 00:38:01.300947   34022 main.go:141] libmachine: (ha-631834-m03) DBG | I0927 00:38:01.300907   34779 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m03 ...
	I0927 00:38:01.301077   34022 main.go:141] libmachine: (ha-631834-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m03
	I0927 00:38:01.301177   34022 main.go:141] libmachine: (ha-631834-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935/.minikube/machines
	I0927 00:38:01.301201   34022 main.go:141] libmachine: (ha-631834-m03) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m03 (perms=drwx------)
	I0927 00:38:01.301210   34022 main.go:141] libmachine: (ha-631834-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935/.minikube
	I0927 00:38:01.301221   34022 main.go:141] libmachine: (ha-631834-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935
	I0927 00:38:01.301229   34022 main.go:141] libmachine: (ha-631834-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0927 00:38:01.301238   34022 main.go:141] libmachine: (ha-631834-m03) DBG | Checking permissions on dir: /home/jenkins
	I0927 00:38:01.301243   34022 main.go:141] libmachine: (ha-631834-m03) DBG | Checking permissions on dir: /home
	I0927 00:38:01.301252   34022 main.go:141] libmachine: (ha-631834-m03) DBG | Skipping /home - not owner
	I0927 00:38:01.301261   34022 main.go:141] libmachine: (ha-631834-m03) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935/.minikube/machines (perms=drwxr-xr-x)
	I0927 00:38:01.301272   34022 main.go:141] libmachine: (ha-631834-m03) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935/.minikube (perms=drwxr-xr-x)
	I0927 00:38:01.301340   34022 main.go:141] libmachine: (ha-631834-m03) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935 (perms=drwxrwxr-x)
	I0927 00:38:01.301369   34022 main.go:141] libmachine: (ha-631834-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0927 00:38:01.301385   34022 main.go:141] libmachine: (ha-631834-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0927 00:38:01.301397   34022 main.go:141] libmachine: (ha-631834-m03) Creating domain...
	I0927 00:38:01.302347   34022 main.go:141] libmachine: (ha-631834-m03) define libvirt domain using xml: 
	I0927 00:38:01.302369   34022 main.go:141] libmachine: (ha-631834-m03) <domain type='kvm'>
	I0927 00:38:01.302379   34022 main.go:141] libmachine: (ha-631834-m03)   <name>ha-631834-m03</name>
	I0927 00:38:01.302387   34022 main.go:141] libmachine: (ha-631834-m03)   <memory unit='MiB'>2200</memory>
	I0927 00:38:01.302396   34022 main.go:141] libmachine: (ha-631834-m03)   <vcpu>2</vcpu>
	I0927 00:38:01.302403   34022 main.go:141] libmachine: (ha-631834-m03)   <features>
	I0927 00:38:01.302416   34022 main.go:141] libmachine: (ha-631834-m03)     <acpi/>
	I0927 00:38:01.302423   34022 main.go:141] libmachine: (ha-631834-m03)     <apic/>
	I0927 00:38:01.302428   34022 main.go:141] libmachine: (ha-631834-m03)     <pae/>
	I0927 00:38:01.302434   34022 main.go:141] libmachine: (ha-631834-m03)     
	I0927 00:38:01.302439   34022 main.go:141] libmachine: (ha-631834-m03)   </features>
	I0927 00:38:01.302446   34022 main.go:141] libmachine: (ha-631834-m03)   <cpu mode='host-passthrough'>
	I0927 00:38:01.302451   34022 main.go:141] libmachine: (ha-631834-m03)   
	I0927 00:38:01.302457   34022 main.go:141] libmachine: (ha-631834-m03)   </cpu>
	I0927 00:38:01.302482   34022 main.go:141] libmachine: (ha-631834-m03)   <os>
	I0927 00:38:01.302504   34022 main.go:141] libmachine: (ha-631834-m03)     <type>hvm</type>
	I0927 00:38:01.302517   34022 main.go:141] libmachine: (ha-631834-m03)     <boot dev='cdrom'/>
	I0927 00:38:01.302528   34022 main.go:141] libmachine: (ha-631834-m03)     <boot dev='hd'/>
	I0927 00:38:01.302541   34022 main.go:141] libmachine: (ha-631834-m03)     <bootmenu enable='no'/>
	I0927 00:38:01.302550   34022 main.go:141] libmachine: (ha-631834-m03)   </os>
	I0927 00:38:01.302558   34022 main.go:141] libmachine: (ha-631834-m03)   <devices>
	I0927 00:38:01.302567   34022 main.go:141] libmachine: (ha-631834-m03)     <disk type='file' device='cdrom'>
	I0927 00:38:01.302594   34022 main.go:141] libmachine: (ha-631834-m03)       <source file='/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m03/boot2docker.iso'/>
	I0927 00:38:01.302616   34022 main.go:141] libmachine: (ha-631834-m03)       <target dev='hdc' bus='scsi'/>
	I0927 00:38:01.302629   34022 main.go:141] libmachine: (ha-631834-m03)       <readonly/>
	I0927 00:38:01.302639   34022 main.go:141] libmachine: (ha-631834-m03)     </disk>
	I0927 00:38:01.302651   34022 main.go:141] libmachine: (ha-631834-m03)     <disk type='file' device='disk'>
	I0927 00:38:01.302663   34022 main.go:141] libmachine: (ha-631834-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0927 00:38:01.302681   34022 main.go:141] libmachine: (ha-631834-m03)       <source file='/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m03/ha-631834-m03.rawdisk'/>
	I0927 00:38:01.302695   34022 main.go:141] libmachine: (ha-631834-m03)       <target dev='hda' bus='virtio'/>
	I0927 00:38:01.302706   34022 main.go:141] libmachine: (ha-631834-m03)     </disk>
	I0927 00:38:01.302713   34022 main.go:141] libmachine: (ha-631834-m03)     <interface type='network'>
	I0927 00:38:01.302718   34022 main.go:141] libmachine: (ha-631834-m03)       <source network='mk-ha-631834'/>
	I0927 00:38:01.302725   34022 main.go:141] libmachine: (ha-631834-m03)       <model type='virtio'/>
	I0927 00:38:01.302733   34022 main.go:141] libmachine: (ha-631834-m03)     </interface>
	I0927 00:38:01.302743   34022 main.go:141] libmachine: (ha-631834-m03)     <interface type='network'>
	I0927 00:38:01.302756   34022 main.go:141] libmachine: (ha-631834-m03)       <source network='default'/>
	I0927 00:38:01.302769   34022 main.go:141] libmachine: (ha-631834-m03)       <model type='virtio'/>
	I0927 00:38:01.302780   34022 main.go:141] libmachine: (ha-631834-m03)     </interface>
	I0927 00:38:01.302786   34022 main.go:141] libmachine: (ha-631834-m03)     <serial type='pty'>
	I0927 00:38:01.302798   34022 main.go:141] libmachine: (ha-631834-m03)       <target port='0'/>
	I0927 00:38:01.302806   34022 main.go:141] libmachine: (ha-631834-m03)     </serial>
	I0927 00:38:01.302811   34022 main.go:141] libmachine: (ha-631834-m03)     <console type='pty'>
	I0927 00:38:01.302824   34022 main.go:141] libmachine: (ha-631834-m03)       <target type='serial' port='0'/>
	I0927 00:38:01.302835   34022 main.go:141] libmachine: (ha-631834-m03)     </console>
	I0927 00:38:01.302846   34022 main.go:141] libmachine: (ha-631834-m03)     <rng model='virtio'>
	I0927 00:38:01.302853   34022 main.go:141] libmachine: (ha-631834-m03)       <backend model='random'>/dev/random</backend>
	I0927 00:38:01.302860   34022 main.go:141] libmachine: (ha-631834-m03)     </rng>
	I0927 00:38:01.302867   34022 main.go:141] libmachine: (ha-631834-m03)     
	I0927 00:38:01.302871   34022 main.go:141] libmachine: (ha-631834-m03)     
	I0927 00:38:01.302876   34022 main.go:141] libmachine: (ha-631834-m03)   </devices>
	I0927 00:38:01.302885   34022 main.go:141] libmachine: (ha-631834-m03) </domain>
	I0927 00:38:01.302891   34022 main.go:141] libmachine: (ha-631834-m03) 
	I0927 00:38:01.309656   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4f:aa:cd in network default
	I0927 00:38:01.310171   34022 main.go:141] libmachine: (ha-631834-m03) Ensuring networks are active...
	I0927 00:38:01.310187   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:01.310859   34022 main.go:141] libmachine: (ha-631834-m03) Ensuring network default is active
	I0927 00:38:01.311183   34022 main.go:141] libmachine: (ha-631834-m03) Ensuring network mk-ha-631834 is active
	I0927 00:38:01.311550   34022 main.go:141] libmachine: (ha-631834-m03) Getting domain xml...
	I0927 00:38:01.312351   34022 main.go:141] libmachine: (ha-631834-m03) Creating domain...
	I0927 00:38:02.542322   34022 main.go:141] libmachine: (ha-631834-m03) Waiting to get IP...
	I0927 00:38:02.542980   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:02.543377   34022 main.go:141] libmachine: (ha-631834-m03) DBG | unable to find current IP address of domain ha-631834-m03 in network mk-ha-631834
	I0927 00:38:02.543426   34022 main.go:141] libmachine: (ha-631834-m03) DBG | I0927 00:38:02.543365   34779 retry.go:31] will retry after 295.787312ms: waiting for machine to come up
	I0927 00:38:02.840874   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:02.841334   34022 main.go:141] libmachine: (ha-631834-m03) DBG | unable to find current IP address of domain ha-631834-m03 in network mk-ha-631834
	I0927 00:38:02.841363   34022 main.go:141] libmachine: (ha-631834-m03) DBG | I0927 00:38:02.841297   34779 retry.go:31] will retry after 248.489193ms: waiting for machine to come up
	I0927 00:38:03.091718   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:03.092118   34022 main.go:141] libmachine: (ha-631834-m03) DBG | unable to find current IP address of domain ha-631834-m03 in network mk-ha-631834
	I0927 00:38:03.092144   34022 main.go:141] libmachine: (ha-631834-m03) DBG | I0927 00:38:03.092091   34779 retry.go:31] will retry after 441.574448ms: waiting for machine to come up
	I0927 00:38:03.535897   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:03.536373   34022 main.go:141] libmachine: (ha-631834-m03) DBG | unable to find current IP address of domain ha-631834-m03 in network mk-ha-631834
	I0927 00:38:03.536426   34022 main.go:141] libmachine: (ha-631834-m03) DBG | I0927 00:38:03.536344   34779 retry.go:31] will retry after 516.671192ms: waiting for machine to come up
	I0927 00:38:04.054938   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:04.055415   34022 main.go:141] libmachine: (ha-631834-m03) DBG | unable to find current IP address of domain ha-631834-m03 in network mk-ha-631834
	I0927 00:38:04.055448   34022 main.go:141] libmachine: (ha-631834-m03) DBG | I0927 00:38:04.055376   34779 retry.go:31] will retry after 716.952406ms: waiting for machine to come up
	I0927 00:38:04.774184   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:04.774597   34022 main.go:141] libmachine: (ha-631834-m03) DBG | unable to find current IP address of domain ha-631834-m03 in network mk-ha-631834
	I0927 00:38:04.774626   34022 main.go:141] libmachine: (ha-631834-m03) DBG | I0927 00:38:04.774544   34779 retry.go:31] will retry after 932.879879ms: waiting for machine to come up
	I0927 00:38:05.710264   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:05.710744   34022 main.go:141] libmachine: (ha-631834-m03) DBG | unable to find current IP address of domain ha-631834-m03 in network mk-ha-631834
	I0927 00:38:05.710771   34022 main.go:141] libmachine: (ha-631834-m03) DBG | I0927 00:38:05.710689   34779 retry.go:31] will retry after 865.055707ms: waiting for machine to come up
	I0927 00:38:06.577372   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:06.577736   34022 main.go:141] libmachine: (ha-631834-m03) DBG | unable to find current IP address of domain ha-631834-m03 in network mk-ha-631834
	I0927 00:38:06.577763   34022 main.go:141] libmachine: (ha-631834-m03) DBG | I0927 00:38:06.577713   34779 retry.go:31] will retry after 1.070388843s: waiting for machine to come up
	I0927 00:38:07.649656   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:07.650114   34022 main.go:141] libmachine: (ha-631834-m03) DBG | unable to find current IP address of domain ha-631834-m03 in network mk-ha-631834
	I0927 00:38:07.650136   34022 main.go:141] libmachine: (ha-631834-m03) DBG | I0927 00:38:07.650079   34779 retry.go:31] will retry after 1.328681925s: waiting for machine to come up
	I0927 00:38:08.980362   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:08.980901   34022 main.go:141] libmachine: (ha-631834-m03) DBG | unable to find current IP address of domain ha-631834-m03 in network mk-ha-631834
	I0927 00:38:08.980930   34022 main.go:141] libmachine: (ha-631834-m03) DBG | I0927 00:38:08.980854   34779 retry.go:31] will retry after 1.891343357s: waiting for machine to come up
	I0927 00:38:10.874136   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:10.874597   34022 main.go:141] libmachine: (ha-631834-m03) DBG | unable to find current IP address of domain ha-631834-m03 in network mk-ha-631834
	I0927 00:38:10.874626   34022 main.go:141] libmachine: (ha-631834-m03) DBG | I0927 00:38:10.874547   34779 retry.go:31] will retry after 1.77968387s: waiting for machine to come up
	I0927 00:38:12.656346   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:12.656707   34022 main.go:141] libmachine: (ha-631834-m03) DBG | unable to find current IP address of domain ha-631834-m03 in network mk-ha-631834
	I0927 00:38:12.656734   34022 main.go:141] libmachine: (ha-631834-m03) DBG | I0927 00:38:12.656661   34779 retry.go:31] will retry after 2.690596335s: waiting for machine to come up
	I0927 00:38:15.349488   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:15.349902   34022 main.go:141] libmachine: (ha-631834-m03) DBG | unable to find current IP address of domain ha-631834-m03 in network mk-ha-631834
	I0927 00:38:15.349938   34022 main.go:141] libmachine: (ha-631834-m03) DBG | I0927 00:38:15.349838   34779 retry.go:31] will retry after 3.212522074s: waiting for machine to come up
	I0927 00:38:18.564307   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:18.564733   34022 main.go:141] libmachine: (ha-631834-m03) DBG | unable to find current IP address of domain ha-631834-m03 in network mk-ha-631834
	I0927 00:38:18.564759   34022 main.go:141] libmachine: (ha-631834-m03) DBG | I0927 00:38:18.564688   34779 retry.go:31] will retry after 5.536998184s: waiting for machine to come up
	I0927 00:38:24.105735   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:24.106267   34022 main.go:141] libmachine: (ha-631834-m03) Found IP for machine: 192.168.39.92
	I0927 00:38:24.106298   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has current primary IP address 192.168.39.92 and MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:24.106307   34022 main.go:141] libmachine: (ha-631834-m03) Reserving static IP address...
	I0927 00:38:24.106789   34022 main.go:141] libmachine: (ha-631834-m03) DBG | unable to find host DHCP lease matching {name: "ha-631834-m03", mac: "52:54:00:4c:25:39", ip: "192.168.39.92"} in network mk-ha-631834
	I0927 00:38:24.178177   34022 main.go:141] libmachine: (ha-631834-m03) Reserved static IP address: 192.168.39.92
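The widening retry intervals above (roughly 0.5 s up to 5.5 s) are libmachine polling the libvirt network's DHCP leases until the freshly created domain reports an address for its MAC. A minimal Go sketch of that increasing-backoff-with-jitter shape follows; lookupLeaseIP is a hypothetical stand-in for the lease query, not minikube's actual helper, and the exact interval sequence in the log is not reproduced.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupLeaseIP stands in for querying the libvirt network's DHCP leases
// for the domain's MAC address (hypothetical helper for this sketch).
func lookupLeaseIP(mac string) (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP polls until a lease appears, sleeping a little longer (with
// jitter) after every failed attempt, like the retry.go lines above.
func waitForIP(mac string, deadline time.Duration) (string, error) {
	wait := 500 * time.Millisecond
	start := time.Now()
	for time.Since(start) < deadline {
		if ip, err := lookupLeaseIP(mac); err == nil {
			return ip, nil
		}
		sleep := wait + time.Duration(rand.Int63n(int64(wait)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		wait *= 2 // grow the base interval before the next poll
	}
	return "", fmt.Errorf("no DHCP lease for %s within %v", mac, deadline)
}

func main() {
	if ip, err := waitForIP("52:54:00:4c:25:39", 3*time.Second); err != nil {
		fmt.Println("error:", err)
	} else {
		fmt.Println("found IP:", ip)
	}
}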
	I0927 00:38:24.178214   34022 main.go:141] libmachine: (ha-631834-m03) DBG | Getting to WaitForSSH function...
	I0927 00:38:24.178222   34022 main.go:141] libmachine: (ha-631834-m03) Waiting for SSH to be available...
	I0927 00:38:24.180785   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:24.181172   34022 main.go:141] libmachine: (ha-631834-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:25:39", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:38:15 +0000 UTC Type:0 Mac:52:54:00:4c:25:39 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:minikube Clientid:01:52:54:00:4c:25:39}
	I0927 00:38:24.181205   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined IP address 192.168.39.92 and MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:24.181352   34022 main.go:141] libmachine: (ha-631834-m03) DBG | Using SSH client type: external
	I0927 00:38:24.181375   34022 main.go:141] libmachine: (ha-631834-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m03/id_rsa (-rw-------)
	I0927 00:38:24.181402   34022 main.go:141] libmachine: (ha-631834-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.92 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 00:38:24.181416   34022 main.go:141] libmachine: (ha-631834-m03) DBG | About to run SSH command:
	I0927 00:38:24.181425   34022 main.go:141] libmachine: (ha-631834-m03) DBG | exit 0
	I0927 00:38:24.307152   34022 main.go:141] libmachine: (ha-631834-m03) DBG | SSH cmd err, output: <nil>: 
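Both SSH waits reduce to running exit 0 on the guest and treating a clean exit as "reachable"; this first pass uses the external ssh binary with options like the ones in the DBG line above. A rough equivalent sketch, where the host address and key path are placeholders rather than values a real run would hard-code:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReachable mirrors the exit 0 probe: it shells out to ssh with a subset
// of the options shown above and reports whether the command exited cleanly.
func sshReachable(host, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-i", keyPath,
		"docker@"+host,
		"exit 0")
	return cmd.Run() == nil
}

func main() {
	for i := 0; i < 3; i++ {
		if sshReachable("192.168.39.92", "/path/to/id_rsa") {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("SSH never became available")
}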
	I0927 00:38:24.307447   34022 main.go:141] libmachine: (ha-631834-m03) KVM machine creation complete!
	I0927 00:38:24.307763   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetConfigRaw
	I0927 00:38:24.308355   34022 main.go:141] libmachine: (ha-631834-m03) Calling .DriverName
	I0927 00:38:24.308580   34022 main.go:141] libmachine: (ha-631834-m03) Calling .DriverName
	I0927 00:38:24.308729   34022 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0927 00:38:24.308741   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetState
	I0927 00:38:24.310053   34022 main.go:141] libmachine: Detecting operating system of created instance...
	I0927 00:38:24.310069   34022 main.go:141] libmachine: Waiting for SSH to be available...
	I0927 00:38:24.310082   34022 main.go:141] libmachine: Getting to WaitForSSH function...
	I0927 00:38:24.310091   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHHostname
	I0927 00:38:24.312140   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:24.312456   34022 main.go:141] libmachine: (ha-631834-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:25:39", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:38:15 +0000 UTC Type:0 Mac:52:54:00:4c:25:39 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-631834-m03 Clientid:01:52:54:00:4c:25:39}
	I0927 00:38:24.312481   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined IP address 192.168.39.92 and MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:24.312582   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHPort
	I0927 00:38:24.312762   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHKeyPath
	I0927 00:38:24.312951   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHKeyPath
	I0927 00:38:24.313095   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHUsername
	I0927 00:38:24.313255   34022 main.go:141] libmachine: Using SSH client type: native
	I0927 00:38:24.313466   34022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0927 00:38:24.313480   34022 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0927 00:38:24.422933   34022 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 00:38:24.422970   34022 main.go:141] libmachine: Detecting the provisioner...
	I0927 00:38:24.422980   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHHostname
	I0927 00:38:24.426661   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:24.427100   34022 main.go:141] libmachine: (ha-631834-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:25:39", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:38:15 +0000 UTC Type:0 Mac:52:54:00:4c:25:39 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-631834-m03 Clientid:01:52:54:00:4c:25:39}
	I0927 00:38:24.427125   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined IP address 192.168.39.92 and MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:24.427318   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHPort
	I0927 00:38:24.427511   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHKeyPath
	I0927 00:38:24.427638   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHKeyPath
	I0927 00:38:24.427791   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHUsername
	I0927 00:38:24.427987   34022 main.go:141] libmachine: Using SSH client type: native
	I0927 00:38:24.428244   34022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0927 00:38:24.428263   34022 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0927 00:38:24.540183   34022 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0927 00:38:24.540244   34022 main.go:141] libmachine: found compatible host: buildroot
	I0927 00:38:24.540253   34022 main.go:141] libmachine: Provisioning with buildroot...
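Provisioner detection is just the cat /etc/os-release output above matched against known IDs (Buildroot here). A small sketch of that parsing, fed with the sample values from this log:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease splits KEY=VALUE lines such as the ones printed above into
// a map, trimming surrounding quotes from the values.
func parseOSRelease(contents string) map[string]string {
	info := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(contents))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || !strings.Contains(line, "=") {
			continue
		}
		parts := strings.SplitN(line, "=", 2)
		info[parts[0]] = strings.Trim(parts[1], `"`)
	}
	return info
}

func main() {
	sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	info := parseOSRelease(sample)
	if info["ID"] == "buildroot" {
		fmt.Println("found compatible host:", info["ID"])
	}
}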
	I0927 00:38:24.540261   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetMachineName
	I0927 00:38:24.540508   34022 buildroot.go:166] provisioning hostname "ha-631834-m03"
	I0927 00:38:24.540530   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetMachineName
	I0927 00:38:24.540689   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHHostname
	I0927 00:38:24.543040   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:24.543414   34022 main.go:141] libmachine: (ha-631834-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:25:39", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:38:15 +0000 UTC Type:0 Mac:52:54:00:4c:25:39 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-631834-m03 Clientid:01:52:54:00:4c:25:39}
	I0927 00:38:24.543443   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined IP address 192.168.39.92 and MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:24.543611   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHPort
	I0927 00:38:24.543765   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHKeyPath
	I0927 00:38:24.543907   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHKeyPath
	I0927 00:38:24.544102   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHUsername
	I0927 00:38:24.544311   34022 main.go:141] libmachine: Using SSH client type: native
	I0927 00:38:24.544483   34022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0927 00:38:24.544499   34022 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-631834-m03 && echo "ha-631834-m03" | sudo tee /etc/hostname
	I0927 00:38:24.670921   34022 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-631834-m03
	
	I0927 00:38:24.670950   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHHostname
	I0927 00:38:24.673565   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:24.673864   34022 main.go:141] libmachine: (ha-631834-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:25:39", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:38:15 +0000 UTC Type:0 Mac:52:54:00:4c:25:39 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-631834-m03 Clientid:01:52:54:00:4c:25:39}
	I0927 00:38:24.673890   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined IP address 192.168.39.92 and MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:24.674020   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHPort
	I0927 00:38:24.674183   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHKeyPath
	I0927 00:38:24.674310   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHKeyPath
	I0927 00:38:24.674419   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHUsername
	I0927 00:38:24.674647   34022 main.go:141] libmachine: Using SSH client type: native
	I0927 00:38:24.674798   34022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0927 00:38:24.674812   34022 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-631834-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-631834-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-631834-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 00:38:24.791979   34022 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 00:38:24.792005   34022 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19711-14935/.minikube CaCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19711-14935/.minikube}
	I0927 00:38:24.792027   34022 buildroot.go:174] setting up certificates
	I0927 00:38:24.792036   34022 provision.go:84] configureAuth start
	I0927 00:38:24.792044   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetMachineName
	I0927 00:38:24.792291   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetIP
	I0927 00:38:24.794829   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:24.795183   34022 main.go:141] libmachine: (ha-631834-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:25:39", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:38:15 +0000 UTC Type:0 Mac:52:54:00:4c:25:39 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-631834-m03 Clientid:01:52:54:00:4c:25:39}
	I0927 00:38:24.795216   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined IP address 192.168.39.92 and MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:24.795380   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHHostname
	I0927 00:38:24.797351   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:24.797611   34022 main.go:141] libmachine: (ha-631834-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:25:39", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:38:15 +0000 UTC Type:0 Mac:52:54:00:4c:25:39 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-631834-m03 Clientid:01:52:54:00:4c:25:39}
	I0927 00:38:24.797635   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined IP address 192.168.39.92 and MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:24.797733   34022 provision.go:143] copyHostCerts
	I0927 00:38:24.797765   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem
	I0927 00:38:24.797804   34022 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem, removing ...
	I0927 00:38:24.797814   34022 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem
	I0927 00:38:24.797876   34022 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem (1078 bytes)
	I0927 00:38:24.797945   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem
	I0927 00:38:24.797964   34022 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem, removing ...
	I0927 00:38:24.797980   34022 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem
	I0927 00:38:24.798015   34022 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem (1123 bytes)
	I0927 00:38:24.798060   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem
	I0927 00:38:24.798079   34022 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem, removing ...
	I0927 00:38:24.798086   34022 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem
	I0927 00:38:24.798115   34022 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem (1675 bytes)
	I0927 00:38:24.798186   34022 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem org=jenkins.ha-631834-m03 san=[127.0.0.1 192.168.39.92 ha-631834-m03 localhost minikube]
	I0927 00:38:24.887325   34022 provision.go:177] copyRemoteCerts
	I0927 00:38:24.887388   34022 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 00:38:24.887417   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHHostname
	I0927 00:38:24.889796   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:24.890201   34022 main.go:141] libmachine: (ha-631834-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:25:39", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:38:15 +0000 UTC Type:0 Mac:52:54:00:4c:25:39 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-631834-m03 Clientid:01:52:54:00:4c:25:39}
	I0927 00:38:24.890231   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined IP address 192.168.39.92 and MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:24.890378   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHPort
	I0927 00:38:24.890525   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHKeyPath
	I0927 00:38:24.890673   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHUsername
	I0927 00:38:24.890757   34022 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m03/id_rsa Username:docker}
	I0927 00:38:24.974577   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0927 00:38:24.974640   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0927 00:38:24.998800   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0927 00:38:24.998882   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0927 00:38:25.023015   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0927 00:38:25.023097   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0927 00:38:25.047091   34022 provision.go:87] duration metric: took 255.040854ms to configureAuth
	I0927 00:38:25.047129   34022 buildroot.go:189] setting minikube options for container-runtime
	I0927 00:38:25.047386   34022 config.go:182] Loaded profile config "ha-631834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 00:38:25.047470   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHHostname
	I0927 00:38:25.050122   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:25.050450   34022 main.go:141] libmachine: (ha-631834-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:25:39", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:38:15 +0000 UTC Type:0 Mac:52:54:00:4c:25:39 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-631834-m03 Clientid:01:52:54:00:4c:25:39}
	I0927 00:38:25.050478   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined IP address 192.168.39.92 and MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:25.050639   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHPort
	I0927 00:38:25.050791   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHKeyPath
	I0927 00:38:25.050936   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHKeyPath
	I0927 00:38:25.051044   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHUsername
	I0927 00:38:25.051180   34022 main.go:141] libmachine: Using SSH client type: native
	I0927 00:38:25.051392   34022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0927 00:38:25.051410   34022 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 00:38:25.271341   34022 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 00:38:25.271367   34022 main.go:141] libmachine: Checking connection to Docker...
	I0927 00:38:25.271379   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetURL
	I0927 00:38:25.272505   34022 main.go:141] libmachine: (ha-631834-m03) DBG | Using libvirt version 6000000
	I0927 00:38:25.274516   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:25.274843   34022 main.go:141] libmachine: (ha-631834-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:25:39", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:38:15 +0000 UTC Type:0 Mac:52:54:00:4c:25:39 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-631834-m03 Clientid:01:52:54:00:4c:25:39}
	I0927 00:38:25.274868   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined IP address 192.168.39.92 and MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:25.275000   34022 main.go:141] libmachine: Docker is up and running!
	I0927 00:38:25.275010   34022 main.go:141] libmachine: Reticulating splines...
	I0927 00:38:25.275018   34022 client.go:171] duration metric: took 24.330841027s to LocalClient.Create
	I0927 00:38:25.275044   34022 start.go:167] duration metric: took 24.330903271s to libmachine.API.Create "ha-631834"
	I0927 00:38:25.275059   34022 start.go:293] postStartSetup for "ha-631834-m03" (driver="kvm2")
	I0927 00:38:25.275078   34022 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 00:38:25.275102   34022 main.go:141] libmachine: (ha-631834-m03) Calling .DriverName
	I0927 00:38:25.275329   34022 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 00:38:25.275358   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHHostname
	I0927 00:38:25.277447   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:25.277789   34022 main.go:141] libmachine: (ha-631834-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:25:39", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:38:15 +0000 UTC Type:0 Mac:52:54:00:4c:25:39 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-631834-m03 Clientid:01:52:54:00:4c:25:39}
	I0927 00:38:25.277809   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined IP address 192.168.39.92 and MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:25.277981   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHPort
	I0927 00:38:25.278138   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHKeyPath
	I0927 00:38:25.278294   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHUsername
	I0927 00:38:25.278392   34022 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m03/id_rsa Username:docker}
	I0927 00:38:25.363118   34022 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 00:38:25.367416   34022 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 00:38:25.367440   34022 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/addons for local assets ...
	I0927 00:38:25.367494   34022 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/files for local assets ...
	I0927 00:38:25.367565   34022 filesync.go:149] local asset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> 221382.pem in /etc/ssl/certs
	I0927 00:38:25.367574   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> /etc/ssl/certs/221382.pem
	I0927 00:38:25.367651   34022 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 00:38:25.377433   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /etc/ssl/certs/221382.pem (1708 bytes)
	I0927 00:38:25.402022   34022 start.go:296] duration metric: took 126.949525ms for postStartSetup
	I0927 00:38:25.402069   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetConfigRaw
	I0927 00:38:25.402606   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetIP
	I0927 00:38:25.405298   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:25.405691   34022 main.go:141] libmachine: (ha-631834-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:25:39", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:38:15 +0000 UTC Type:0 Mac:52:54:00:4c:25:39 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-631834-m03 Clientid:01:52:54:00:4c:25:39}
	I0927 00:38:25.405718   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined IP address 192.168.39.92 and MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:25.406069   34022 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/config.json ...
	I0927 00:38:25.406300   34022 start.go:128] duration metric: took 24.480456335s to createHost
	I0927 00:38:25.406329   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHHostname
	I0927 00:38:25.408691   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:25.409060   34022 main.go:141] libmachine: (ha-631834-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:25:39", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:38:15 +0000 UTC Type:0 Mac:52:54:00:4c:25:39 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-631834-m03 Clientid:01:52:54:00:4c:25:39}
	I0927 00:38:25.409076   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined IP address 192.168.39.92 and MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:25.409274   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHPort
	I0927 00:38:25.409443   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHKeyPath
	I0927 00:38:25.409610   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHKeyPath
	I0927 00:38:25.409745   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHUsername
	I0927 00:38:25.409905   34022 main.go:141] libmachine: Using SSH client type: native
	I0927 00:38:25.410111   34022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0927 00:38:25.410124   34022 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 00:38:25.520084   34022 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727397505.498121645
	
	I0927 00:38:25.520105   34022 fix.go:216] guest clock: 1727397505.498121645
	I0927 00:38:25.520112   34022 fix.go:229] Guest: 2024-09-27 00:38:25.498121645 +0000 UTC Remote: 2024-09-27 00:38:25.406314622 +0000 UTC m=+144.706814205 (delta=91.807023ms)
	I0927 00:38:25.520126   34022 fix.go:200] guest clock delta is within tolerance: 91.807023ms
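The clock check runs date +%s.%N on the guest and compares the result with the host's timestamp; the ~92 ms delta above is accepted because it falls inside the tolerance. A sketch of that comparison, where the 2 s tolerance is an assumption for illustration rather than minikube's configured value:

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// clockDelta parses the guest's "date +%s.%N" output and returns how far
// the guest clock is from the supplied host time.
func clockDelta(guestOutput string, hostNow time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOutput, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(hostNow), nil
}

func main() {
	host := time.Unix(0, int64(1727397505.406314622*1e9))
	delta, err := clockDelta("1727397505.498121645", host)
	if err != nil {
		panic(err)
	}
	// 2 s is an assumed tolerance for this sketch, not minikube's setting.
	const tolerance = 2 * time.Second
	if math.Abs(float64(delta)) < float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance\n", delta)
	}
}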
	I0927 00:38:25.520131   34022 start.go:83] releasing machines lock for "ha-631834-m03", held for 24.594409944s
	I0927 00:38:25.520153   34022 main.go:141] libmachine: (ha-631834-m03) Calling .DriverName
	I0927 00:38:25.520388   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetIP
	I0927 00:38:25.523018   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:25.523441   34022 main.go:141] libmachine: (ha-631834-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:25:39", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:38:15 +0000 UTC Type:0 Mac:52:54:00:4c:25:39 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-631834-m03 Clientid:01:52:54:00:4c:25:39}
	I0927 00:38:25.523469   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined IP address 192.168.39.92 and MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:25.525631   34022 out.go:177] * Found network options:
	I0927 00:38:25.527157   34022 out.go:177]   - NO_PROXY=192.168.39.4,192.168.39.184
	W0927 00:38:25.528442   34022 proxy.go:119] fail to check proxy env: Error ip not in block
	W0927 00:38:25.528464   34022 proxy.go:119] fail to check proxy env: Error ip not in block
	I0927 00:38:25.528477   34022 main.go:141] libmachine: (ha-631834-m03) Calling .DriverName
	I0927 00:38:25.528981   34022 main.go:141] libmachine: (ha-631834-m03) Calling .DriverName
	I0927 00:38:25.529153   34022 main.go:141] libmachine: (ha-631834-m03) Calling .DriverName
	I0927 00:38:25.529222   34022 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 00:38:25.529262   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHHostname
	W0927 00:38:25.529362   34022 proxy.go:119] fail to check proxy env: Error ip not in block
	W0927 00:38:25.529390   34022 proxy.go:119] fail to check proxy env: Error ip not in block
	I0927 00:38:25.529477   34022 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 00:38:25.529503   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHHostname
	I0927 00:38:25.532028   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:25.532225   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:25.532427   34022 main.go:141] libmachine: (ha-631834-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:25:39", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:38:15 +0000 UTC Type:0 Mac:52:54:00:4c:25:39 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-631834-m03 Clientid:01:52:54:00:4c:25:39}
	I0927 00:38:25.532453   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined IP address 192.168.39.92 and MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:25.532602   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHPort
	I0927 00:38:25.532629   34022 main.go:141] libmachine: (ha-631834-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:25:39", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:38:15 +0000 UTC Type:0 Mac:52:54:00:4c:25:39 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-631834-m03 Clientid:01:52:54:00:4c:25:39}
	I0927 00:38:25.532655   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined IP address 192.168.39.92 and MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:25.532783   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHKeyPath
	I0927 00:38:25.532794   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHPort
	I0927 00:38:25.532975   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHKeyPath
	I0927 00:38:25.532976   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHUsername
	I0927 00:38:25.533132   34022 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m03/id_rsa Username:docker}
	I0927 00:38:25.533194   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHUsername
	I0927 00:38:25.533378   34022 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m03/id_rsa Username:docker}
	I0927 00:38:25.772033   34022 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 00:38:25.777746   34022 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 00:38:25.777803   34022 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 00:38:25.795383   34022 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0927 00:38:25.795403   34022 start.go:495] detecting cgroup driver to use...
	I0927 00:38:25.795486   34022 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 00:38:25.812841   34022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 00:38:25.827240   34022 docker.go:217] disabling cri-docker service (if available) ...
	I0927 00:38:25.827295   34022 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 00:38:25.841149   34022 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 00:38:25.855688   34022 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 00:38:25.975549   34022 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 00:38:26.132600   34022 docker.go:233] disabling docker service ...
	I0927 00:38:26.132671   34022 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 00:38:26.147138   34022 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 00:38:26.160283   34022 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 00:38:26.280885   34022 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 00:38:26.397744   34022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 00:38:26.412063   34022 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 00:38:26.431067   34022 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0927 00:38:26.431183   34022 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:38:26.443586   34022 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 00:38:26.443649   34022 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:38:26.455922   34022 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:38:26.466779   34022 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:38:26.478101   34022 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 00:38:26.489198   34022 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:38:26.499613   34022 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:38:26.517900   34022 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:38:26.528412   34022 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 00:38:26.537702   34022 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0927 00:38:26.537761   34022 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0927 00:38:26.550744   34022 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
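The netfilter step is best effort: when the sysctl read fails with status 255 (no /proc/sys/net/bridge on a fresh guest), the fallback is to load br_netfilter, and IPv4 forwarding is enabled afterwards either way. A sketch using os/exec and the same commands that appear in the log lines above:

package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %w (%s)", name, args, err, out)
	}
	return nil
}

func main() {
	// Verify the bridge netfilter sysctl; on a fresh guest the proc entry
	// may not exist yet, which is the status-255 failure seen above.
	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		fmt.Println("couldn't verify netfilter, loading br_netfilter:", err)
		if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
			fmt.Println("modprobe failed:", err)
		}
	}
	// Enable IPv4 forwarding, mirroring the echo 1 command above.
	if err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
		fmt.Println("could not enable ip_forward:", err)
	}
}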
	I0927 00:38:26.561809   34022 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:38:26.685216   34022 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0927 00:38:26.784033   34022 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 00:38:26.784095   34022 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
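"Will wait 60s for socket path" amounts to polling stat on /var/run/crio/crio.sock until it appears or the deadline passes. A local sketch of the same wait (the 500 ms poll interval is an assumption for illustration):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for the socket file until it exists or the timeout
// elapses, which is what the 60 s wait in the log amounts to.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("crio socket is ready")
}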
	I0927 00:38:26.788971   34022 start.go:563] Will wait 60s for crictl version
	I0927 00:38:26.789022   34022 ssh_runner.go:195] Run: which crictl
	I0927 00:38:26.792579   34022 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 00:38:26.834879   34022 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0927 00:38:26.834941   34022 ssh_runner.go:195] Run: crio --version
	I0927 00:38:26.863131   34022 ssh_runner.go:195] Run: crio --version
	I0927 00:38:26.894968   34022 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0927 00:38:26.896312   34022 out.go:177]   - env NO_PROXY=192.168.39.4
	I0927 00:38:26.897668   34022 out.go:177]   - env NO_PROXY=192.168.39.4,192.168.39.184
	I0927 00:38:26.898968   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetIP
	I0927 00:38:26.901618   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:26.901952   34022 main.go:141] libmachine: (ha-631834-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:25:39", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:38:15 +0000 UTC Type:0 Mac:52:54:00:4c:25:39 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-631834-m03 Clientid:01:52:54:00:4c:25:39}
	I0927 00:38:26.901974   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined IP address 192.168.39.92 and MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:26.902162   34022 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0927 00:38:26.906490   34022 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 00:38:26.920023   34022 mustload.go:65] Loading cluster: ha-631834
	I0927 00:38:26.920246   34022 config.go:182] Loaded profile config "ha-631834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 00:38:26.920507   34022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:38:26.920541   34022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:38:26.934985   34022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44565
	I0927 00:38:26.935403   34022 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:38:26.935900   34022 main.go:141] libmachine: Using API Version  1
	I0927 00:38:26.935918   34022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:38:26.936235   34022 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:38:26.936414   34022 main.go:141] libmachine: (ha-631834) Calling .GetState
	I0927 00:38:26.937691   34022 host.go:66] Checking if "ha-631834" exists ...
	I0927 00:38:26.938068   34022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:38:26.938115   34022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:38:26.952338   34022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38061
	I0927 00:38:26.952802   34022 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:38:26.953261   34022 main.go:141] libmachine: Using API Version  1
	I0927 00:38:26.953279   34022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:38:26.953560   34022 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:38:26.953830   34022 main.go:141] libmachine: (ha-631834) Calling .DriverName
	I0927 00:38:26.953987   34022 certs.go:68] Setting up /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834 for IP: 192.168.39.92
	I0927 00:38:26.954001   34022 certs.go:194] generating shared ca certs ...
	I0927 00:38:26.954018   34022 certs.go:226] acquiring lock for ca certs: {Name:mkdfc5b7e93f77f5ae72cc653545624244421aa5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:38:26.954172   34022 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key
	I0927 00:38:26.954225   34022 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key
	I0927 00:38:26.954237   34022 certs.go:256] generating profile certs ...
	I0927 00:38:26.954335   34022 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/client.key
	I0927 00:38:26.954364   34022 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key.a958d4ea
	I0927 00:38:26.954384   34022 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt.a958d4ea with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.4 192.168.39.184 192.168.39.92 192.168.39.254]
	I0927 00:38:27.144960   34022 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt.a958d4ea ...
	I0927 00:38:27.144988   34022 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt.a958d4ea: {Name:mk59d4f754d56457d5c6119e00c5a757fdf5824a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:38:27.145181   34022 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key.a958d4ea ...
	I0927 00:38:27.145196   34022 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key.a958d4ea: {Name:mkf2be3579ffd641dd346a6606b22a9fb2324402 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:38:27.145291   34022 certs.go:381] copying /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt.a958d4ea -> /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt
	I0927 00:38:27.145420   34022 certs.go:385] copying /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key.a958d4ea -> /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key
	I0927 00:38:27.145538   34022 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.key
	I0927 00:38:27.145552   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0927 00:38:27.145565   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0927 00:38:27.145577   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0927 00:38:27.145592   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0927 00:38:27.145605   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0927 00:38:27.145617   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0927 00:38:27.145628   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0927 00:38:27.163436   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0927 00:38:27.163551   34022 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem (1338 bytes)
	W0927 00:38:27.163586   34022 certs.go:480] ignoring /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138_empty.pem, impossibly tiny 0 bytes
	I0927 00:38:27.163596   34022 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem (1671 bytes)
	I0927 00:38:27.163623   34022 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem (1078 bytes)
	I0927 00:38:27.163645   34022 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem (1123 bytes)
	I0927 00:38:27.163668   34022 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem (1675 bytes)
	I0927 00:38:27.163704   34022 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem (1708 bytes)
	I0927 00:38:27.163738   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem -> /usr/share/ca-certificates/22138.pem
	I0927 00:38:27.163752   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> /usr/share/ca-certificates/221382.pem
	I0927 00:38:27.163764   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:38:27.163800   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:38:27.166902   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:38:27.167258   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:38:27.167285   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:38:27.167436   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:38:27.167603   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:38:27.167715   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:38:27.167869   34022 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834/id_rsa Username:docker}
	I0927 00:38:27.247589   34022 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0927 00:38:27.254078   34022 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0927 00:38:27.266588   34022 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0927 00:38:27.270741   34022 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0927 00:38:27.281840   34022 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0927 00:38:27.286146   34022 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0927 00:38:27.296457   34022 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0927 00:38:27.300347   34022 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0927 00:38:27.311070   34022 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0927 00:38:27.316218   34022 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0927 00:38:27.329482   34022 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0927 00:38:27.338454   34022 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0927 00:38:27.355258   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 00:38:27.382658   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0927 00:38:27.405893   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 00:38:27.428247   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0927 00:38:27.451705   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0927 00:38:27.476691   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0927 00:38:27.501660   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 00:38:27.524660   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0927 00:38:27.551018   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem --> /usr/share/ca-certificates/22138.pem (1338 bytes)
	I0927 00:38:27.574913   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /usr/share/ca-certificates/221382.pem (1708 bytes)
	I0927 00:38:27.597697   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 00:38:27.619996   34022 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0927 00:38:27.636789   34022 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0927 00:38:27.653361   34022 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0927 00:38:27.669541   34022 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0927 00:38:27.686266   34022 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0927 00:38:27.702940   34022 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0927 00:38:27.720590   34022 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0927 00:38:27.736937   34022 ssh_runner.go:195] Run: openssl version
	I0927 00:38:27.742470   34022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221382.pem && ln -fs /usr/share/ca-certificates/221382.pem /etc/ssl/certs/221382.pem"
	I0927 00:38:27.754273   34022 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221382.pem
	I0927 00:38:27.758795   34022 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 00:32 /usr/share/ca-certificates/221382.pem
	I0927 00:38:27.758847   34022 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221382.pem
	I0927 00:38:27.764495   34022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221382.pem /etc/ssl/certs/3ec20f2e.0"
	I0927 00:38:27.776262   34022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 00:38:27.787442   34022 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:38:27.791854   34022 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:16 /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:38:27.791891   34022 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:38:27.797397   34022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 00:38:27.808793   34022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22138.pem && ln -fs /usr/share/ca-certificates/22138.pem /etc/ssl/certs/22138.pem"
	I0927 00:38:27.819765   34022 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22138.pem
	I0927 00:38:27.823906   34022 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 00:32 /usr/share/ca-certificates/22138.pem
	I0927 00:38:27.823953   34022 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22138.pem
	I0927 00:38:27.829381   34022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22138.pem /etc/ssl/certs/51391683.0"
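The three certificate installs above follow one pattern: copy the PEM into /usr/share/ca-certificates, ask openssl for its subject hash, and point /etc/ssl/certs/<hash>.0 at it so OpenSSL-style hashed lookups resolve. A sketch that shells out to openssl for the hash; the path is a placeholder and the program would need root to write the link:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// installCert links /etc/ssl/certs/<subject-hash>.0 to the copied PEM so
// OpenSSL's hashed lookup finds it, similar in effect to the ln -fs above.
func installCert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	// Force-recreate the link (the log instead skips it when test -L succeeds).
	_ = os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println("install failed:", err)
	}
}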
	I0927 00:38:27.840376   34022 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 00:38:27.844373   34022 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0927 00:38:27.844420   34022 kubeadm.go:934] updating node {m03 192.168.39.92 8443 v1.31.1 crio true true} ...
	I0927 00:38:27.844516   34022 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-631834-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.92
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-631834 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 00:38:27.844551   34022 kube-vip.go:115] generating kube-vip config ...
	I0927 00:38:27.844579   34022 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0927 00:38:27.862311   34022 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0927 00:38:27.862375   34022 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
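
Editor's note: the kube-vip static-pod manifest above is generated by minikube (kube-vip.go) with the HA virtual IP, API server port, and load-balancing flag filled in. The sketch below shows how such a manifest can be rendered from a template; the template and struct fields here are illustrative, not minikube's actual implementation.

package main

import (
	"os"
	"text/template"
)

// vipConfig carries the values that vary per cluster in the manifest above.
type vipConfig struct {
	VIP      string // control-plane virtual IP
	Port     string // API server port
	LBEnable bool   // control-plane load balancing (auto-enabled in the log)
}

const manifest = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.8.0
    args: ["manager"]
    env:
    - {name: port, value: "{{.Port}}"}
    - {name: address, value: "{{.VIP}}"}
    - {name: lb_enable, value: "{{.LBEnable}}"}
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(manifest))
	// Values taken from the generated config shown in the log.
	_ = t.Execute(os.Stdout, vipConfig{VIP: "192.168.39.254", Port: "8443", LBEnable: true})
}
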
	I0927 00:38:27.862434   34022 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 00:38:27.872781   34022 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0927 00:38:27.872832   34022 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0927 00:38:27.882613   34022 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0927 00:38:27.882653   34022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 00:38:27.882614   34022 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0927 00:38:27.882718   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0927 00:38:27.882614   34022 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0927 00:38:27.882757   34022 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0927 00:38:27.882780   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0927 00:38:27.882851   34022 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0927 00:38:27.898547   34022 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0927 00:38:27.898582   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0927 00:38:27.898586   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0927 00:38:27.898611   34022 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0927 00:38:27.898635   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0927 00:38:27.898671   34022 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0927 00:38:27.928975   34022 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0927 00:38:27.929019   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
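
Editor's note: the kubeadm/kubectl/kubelet downloads above use a "?checksum=file:<url>.sha256" source, i.e. the fetched binary is verified against the published SHA-256 before it is transferred to the node. A minimal sketch of that verification step, assuming the binary and its .sha256 file are already on disk (the file names are placeholders):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
	"strings"
)

// verify hashes binPath and compares it with the hex digest in sumPath.
func verify(binPath, sumPath string) error {
	f, err := os.Open(binPath)
	if err != nil {
		return err
	}
	defer f.Close()

	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))

	raw, err := os.ReadFile(sumPath)
	if err != nil {
		return err
	}
	// .sha256 files carry the hex digest, optionally followed by the file name.
	fields := strings.Fields(string(raw))
	if len(fields) == 0 {
		return fmt.Errorf("empty checksum file %s", sumPath)
	}
	if got != fields[0] {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, fields[0])
	}
	return nil
}

func main() {
	if err := verify("kubelet", "kubelet.sha256"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
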
	I0927 00:38:28.755845   34022 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0927 00:38:28.766166   34022 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0927 00:38:28.784929   34022 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 00:38:28.802956   34022 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0927 00:38:28.819722   34022 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0927 00:38:28.823558   34022 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 00:38:28.836368   34022 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:38:28.952315   34022 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 00:38:28.969758   34022 host.go:66] Checking if "ha-631834" exists ...
	I0927 00:38:28.970098   34022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:38:28.970147   34022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:38:28.986122   34022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36333
	I0927 00:38:28.986560   34022 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:38:28.987020   34022 main.go:141] libmachine: Using API Version  1
	I0927 00:38:28.987038   34022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:38:28.987386   34022 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:38:28.987567   34022 main.go:141] libmachine: (ha-631834) Calling .DriverName
	I0927 00:38:28.987723   34022 start.go:317] joinCluster: &{Name:ha-631834 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-631834 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.184 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.92 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 00:38:28.987854   34022 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0927 00:38:28.987874   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:38:28.991221   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:38:28.991756   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:38:28.991779   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:38:28.991933   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:38:28.992065   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:38:28.992196   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:38:28.992330   34022 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834/id_rsa Username:docker}
	I0927 00:38:29.166799   34022 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.92 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 00:38:29.166840   34022 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token nyp4wh.a7l7uv1svmghw4iw --discovery-token-ca-cert-hash sha256:e8f2b64245b2533f2f6907f851e4fd61c8df67374e05cb5be25be73a61920f8e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-631834-m03 --control-plane --apiserver-advertise-address=192.168.39.92 --apiserver-bind-port=8443"
	I0927 00:38:50.894049   34022 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token nyp4wh.a7l7uv1svmghw4iw --discovery-token-ca-cert-hash sha256:e8f2b64245b2533f2f6907f851e4fd61c8df67374e05cb5be25be73a61920f8e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-631834-m03 --control-plane --apiserver-advertise-address=192.168.39.92 --apiserver-bind-port=8443": (21.727186901s)
	I0927 00:38:50.894086   34022 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0927 00:38:51.430363   34022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-631834-m03 minikube.k8s.io/updated_at=2024_09_27T00_38_51_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625 minikube.k8s.io/name=ha-631834 minikube.k8s.io/primary=false
	I0927 00:38:51.580467   34022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-631834-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0927 00:38:51.702639   34022 start.go:319] duration metric: took 22.714914062s to joinCluster
	I0927 00:38:51.702703   34022 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.92 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 00:38:51.703011   34022 config.go:182] Loaded profile config "ha-631834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 00:38:51.703981   34022 out.go:177] * Verifying Kubernetes components...
	I0927 00:38:51.706308   34022 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:38:51.993118   34022 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 00:38:52.039442   34022 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 00:38:52.039732   34022 kapi.go:59] client config for ha-631834: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/client.crt", KeyFile:"/home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/client.key", CAFile:"/home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f68560), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0927 00:38:52.039793   34022 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.4:8443
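
Editor's note: the warning above shows the test loading the kubeconfig (which points at the HA virtual IP 192.168.39.254) and then overriding the host with a concrete control-plane endpoint, 192.168.39.4:8443. A rough client-go sketch of that pattern, assuming the same kubeconfig path and node name from the log; this is illustrative, not minikube's implementation.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19711-14935/kubeconfig")
	if err != nil {
		panic(err)
	}
	// The kubeconfig targets the HA VIP; talk to one real apiserver endpoint
	// instead while the VIP is still converging.
	cfg.Host = "https://192.168.39.4:8443"

	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := client.CoreV1().Nodes().Get(context.Background(), "ha-631834-m03", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println(node.Name, node.Status.Conditions)
}
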
	I0927 00:38:52.040085   34022 node_ready.go:35] waiting up to 6m0s for node "ha-631834-m03" to be "Ready" ...
	I0927 00:38:52.040186   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:38:52.040198   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:52.040211   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:52.040218   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:52.044122   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:38:52.540842   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:38:52.540865   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:52.540875   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:52.540880   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:52.544531   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:38:53.040343   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:38:53.040364   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:53.040376   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:53.040380   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:53.043889   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:38:53.540829   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:38:53.540853   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:53.540865   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:53.540871   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:53.544102   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:38:54.040457   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:38:54.040486   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:54.040498   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:54.040508   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:54.044080   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:38:54.044692   34022 node_ready.go:53] node "ha-631834-m03" has status "Ready":"False"
	I0927 00:38:54.540544   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:38:54.540565   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:54.540577   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:54.540583   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:54.544108   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:38:55.040995   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:38:55.041014   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:55.041022   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:55.041026   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:55.044186   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:38:55.541131   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:38:55.541149   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:55.541155   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:55.541159   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:55.544421   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:38:56.040678   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:38:56.040699   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:56.040717   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:56.040724   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:56.044252   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:38:56.044964   34022 node_ready.go:53] node "ha-631834-m03" has status "Ready":"False"
	I0927 00:38:56.540268   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:38:56.540298   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:56.540320   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:56.540326   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:56.544327   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:38:57.041238   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:38:57.041258   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:57.041266   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:57.041270   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:57.044588   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:38:57.541127   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:38:57.541150   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:57.541158   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:57.541162   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:57.545682   34022 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 00:38:58.040341   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:38:58.040358   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:58.040365   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:58.040370   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:58.044102   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:38:58.541229   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:38:58.541250   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:58.541260   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:58.541266   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:58.545253   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:38:58.545941   34022 node_ready.go:53] node "ha-631834-m03" has status "Ready":"False"
	I0927 00:38:59.040786   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:38:59.040810   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:59.040821   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:59.040826   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:59.044532   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:38:59.540476   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:38:59.540500   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:59.540512   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:59.540518   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:59.546237   34022 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0927 00:39:00.040296   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:00.040324   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:00.040333   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:00.040340   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:00.043125   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:39:00.541170   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:00.541190   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:00.541199   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:00.541204   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:00.544199   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:39:01.041077   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:01.041108   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:01.041120   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:01.041128   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:01.044323   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:01.044952   34022 node_ready.go:53] node "ha-631834-m03" has status "Ready":"False"
	I0927 00:39:01.540257   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:01.540278   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:01.540286   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:01.540290   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:01.543567   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:02.040508   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:02.040527   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:02.040534   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:02.040538   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:02.043399   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:39:02.540909   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:02.540930   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:02.540940   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:02.540944   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:02.544479   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:03.040484   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:03.040506   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:03.040516   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:03.040524   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:03.043891   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:03.540961   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:03.540985   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:03.540998   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:03.541004   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:03.544529   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:03.545350   34022 node_ready.go:53] node "ha-631834-m03" has status "Ready":"False"
	I0927 00:39:04.041102   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:04.041123   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:04.041131   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:04.041135   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:04.046364   34022 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0927 00:39:04.541106   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:04.541126   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:04.541134   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:04.541143   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:04.546084   34022 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 00:39:05.040284   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:05.040305   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:05.040316   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:05.040321   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:05.044656   34022 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 00:39:05.540520   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:05.540541   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:05.540549   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:05.540553   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:05.543933   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:06.040933   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:06.040960   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:06.040968   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:06.040972   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:06.044262   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:06.045234   34022 node_ready.go:53] node "ha-631834-m03" has status "Ready":"False"
	I0927 00:39:06.540620   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:06.540642   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:06.540650   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:06.540655   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:06.543993   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:07.040742   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:07.040762   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:07.040769   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:07.040773   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:07.044207   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:07.541217   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:07.541238   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:07.541246   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:07.541250   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:07.544549   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:08.040522   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:08.040543   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:08.040551   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:08.040555   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:08.044379   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:08.540580   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:08.540599   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:08.540610   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:08.540614   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:08.543564   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:39:08.544141   34022 node_ready.go:53] node "ha-631834-m03" has status "Ready":"False"
	I0927 00:39:09.041048   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:09.041080   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:09.041090   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:09.041096   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:09.044654   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:09.540899   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:09.540923   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:09.540933   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:09.540937   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:09.544281   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:10.040837   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:10.040856   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:10.040864   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:10.040868   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:10.044767   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:10.540532   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:10.540551   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:10.540558   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:10.540560   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:10.543816   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:10.544420   34022 node_ready.go:53] node "ha-631834-m03" has status "Ready":"False"
	I0927 00:39:11.041033   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:11.041053   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:11.041062   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:11.041066   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:11.044226   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:11.044735   34022 node_ready.go:49] node "ha-631834-m03" has status "Ready":"True"
	I0927 00:39:11.044751   34022 node_ready.go:38] duration metric: took 19.004641333s for node "ha-631834-m03" to be "Ready" ...
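
Editor's note: the loop above polls GET /api/v1/nodes/ha-631834-m03 roughly every 500ms for up to 6 minutes until the Ready condition turns True (about 19s here). A stand-alone sketch of that wait pattern using only the standard library; checkReady is a stand-in for the real status check.

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// waitForReady calls checkReady every interval until it returns true, an
// error, or the context deadline expires.
func waitForReady(ctx context.Context, interval time.Duration, checkReady func(context.Context) (bool, error)) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		ready, err := checkReady(ctx)
		if err != nil {
			return err
		}
		if ready {
			return nil
		}
		select {
		case <-ctx.Done():
			return errors.New("timed out waiting for node to be Ready")
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	start := time.Now()
	err := waitForReady(ctx, 500*time.Millisecond, func(ctx context.Context) (bool, error) {
		// Placeholder: in the real flow this is a GET against
		// /api/v1/nodes/<name> followed by a check of the Ready condition.
		return time.Since(start) > 2*time.Second, nil
	})
	fmt.Println("ready:", err == nil, "after", time.Since(start).Round(time.Millisecond))
}
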
	I0927 00:39:11.044759   34022 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 00:39:11.044826   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods
	I0927 00:39:11.044836   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:11.044843   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:11.044847   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:11.050350   34022 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0927 00:39:11.057101   34022 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-479dv" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:11.057173   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-479dv
	I0927 00:39:11.057179   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:11.057186   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:11.057192   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:11.059921   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:39:11.060545   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:39:11.060562   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:11.060568   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:11.060571   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:11.063003   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:39:11.063383   34022 pod_ready.go:93] pod "coredns-7c65d6cfc9-479dv" in "kube-system" namespace has status "Ready":"True"
	I0927 00:39:11.063397   34022 pod_ready.go:82] duration metric: took 6.275685ms for pod "coredns-7c65d6cfc9-479dv" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:11.063405   34022 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kg8kf" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:11.063458   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-kg8kf
	I0927 00:39:11.063466   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:11.063472   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:11.063477   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:11.065828   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:39:11.066447   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:39:11.066464   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:11.066475   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:11.066480   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:11.068743   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:39:11.069387   34022 pod_ready.go:93] pod "coredns-7c65d6cfc9-kg8kf" in "kube-system" namespace has status "Ready":"True"
	I0927 00:39:11.069408   34022 pod_ready.go:82] duration metric: took 5.996652ms for pod "coredns-7c65d6cfc9-kg8kf" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:11.069420   34022 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-631834" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:11.069482   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/etcd-ha-631834
	I0927 00:39:11.069493   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:11.069502   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:11.069510   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:11.071542   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:39:11.072035   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:39:11.072047   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:11.072054   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:11.072059   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:11.074524   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:39:11.075087   34022 pod_ready.go:93] pod "etcd-ha-631834" in "kube-system" namespace has status "Ready":"True"
	I0927 00:39:11.075106   34022 pod_ready.go:82] duration metric: took 5.678675ms for pod "etcd-ha-631834" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:11.075115   34022 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-631834-m02" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:11.075158   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/etcd-ha-631834-m02
	I0927 00:39:11.075166   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:11.075172   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:11.075177   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:11.077457   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:39:11.078140   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:39:11.078155   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:11.078162   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:11.078166   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:11.080308   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:39:11.080796   34022 pod_ready.go:93] pod "etcd-ha-631834-m02" in "kube-system" namespace has status "Ready":"True"
	I0927 00:39:11.080816   34022 pod_ready.go:82] duration metric: took 5.694556ms for pod "etcd-ha-631834-m02" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:11.080827   34022 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-631834-m03" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:11.241112   34022 request.go:632] Waited for 160.229406ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/etcd-ha-631834-m03
	I0927 00:39:11.241190   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/etcd-ha-631834-m03
	I0927 00:39:11.241202   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:11.241213   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:11.241221   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:11.244515   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:11.441468   34022 request.go:632] Waited for 196.217118ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:11.441557   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:11.441564   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:11.441575   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:11.441580   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:11.445651   34022 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 00:39:11.446311   34022 pod_ready.go:93] pod "etcd-ha-631834-m03" in "kube-system" namespace has status "Ready":"True"
	I0927 00:39:11.446338   34022 pod_ready.go:82] duration metric: took 365.498163ms for pod "etcd-ha-631834-m03" in "kube-system" namespace to be "Ready" ...
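
Editor's note: the "Waited for ... due to client-side throttling, not priority and fairness" messages above come from client-go rate-limiting its own requests (QPS/Burst) before they reach the apiserver. The sketch below reproduces that effect with golang.org/x/time/rate; the QPS and burst values are illustrative, not client-go's actual defaults.

package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	// 5 requests/second with a burst of 2: once the burst is spent, each
	// further request waits for a token, just like the throttling in the log.
	limiter := rate.NewLimiter(rate.Limit(5), 2)
	ctx := context.Background()

	for i := 0; i < 6; i++ {
		start := time.Now()
		if err := limiter.Wait(ctx); err != nil {
			panic(err)
		}
		if d := time.Since(start); d > time.Millisecond {
			fmt.Printf("request %d waited %s due to client-side throttling\n", i, d.Round(time.Millisecond))
		} else {
			fmt.Printf("request %d sent immediately\n", i)
		}
	}
}
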
	I0927 00:39:11.446361   34022 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-631834" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:11.641363   34022 request.go:632] Waited for 194.923565ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-631834
	I0927 00:39:11.641498   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-631834
	I0927 00:39:11.641520   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:11.641531   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:11.641539   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:11.646049   34022 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 00:39:11.841994   34022 request.go:632] Waited for 195.392366ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:39:11.842046   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:39:11.842053   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:11.842060   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:11.842064   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:11.845122   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:11.845566   34022 pod_ready.go:93] pod "kube-apiserver-ha-631834" in "kube-system" namespace has status "Ready":"True"
	I0927 00:39:11.845583   34022 pod_ready.go:82] duration metric: took 399.214359ms for pod "kube-apiserver-ha-631834" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:11.845596   34022 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-631834-m02" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:12.041393   34022 request.go:632] Waited for 195.729881ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-631834-m02
	I0927 00:39:12.041458   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-631834-m02
	I0927 00:39:12.041466   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:12.041478   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:12.041488   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:12.044854   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:12.241780   34022 request.go:632] Waited for 196.198597ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:39:12.241855   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:39:12.241862   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:12.241870   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:12.241880   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:12.245475   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:12.246124   34022 pod_ready.go:93] pod "kube-apiserver-ha-631834-m02" in "kube-system" namespace has status "Ready":"True"
	I0927 00:39:12.246146   34022 pod_ready.go:82] duration metric: took 400.543035ms for pod "kube-apiserver-ha-631834-m02" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:12.246162   34022 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-631834-m03" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:12.441106   34022 request.go:632] Waited for 194.872848ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-631834-m03
	I0927 00:39:12.441163   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-631834-m03
	I0927 00:39:12.441169   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:12.441177   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:12.441181   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:12.444679   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:12.641949   34022 request.go:632] Waited for 196.340732ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:12.642006   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:12.642011   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:12.642019   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:12.642026   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:12.645583   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:12.646336   34022 pod_ready.go:93] pod "kube-apiserver-ha-631834-m03" in "kube-system" namespace has status "Ready":"True"
	I0927 00:39:12.646359   34022 pod_ready.go:82] duration metric: took 400.189129ms for pod "kube-apiserver-ha-631834-m03" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:12.646371   34022 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-631834" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:12.841500   34022 request.go:632] Waited for 195.047763ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-631834
	I0927 00:39:12.841554   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-631834
	I0927 00:39:12.841559   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:12.841565   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:12.841570   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:12.844885   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:13.042011   34022 request.go:632] Waited for 196.365336ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:39:13.042068   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:39:13.042075   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:13.042086   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:13.042094   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:13.045463   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:13.046083   34022 pod_ready.go:93] pod "kube-controller-manager-ha-631834" in "kube-system" namespace has status "Ready":"True"
	I0927 00:39:13.046099   34022 pod_ready.go:82] duration metric: took 399.717332ms for pod "kube-controller-manager-ha-631834" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:13.046117   34022 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-631834-m02" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:13.241273   34022 request.go:632] Waited for 195.079725ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-631834-m02
	I0927 00:39:13.241342   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-631834-m02
	I0927 00:39:13.241350   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:13.241360   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:13.241371   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:13.244557   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:13.441283   34022 request.go:632] Waited for 196.073724ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:39:13.441336   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:39:13.441342   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:13.441348   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:13.441353   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:13.444943   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:13.445609   34022 pod_ready.go:93] pod "kube-controller-manager-ha-631834-m02" in "kube-system" namespace has status "Ready":"True"
	I0927 00:39:13.445625   34022 pod_ready.go:82] duration metric: took 399.502321ms for pod "kube-controller-manager-ha-631834-m02" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:13.445635   34022 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-631834-m03" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:13.641730   34022 request.go:632] Waited for 196.022446ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-631834-m03
	I0927 00:39:13.641795   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-631834-m03
	I0927 00:39:13.641804   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:13.641816   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:13.641825   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:13.645301   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:13.841195   34022 request.go:632] Waited for 195.27161ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:13.841276   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:13.841286   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:13.841298   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:13.841306   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:13.844228   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:39:13.844820   34022 pod_ready.go:93] pod "kube-controller-manager-ha-631834-m03" in "kube-system" namespace has status "Ready":"True"
	I0927 00:39:13.844837   34022 pod_ready.go:82] duration metric: took 399.196459ms for pod "kube-controller-manager-ha-631834-m03" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:13.844849   34022 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-22lcj" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:14.041259   34022 request.go:632] Waited for 196.353447ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-proxy-22lcj
	I0927 00:39:14.041346   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-proxy-22lcj
	I0927 00:39:14.041361   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:14.041372   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:14.041381   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:14.044594   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:14.241701   34022 request.go:632] Waited for 196.342418ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:14.241756   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:14.241771   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:14.241779   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:14.241786   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:14.244937   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:14.245574   34022 pod_ready.go:93] pod "kube-proxy-22lcj" in "kube-system" namespace has status "Ready":"True"
	I0927 00:39:14.245593   34022 pod_ready.go:82] duration metric: took 400.737693ms for pod "kube-proxy-22lcj" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:14.245602   34022 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7n244" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:14.441662   34022 request.go:632] Waited for 195.987258ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7n244
	I0927 00:39:14.441711   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7n244
	I0927 00:39:14.441717   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:14.441723   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:14.441727   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:14.444886   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:14.642030   34022 request.go:632] Waited for 196.372014ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:39:14.642111   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:39:14.642118   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:14.642125   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:14.642129   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:14.645645   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:14.646260   34022 pod_ready.go:93] pod "kube-proxy-7n244" in "kube-system" namespace has status "Ready":"True"
	I0927 00:39:14.646278   34022 pod_ready.go:82] duration metric: took 400.670776ms for pod "kube-proxy-7n244" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:14.646288   34022 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-x2hvh" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:14.841368   34022 request.go:632] Waited for 195.014242ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x2hvh
	I0927 00:39:14.841454   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x2hvh
	I0927 00:39:14.841463   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:14.841470   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:14.841478   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:14.844791   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:15.041743   34022 request.go:632] Waited for 196.305022ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:39:15.041798   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:39:15.041803   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:15.041810   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:15.041816   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:15.045475   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:15.045878   34022 pod_ready.go:93] pod "kube-proxy-x2hvh" in "kube-system" namespace has status "Ready":"True"
	I0927 00:39:15.045893   34022 pod_ready.go:82] duration metric: took 399.599097ms for pod "kube-proxy-x2hvh" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:15.045902   34022 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-631834" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:15.242003   34022 request.go:632] Waited for 196.041536ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-631834
	I0927 00:39:15.242079   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-631834
	I0927 00:39:15.242093   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:15.242103   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:15.242113   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:15.246380   34022 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 00:39:15.441144   34022 request.go:632] Waited for 194.281274ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:39:15.441219   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:39:15.441224   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:15.441235   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:15.441240   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:15.444769   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:15.445492   34022 pod_ready.go:93] pod "kube-scheduler-ha-631834" in "kube-system" namespace has status "Ready":"True"
	I0927 00:39:15.445508   34022 pod_ready.go:82] duration metric: took 399.601315ms for pod "kube-scheduler-ha-631834" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:15.445517   34022 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-631834-m02" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:15.641668   34022 request.go:632] Waited for 196.083523ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-631834-m02
	I0927 00:39:15.641741   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-631834-m02
	I0927 00:39:15.641746   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:15.641753   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:15.641757   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:15.645029   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:15.841624   34022 request.go:632] Waited for 196.133411ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:39:15.841705   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:39:15.841713   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:15.841721   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:15.841725   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:15.845075   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:15.845562   34022 pod_ready.go:93] pod "kube-scheduler-ha-631834-m02" in "kube-system" namespace has status "Ready":"True"
	I0927 00:39:15.845579   34022 pod_ready.go:82] duration metric: took 400.056155ms for pod "kube-scheduler-ha-631834-m02" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:15.845590   34022 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-631834-m03" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:16.041217   34022 request.go:632] Waited for 195.564347ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-631834-m03
	I0927 00:39:16.041293   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-631834-m03
	I0927 00:39:16.041302   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:16.041310   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:16.041316   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:16.044981   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:16.241893   34022 request.go:632] Waited for 196.354511ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:16.241965   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:16.241973   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:16.241981   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:16.241990   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:16.245440   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:16.245881   34022 pod_ready.go:93] pod "kube-scheduler-ha-631834-m03" in "kube-system" namespace has status "Ready":"True"
	I0927 00:39:16.245900   34022 pod_ready.go:82] duration metric: took 400.302015ms for pod "kube-scheduler-ha-631834-m03" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:16.245911   34022 pod_ready.go:39] duration metric: took 5.201141408s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 00:39:16.245931   34022 api_server.go:52] waiting for apiserver process to appear ...
	I0927 00:39:16.245980   34022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 00:39:16.264448   34022 api_server.go:72] duration metric: took 24.561705447s to wait for apiserver process to appear ...
	I0927 00:39:16.264471   34022 api_server.go:88] waiting for apiserver healthz status ...
	I0927 00:39:16.264489   34022 api_server.go:253] Checking apiserver healthz at https://192.168.39.4:8443/healthz ...
	I0927 00:39:16.270998   34022 api_server.go:279] https://192.168.39.4:8443/healthz returned 200:
	ok
	I0927 00:39:16.271071   34022 round_trippers.go:463] GET https://192.168.39.4:8443/version
	I0927 00:39:16.271077   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:16.271087   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:16.271098   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:16.272010   34022 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0927 00:39:16.272079   34022 api_server.go:141] control plane version: v1.31.1
	I0927 00:39:16.272094   34022 api_server.go:131] duration metric: took 7.617636ms to wait for apiserver health ...
	I0927 00:39:16.272101   34022 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 00:39:16.441376   34022 request.go:632] Waited for 169.205133ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods
	I0927 00:39:16.441450   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods
	I0927 00:39:16.441459   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:16.441467   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:16.441472   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:16.447163   34022 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0927 00:39:16.454723   34022 system_pods.go:59] 24 kube-system pods found
	I0927 00:39:16.454748   34022 system_pods.go:61] "coredns-7c65d6cfc9-479dv" [ee318b64-2274-4106-93ed-9f62151107f1] Running
	I0927 00:39:16.454753   34022 system_pods.go:61] "coredns-7c65d6cfc9-kg8kf" [ee98faac-e03c-427f-9a78-2cf06d2f85cf] Running
	I0927 00:39:16.454757   34022 system_pods.go:61] "etcd-ha-631834" [b8f1f451-d21c-4424-876e-7bd03381c7be] Running
	I0927 00:39:16.454760   34022 system_pods.go:61] "etcd-ha-631834-m02" [940292d8-f09a-4baa-9689-2099794ed736] Running
	I0927 00:39:16.454763   34022 system_pods.go:61] "etcd-ha-631834-m03" [f0a5e835-8705-4555-8b6b-0c7147d76543] Running
	I0927 00:39:16.454767   34022 system_pods.go:61] "kindnet-l6ncl" [3861149b-7c67-4d48-9d24-8fa08aefda61] Running
	I0927 00:39:16.454770   34022 system_pods.go:61] "kindnet-r2qxd" [68a590ef-4e98-409e-8ce3-4d4e3f14ccc1] Running
	I0927 00:39:16.454773   34022 system_pods.go:61] "kindnet-x7kr9" [a4f57dcf-a410-46e7-a539-0ad5f9fb2baf] Running
	I0927 00:39:16.454776   34022 system_pods.go:61] "kube-apiserver-ha-631834" [365182f9-e6fd-40f4-8f9f-a46de26a61d8] Running
	I0927 00:39:16.454779   34022 system_pods.go:61] "kube-apiserver-ha-631834-m02" [bc22191d-9799-4639-8ff2-3fdb3ae97be3] Running
	I0927 00:39:16.454782   34022 system_pods.go:61] "kube-apiserver-ha-631834-m03" [b5978123-4be5-4547-9f7a-17471dd88209] Running
	I0927 00:39:16.454786   34022 system_pods.go:61] "kube-controller-manager-ha-631834" [4b0a02b1-60a5-45bc-b9a0-dd5a0346da3d] Running
	I0927 00:39:16.454790   34022 system_pods.go:61] "kube-controller-manager-ha-631834-m02" [22f26e4f-f220-4682-ba5c-e3131880aab4] Running
	I0927 00:39:16.454793   34022 system_pods.go:61] "kube-controller-manager-ha-631834-m03" [ff5ac84f-5b97-45f7-8bc4-0def81f1a9de] Running
	I0927 00:39:16.454797   34022 system_pods.go:61] "kube-proxy-22lcj" [0bd00be4-643a-41b0-ba0b-3a13f95a3b45] Running
	I0927 00:39:16.454800   34022 system_pods.go:61] "kube-proxy-7n244" [d9fac118-1b31-4cf3-bc21-a4536e45a511] Running
	I0927 00:39:16.454804   34022 system_pods.go:61] "kube-proxy-x2hvh" [81ada94c-89b8-4815-92e9-58edd00ef64f] Running
	I0927 00:39:16.454807   34022 system_pods.go:61] "kube-scheduler-ha-631834" [9e0b9052-8574-406b-987f-2ef799f40533] Running
	I0927 00:39:16.454810   34022 system_pods.go:61] "kube-scheduler-ha-631834-m02" [7952ee5f-18be-4863-a13a-39c4ee7acf29] Running
	I0927 00:39:16.454813   34022 system_pods.go:61] "kube-scheduler-ha-631834-m03" [48ea6dc3-fa35-4c78-8f49-f6cc2797f433] Running
	I0927 00:39:16.454816   34022 system_pods.go:61] "kube-vip-ha-631834" [58aa0bcf-1f78-4ee9-8a7b-18afaf6a634c] Running
	I0927 00:39:16.454819   34022 system_pods.go:61] "kube-vip-ha-631834-m02" [75b23ac9-b5e5-4a90-b5ef-951dd52c1752] Running
	I0927 00:39:16.454822   34022 system_pods.go:61] "kube-vip-ha-631834-m03" [0ffe3c65-482c-49ce-a209-94414f2958b5] Running
	I0927 00:39:16.454828   34022 system_pods.go:61] "storage-provisioner" [dbafe551-2645-4016-83f6-1133824d926d] Running
	I0927 00:39:16.454833   34022 system_pods.go:74] duration metric: took 182.725605ms to wait for pod list to return data ...
	I0927 00:39:16.454840   34022 default_sa.go:34] waiting for default service account to be created ...
	I0927 00:39:16.641200   34022 request.go:632] Waited for 186.296503ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/default/serviceaccounts
	I0927 00:39:16.641254   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/default/serviceaccounts
	I0927 00:39:16.641261   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:16.641270   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:16.641279   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:16.644742   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:16.644853   34022 default_sa.go:45] found service account: "default"
	I0927 00:39:16.644867   34022 default_sa.go:55] duration metric: took 190.018813ms for default service account to be created ...
	I0927 00:39:16.644874   34022 system_pods.go:116] waiting for k8s-apps to be running ...
	I0927 00:39:16.841127   34022 request.go:632] Waited for 196.190225ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods
	I0927 00:39:16.841217   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods
	I0927 00:39:16.841226   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:16.841234   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:16.841242   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:16.846111   34022 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 00:39:16.853202   34022 system_pods.go:86] 24 kube-system pods found
	I0927 00:39:16.853229   34022 system_pods.go:89] "coredns-7c65d6cfc9-479dv" [ee318b64-2274-4106-93ed-9f62151107f1] Running
	I0927 00:39:16.853235   34022 system_pods.go:89] "coredns-7c65d6cfc9-kg8kf" [ee98faac-e03c-427f-9a78-2cf06d2f85cf] Running
	I0927 00:39:16.853239   34022 system_pods.go:89] "etcd-ha-631834" [b8f1f451-d21c-4424-876e-7bd03381c7be] Running
	I0927 00:39:16.853243   34022 system_pods.go:89] "etcd-ha-631834-m02" [940292d8-f09a-4baa-9689-2099794ed736] Running
	I0927 00:39:16.853246   34022 system_pods.go:89] "etcd-ha-631834-m03" [f0a5e835-8705-4555-8b6b-0c7147d76543] Running
	I0927 00:39:16.853249   34022 system_pods.go:89] "kindnet-l6ncl" [3861149b-7c67-4d48-9d24-8fa08aefda61] Running
	I0927 00:39:16.853253   34022 system_pods.go:89] "kindnet-r2qxd" [68a590ef-4e98-409e-8ce3-4d4e3f14ccc1] Running
	I0927 00:39:16.853256   34022 system_pods.go:89] "kindnet-x7kr9" [a4f57dcf-a410-46e7-a539-0ad5f9fb2baf] Running
	I0927 00:39:16.853260   34022 system_pods.go:89] "kube-apiserver-ha-631834" [365182f9-e6fd-40f4-8f9f-a46de26a61d8] Running
	I0927 00:39:16.853263   34022 system_pods.go:89] "kube-apiserver-ha-631834-m02" [bc22191d-9799-4639-8ff2-3fdb3ae97be3] Running
	I0927 00:39:16.853266   34022 system_pods.go:89] "kube-apiserver-ha-631834-m03" [b5978123-4be5-4547-9f7a-17471dd88209] Running
	I0927 00:39:16.853269   34022 system_pods.go:89] "kube-controller-manager-ha-631834" [4b0a02b1-60a5-45bc-b9a0-dd5a0346da3d] Running
	I0927 00:39:16.853273   34022 system_pods.go:89] "kube-controller-manager-ha-631834-m02" [22f26e4f-f220-4682-ba5c-e3131880aab4] Running
	I0927 00:39:16.853276   34022 system_pods.go:89] "kube-controller-manager-ha-631834-m03" [ff5ac84f-5b97-45f7-8bc4-0def81f1a9de] Running
	I0927 00:39:16.853280   34022 system_pods.go:89] "kube-proxy-22lcj" [0bd00be4-643a-41b0-ba0b-3a13f95a3b45] Running
	I0927 00:39:16.853285   34022 system_pods.go:89] "kube-proxy-7n244" [d9fac118-1b31-4cf3-bc21-a4536e45a511] Running
	I0927 00:39:16.853288   34022 system_pods.go:89] "kube-proxy-x2hvh" [81ada94c-89b8-4815-92e9-58edd00ef64f] Running
	I0927 00:39:16.853291   34022 system_pods.go:89] "kube-scheduler-ha-631834" [9e0b9052-8574-406b-987f-2ef799f40533] Running
	I0927 00:39:16.853297   34022 system_pods.go:89] "kube-scheduler-ha-631834-m02" [7952ee5f-18be-4863-a13a-39c4ee7acf29] Running
	I0927 00:39:16.853302   34022 system_pods.go:89] "kube-scheduler-ha-631834-m03" [48ea6dc3-fa35-4c78-8f49-f6cc2797f433] Running
	I0927 00:39:16.853305   34022 system_pods.go:89] "kube-vip-ha-631834" [58aa0bcf-1f78-4ee9-8a7b-18afaf6a634c] Running
	I0927 00:39:16.853308   34022 system_pods.go:89] "kube-vip-ha-631834-m02" [75b23ac9-b5e5-4a90-b5ef-951dd52c1752] Running
	I0927 00:39:16.853311   34022 system_pods.go:89] "kube-vip-ha-631834-m03" [0ffe3c65-482c-49ce-a209-94414f2958b5] Running
	I0927 00:39:16.853314   34022 system_pods.go:89] "storage-provisioner" [dbafe551-2645-4016-83f6-1133824d926d] Running
	I0927 00:39:16.853321   34022 system_pods.go:126] duration metric: took 208.44194ms to wait for k8s-apps to be running ...
	I0927 00:39:16.853329   34022 system_svc.go:44] waiting for kubelet service to be running ....
	I0927 00:39:16.853371   34022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 00:39:16.870246   34022 system_svc.go:56] duration metric: took 16.907091ms WaitForService to wait for kubelet
	I0927 00:39:16.870275   34022 kubeadm.go:582] duration metric: took 25.167539771s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 00:39:16.870292   34022 node_conditions.go:102] verifying NodePressure condition ...
	I0927 00:39:17.041388   34022 request.go:632] Waited for 171.008016ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes
	I0927 00:39:17.041444   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes
	I0927 00:39:17.041452   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:17.041462   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:17.041467   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:17.045727   34022 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 00:39:17.046668   34022 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 00:39:17.046684   34022 node_conditions.go:123] node cpu capacity is 2
	I0927 00:39:17.046709   34022 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 00:39:17.046713   34022 node_conditions.go:123] node cpu capacity is 2
	I0927 00:39:17.046717   34022 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 00:39:17.046720   34022 node_conditions.go:123] node cpu capacity is 2
	I0927 00:39:17.046725   34022 node_conditions.go:105] duration metric: took 176.429276ms to run NodePressure ...
	I0927 00:39:17.046735   34022 start.go:241] waiting for startup goroutines ...
	I0927 00:39:17.046755   34022 start.go:255] writing updated cluster config ...
	I0927 00:39:17.047027   34022 ssh_runner.go:195] Run: rm -f paused
	I0927 00:39:17.097240   34022 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0927 00:39:17.099385   34022 out.go:177] * Done! kubectl is now configured to use "ha-631834" cluster and "default" namespace by default
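	(Editor's note: the repeated pod_ready/round_trippers entries above reflect the usual pattern of polling each pod's Ready condition through the API server until it reports True or a timeout expires. The following is a minimal, illustrative Go sketch of that pattern using client-go; the kubeconfig path and pod name are placeholders for this example and are not taken from minikube's source.)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's Ready condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Placeholder kubeconfig path; adjust for your environment.
		config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}

		// Poll every 500ms for up to 6 minutes, mirroring the 6m0s waits in the log above.
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "kube-scheduler-ha-631834", metav1.GetOptions{})
				if err != nil {
					return false, nil // treat transient errors as "not ready yet" and keep polling
				}
				return isPodReady(pod), nil
			})
		if err != nil {
			panic(err)
		}
		fmt.Println("pod is Ready")
	}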
	
	
	==> CRI-O <==
	Sep 27 00:42:56 ha-631834 crio[661]: time="2024-09-27 00:42:56.023595650Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=177055da-7f26-46a7-9e65-6770fb30256d name=/runtime.v1.RuntimeService/Version
	Sep 27 00:42:56 ha-631834 crio[661]: time="2024-09-27 00:42:56.024671959Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0f806a6d-8293-4543-87bf-5b8ddbf14cec name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:42:56 ha-631834 crio[661]: time="2024-09-27 00:42:56.025123824Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397776025101108,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0f806a6d-8293-4543-87bf-5b8ddbf14cec name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:42:56 ha-631834 crio[661]: time="2024-09-27 00:42:56.025707596Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f2a04b22-1c6d-404a-a0c0-cfd7e1657681 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:42:56 ha-631834 crio[661]: time="2024-09-27 00:42:56.025760338Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f2a04b22-1c6d-404a-a0c0-cfd7e1657681 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:42:56 ha-631834 crio[661]: time="2024-09-27 00:42:56.026008800Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:74dc20e31bc6d7c20e5d68ee7fa69cfe0328a93ccef047ea1ef82155869ad406,PodSandboxId:ebc71356fe8860c5eadadc4bfc35fe223c81b382b7fa4f7400dfdd4e30cca8e9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727397561973673539,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hczmj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 55e4dd58-9193-49ba-a2e8-1c6835898fb1,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c06ebd9099a79e7ccf81acb3dcdfa061f142b4657de196fa50e568e5b299930,PodSandboxId:8f236d02ca028f9009a4efcc28e0562a8b0e8ec154921e53c93e5a527823c39a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727397416531750974,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kg8kf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee98faac-e03c-427f-9a78-2cf06d2f85cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0d4e929a59caa5d6cdfb939587ec81dce00105e7b9350778204b299cf597427,PodSandboxId:2cb3143c36c8e5612e26df2355c120393a34014b84051ee13e5f0f641240ed61,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727397416548806637,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-479dv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
ee318b64-2274-4106-93ed-9f62151107f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9f2637b4124e6d3087dd4a694ebb58286309afd46d561d6051eaaf6ba88126a,PodSandboxId:399bb953593cc2b3743577abae1f7410c1d14dc409256b74dd104c335e4a19a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1727397416493017043,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbafe551-2645-4016-83f6-1133824d926d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:805b55d391308302ebc0884d741fd7ca86ffe2f6feed8bf7ab229f3729f34327,PodSandboxId:7e2d35a1098a1e498cdf730b14a6d4f456431c09085148024bcec56931467462,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17273974
04353382193,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-l6ncl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3861149b-7c67-4d48-9d24-8fa08aefda61,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:182f24ac501b715adc06f080914c11407429e052bc7a726892761dd0a2d3a8e9,PodSandboxId:c0f5b32248925e239a327ed4b6dc2a3da7f10accded478a3ce22050a8fe332d8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727397404131622207,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7n244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9fac118-1b31-4cf3-bc21-a4536e45a511,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:555c7e8f6d5181676711d15bda6aa11fd8d84d9fff0f6e98280c72d5296aefad,PodSandboxId:710e2b00db1780a3cb652fad6898ecff25d5f37f052ba6e0438aa39b3ff2ada9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727397395791349240,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3f83edb960a7290e67f3d1729807ccd,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c88792788fc238aaae860e14a6c44c40020da3356d29223917fe2fb2e8901ac,PodSandboxId:74609d9fcf5f5f8d3b57d4290bf525ef816e716d1438ea25df07d7a697e2bb1a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727397392427437868,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10057dece9752ed428ddf4bfd465bb3d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:536c1c26f6d72525b81ce4c35ed530528a8cd001f4c530cea2e1d722325e76b3,PodSandboxId:de8c10edafaa7ba5a57a5150b492fa19b6a95a38b8f3da7e2385b723a1d4f907,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727397392442661616,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a32cc8b63ea212ed38709daf6762cc1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa717868fa66e6c86747ecfb1ac580a98666975a9c6974d3a1037451ff37576e,PodSandboxId:4a215208b0ed2928db08b226477bc8cf664180903da62b51aaf986d8c212336c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727397392387673966,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71a28d11a5db44bbf2777b262efa1514,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dcaba50a39a2f812258d986d3444002c5a887ee474104a98a69129c21ec40db,PodSandboxId:8e73f2182b892b451dcd1c013adf2711f2f406765703f34eb3d44a64d29e882b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727397392278746359,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-631834,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afee14d1206143c4d719c111467c379b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f2a04b22-1c6d-404a-a0c0-cfd7e1657681 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:42:56 ha-631834 crio[661]: time="2024-09-27 00:42:56.044755180Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=e9def94c-63fa-414a-929d-3166cb0083df name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 27 00:42:56 ha-631834 crio[661]: time="2024-09-27 00:42:56.045038011Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:ebc71356fe8860c5eadadc4bfc35fe223c81b382b7fa4f7400dfdd4e30cca8e9,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-hczmj,Uid:55e4dd58-9193-49ba-a2e8-1c6835898fb1,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727397558330820881,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-hczmj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 55e4dd58-9193-49ba-a2e8-1c6835898fb1,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-27T00:39:18.015402395Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2cb3143c36c8e5612e26df2355c120393a34014b84051ee13e5f0f641240ed61,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-479dv,Uid:ee318b64-2274-4106-93ed-9f62151107f1,Namespace:kube-system,Attempt:0,},State:S
ANDBOX_READY,CreatedAt:1727397416284003471,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-479dv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee318b64-2274-4106-93ed-9f62151107f1,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-27T00:36:55.971385863Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:399bb953593cc2b3743577abae1f7410c1d14dc409256b74dd104c335e4a19a3,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:dbafe551-2645-4016-83f6-1133824d926d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727397416280773776,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbafe551-2645-4016-83f6-1133824d926d,},Annotations:map[string]string{kubec
tl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-27T00:36:55.969309352Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8f236d02ca028f9009a4efcc28e0562a8b0e8ec154921e53c93e5a527823c39a,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-kg8kf,Uid:ee98faac-e03c-427f-9a78-2cf06d2f85cf,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1727397416265889136,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-kg8kf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee98faac-e03c-427f-9a78-2cf06d2f85cf,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-27T00:36:55.959296032Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7e2d35a1098a1e498cdf730b14a6d4f456431c09085148024bcec56931467462,Metadata:&PodSandboxMetadata{Name:kindnet-l6ncl,Uid:3861149b-7c67-4d48-9d24-8fa08aefda61,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727397403804322011,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-l6ncl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3861149b-7c67-4d48-9d24-8fa08aefda61,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotati
ons:map[string]string{kubernetes.io/config.seen: 2024-09-27T00:36:43.462190063Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c0f5b32248925e239a327ed4b6dc2a3da7f10accded478a3ce22050a8fe332d8,Metadata:&PodSandboxMetadata{Name:kube-proxy-7n244,Uid:d9fac118-1b31-4cf3-bc21-a4536e45a511,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727397403803732849,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-7n244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9fac118-1b31-4cf3-bc21-a4536e45a511,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-27T00:36:43.473610313Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:de8c10edafaa7ba5a57a5150b492fa19b6a95a38b8f3da7e2385b723a1d4f907,Metadata:&PodSandboxMetadata{Name:etcd-ha-631834,Uid:2a32cc8b63ea212ed38709daf6762cc1,Namespace:kube-system,Attempt:0,},State:S
ANDBOX_READY,CreatedAt:1727397392159704302,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a32cc8b63ea212ed38709daf6762cc1,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.4:2379,kubernetes.io/config.hash: 2a32cc8b63ea212ed38709daf6762cc1,kubernetes.io/config.seen: 2024-09-27T00:36:31.631709370Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4a215208b0ed2928db08b226477bc8cf664180903da62b51aaf986d8c212336c,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-631834,Uid:71a28d11a5db44bbf2777b262efa1514,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727397392156637222,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kuber
netes.pod.uid: 71a28d11a5db44bbf2777b262efa1514,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 71a28d11a5db44bbf2777b262efa1514,kubernetes.io/config.seen: 2024-09-27T00:36:31.631711688Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:74609d9fcf5f5f8d3b57d4290bf525ef816e716d1438ea25df07d7a697e2bb1a,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-631834,Uid:10057dece9752ed428ddf4bfd465bb3d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727397392123638188,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10057dece9752ed428ddf4bfd465bb3d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 10057dece9752ed428ddf4bfd465bb3d,kubernetes.io/config.seen: 2024-09-27T00:36:31.631712772Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:710e2b00db1780a3cb652f
ad6898ecff25d5f37f052ba6e0438aa39b3ff2ada9,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-631834,Uid:e3f83edb960a7290e67f3d1729807ccd,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727397392115397331,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3f83edb960a7290e67f3d1729807ccd,},Annotations:map[string]string{kubernetes.io/config.hash: e3f83edb960a7290e67f3d1729807ccd,kubernetes.io/config.seen: 2024-09-27T00:36:31.631706084Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8e73f2182b892b451dcd1c013adf2711f2f406765703f34eb3d44a64d29e882b,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-631834,Uid:afee14d1206143c4d719c111467c379b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727397392111883552,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-631834,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: afee14d1206143c4d719c111467c379b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.4:8443,kubernetes.io/config.hash: afee14d1206143c4d719c111467c379b,kubernetes.io/config.seen: 2024-09-27T00:36:31.631710672Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=e9def94c-63fa-414a-929d-3166cb0083df name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 27 00:42:56 ha-631834 crio[661]: time="2024-09-27 00:42:56.045809279Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0e53c6e3-3400-40ee-9810-8026991bc782 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:42:56 ha-631834 crio[661]: time="2024-09-27 00:42:56.045864988Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0e53c6e3-3400-40ee-9810-8026991bc782 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:42:56 ha-631834 crio[661]: time="2024-09-27 00:42:56.046116189Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:74dc20e31bc6d7c20e5d68ee7fa69cfe0328a93ccef047ea1ef82155869ad406,PodSandboxId:ebc71356fe8860c5eadadc4bfc35fe223c81b382b7fa4f7400dfdd4e30cca8e9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727397561973673539,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hczmj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 55e4dd58-9193-49ba-a2e8-1c6835898fb1,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c06ebd9099a79e7ccf81acb3dcdfa061f142b4657de196fa50e568e5b299930,PodSandboxId:8f236d02ca028f9009a4efcc28e0562a8b0e8ec154921e53c93e5a527823c39a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727397416531750974,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kg8kf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee98faac-e03c-427f-9a78-2cf06d2f85cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0d4e929a59caa5d6cdfb939587ec81dce00105e7b9350778204b299cf597427,PodSandboxId:2cb3143c36c8e5612e26df2355c120393a34014b84051ee13e5f0f641240ed61,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727397416548806637,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-479dv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
ee318b64-2274-4106-93ed-9f62151107f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9f2637b4124e6d3087dd4a694ebb58286309afd46d561d6051eaaf6ba88126a,PodSandboxId:399bb953593cc2b3743577abae1f7410c1d14dc409256b74dd104c335e4a19a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1727397416493017043,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbafe551-2645-4016-83f6-1133824d926d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:805b55d391308302ebc0884d741fd7ca86ffe2f6feed8bf7ab229f3729f34327,PodSandboxId:7e2d35a1098a1e498cdf730b14a6d4f456431c09085148024bcec56931467462,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17273974
04353382193,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-l6ncl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3861149b-7c67-4d48-9d24-8fa08aefda61,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:182f24ac501b715adc06f080914c11407429e052bc7a726892761dd0a2d3a8e9,PodSandboxId:c0f5b32248925e239a327ed4b6dc2a3da7f10accded478a3ce22050a8fe332d8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727397404131622207,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7n244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9fac118-1b31-4cf3-bc21-a4536e45a511,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:555c7e8f6d5181676711d15bda6aa11fd8d84d9fff0f6e98280c72d5296aefad,PodSandboxId:710e2b00db1780a3cb652fad6898ecff25d5f37f052ba6e0438aa39b3ff2ada9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727397395791349240,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3f83edb960a7290e67f3d1729807ccd,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c88792788fc238aaae860e14a6c44c40020da3356d29223917fe2fb2e8901ac,PodSandboxId:74609d9fcf5f5f8d3b57d4290bf525ef816e716d1438ea25df07d7a697e2bb1a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727397392427437868,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10057dece9752ed428ddf4bfd465bb3d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:536c1c26f6d72525b81ce4c35ed530528a8cd001f4c530cea2e1d722325e76b3,PodSandboxId:de8c10edafaa7ba5a57a5150b492fa19b6a95a38b8f3da7e2385b723a1d4f907,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727397392442661616,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a32cc8b63ea212ed38709daf6762cc1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa717868fa66e6c86747ecfb1ac580a98666975a9c6974d3a1037451ff37576e,PodSandboxId:4a215208b0ed2928db08b226477bc8cf664180903da62b51aaf986d8c212336c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727397392387673966,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71a28d11a5db44bbf2777b262efa1514,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dcaba50a39a2f812258d986d3444002c5a887ee474104a98a69129c21ec40db,PodSandboxId:8e73f2182b892b451dcd1c013adf2711f2f406765703f34eb3d44a64d29e882b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727397392278746359,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-631834,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afee14d1206143c4d719c111467c379b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0e53c6e3-3400-40ee-9810-8026991bc782 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:42:56 ha-631834 crio[661]: time="2024-09-27 00:42:56.068164248Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fda8ed93-4779-4a43-8e8a-21b7a832f69b name=/runtime.v1.RuntimeService/Version
	Sep 27 00:42:56 ha-631834 crio[661]: time="2024-09-27 00:42:56.068307607Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fda8ed93-4779-4a43-8e8a-21b7a832f69b name=/runtime.v1.RuntimeService/Version
	Sep 27 00:42:56 ha-631834 crio[661]: time="2024-09-27 00:42:56.069770781Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2a7ff41f-2f57-4b7c-b53e-e6d64f27244e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:42:56 ha-631834 crio[661]: time="2024-09-27 00:42:56.070199658Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397776070176895,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2a7ff41f-2f57-4b7c-b53e-e6d64f27244e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:42:56 ha-631834 crio[661]: time="2024-09-27 00:42:56.072064296Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=63200bbe-6f74-4d0e-9ac8-176d49881dc3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:42:56 ha-631834 crio[661]: time="2024-09-27 00:42:56.072132459Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=63200bbe-6f74-4d0e-9ac8-176d49881dc3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:42:56 ha-631834 crio[661]: time="2024-09-27 00:42:56.072424401Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:74dc20e31bc6d7c20e5d68ee7fa69cfe0328a93ccef047ea1ef82155869ad406,PodSandboxId:ebc71356fe8860c5eadadc4bfc35fe223c81b382b7fa4f7400dfdd4e30cca8e9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727397561973673539,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hczmj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 55e4dd58-9193-49ba-a2e8-1c6835898fb1,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c06ebd9099a79e7ccf81acb3dcdfa061f142b4657de196fa50e568e5b299930,PodSandboxId:8f236d02ca028f9009a4efcc28e0562a8b0e8ec154921e53c93e5a527823c39a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727397416531750974,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kg8kf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee98faac-e03c-427f-9a78-2cf06d2f85cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0d4e929a59caa5d6cdfb939587ec81dce00105e7b9350778204b299cf597427,PodSandboxId:2cb3143c36c8e5612e26df2355c120393a34014b84051ee13e5f0f641240ed61,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727397416548806637,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-479dv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
ee318b64-2274-4106-93ed-9f62151107f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9f2637b4124e6d3087dd4a694ebb58286309afd46d561d6051eaaf6ba88126a,PodSandboxId:399bb953593cc2b3743577abae1f7410c1d14dc409256b74dd104c335e4a19a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1727397416493017043,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbafe551-2645-4016-83f6-1133824d926d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:805b55d391308302ebc0884d741fd7ca86ffe2f6feed8bf7ab229f3729f34327,PodSandboxId:7e2d35a1098a1e498cdf730b14a6d4f456431c09085148024bcec56931467462,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17273974
04353382193,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-l6ncl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3861149b-7c67-4d48-9d24-8fa08aefda61,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:182f24ac501b715adc06f080914c11407429e052bc7a726892761dd0a2d3a8e9,PodSandboxId:c0f5b32248925e239a327ed4b6dc2a3da7f10accded478a3ce22050a8fe332d8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727397404131622207,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7n244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9fac118-1b31-4cf3-bc21-a4536e45a511,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:555c7e8f6d5181676711d15bda6aa11fd8d84d9fff0f6e98280c72d5296aefad,PodSandboxId:710e2b00db1780a3cb652fad6898ecff25d5f37f052ba6e0438aa39b3ff2ada9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727397395791349240,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3f83edb960a7290e67f3d1729807ccd,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c88792788fc238aaae860e14a6c44c40020da3356d29223917fe2fb2e8901ac,PodSandboxId:74609d9fcf5f5f8d3b57d4290bf525ef816e716d1438ea25df07d7a697e2bb1a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727397392427437868,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10057dece9752ed428ddf4bfd465bb3d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:536c1c26f6d72525b81ce4c35ed530528a8cd001f4c530cea2e1d722325e76b3,PodSandboxId:de8c10edafaa7ba5a57a5150b492fa19b6a95a38b8f3da7e2385b723a1d4f907,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727397392442661616,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a32cc8b63ea212ed38709daf6762cc1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa717868fa66e6c86747ecfb1ac580a98666975a9c6974d3a1037451ff37576e,PodSandboxId:4a215208b0ed2928db08b226477bc8cf664180903da62b51aaf986d8c212336c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727397392387673966,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71a28d11a5db44bbf2777b262efa1514,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dcaba50a39a2f812258d986d3444002c5a887ee474104a98a69129c21ec40db,PodSandboxId:8e73f2182b892b451dcd1c013adf2711f2f406765703f34eb3d44a64d29e882b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727397392278746359,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-631834,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afee14d1206143c4d719c111467c379b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=63200bbe-6f74-4d0e-9ac8-176d49881dc3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:42:56 ha-631834 crio[661]: time="2024-09-27 00:42:56.111023861Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3d8161d0-8aa5-4056-8be7-830e54aaf20f name=/runtime.v1.RuntimeService/Version
	Sep 27 00:42:56 ha-631834 crio[661]: time="2024-09-27 00:42:56.111353605Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3d8161d0-8aa5-4056-8be7-830e54aaf20f name=/runtime.v1.RuntimeService/Version
	Sep 27 00:42:56 ha-631834 crio[661]: time="2024-09-27 00:42:56.112980917Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0ea78567-be30-47ea-8a08-5fa24214dc28 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:42:56 ha-631834 crio[661]: time="2024-09-27 00:42:56.113434763Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397776113414566,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0ea78567-be30-47ea-8a08-5fa24214dc28 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:42:56 ha-631834 crio[661]: time="2024-09-27 00:42:56.113854481Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8df2cc6b-ffa5-40d1-a030-f3a3af650ef6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:42:56 ha-631834 crio[661]: time="2024-09-27 00:42:56.113906108Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8df2cc6b-ffa5-40d1-a030-f3a3af650ef6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:42:56 ha-631834 crio[661]: time="2024-09-27 00:42:56.114139765Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:74dc20e31bc6d7c20e5d68ee7fa69cfe0328a93ccef047ea1ef82155869ad406,PodSandboxId:ebc71356fe8860c5eadadc4bfc35fe223c81b382b7fa4f7400dfdd4e30cca8e9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727397561973673539,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hczmj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 55e4dd58-9193-49ba-a2e8-1c6835898fb1,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c06ebd9099a79e7ccf81acb3dcdfa061f142b4657de196fa50e568e5b299930,PodSandboxId:8f236d02ca028f9009a4efcc28e0562a8b0e8ec154921e53c93e5a527823c39a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727397416531750974,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kg8kf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee98faac-e03c-427f-9a78-2cf06d2f85cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0d4e929a59caa5d6cdfb939587ec81dce00105e7b9350778204b299cf597427,PodSandboxId:2cb3143c36c8e5612e26df2355c120393a34014b84051ee13e5f0f641240ed61,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727397416548806637,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-479dv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
ee318b64-2274-4106-93ed-9f62151107f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9f2637b4124e6d3087dd4a694ebb58286309afd46d561d6051eaaf6ba88126a,PodSandboxId:399bb953593cc2b3743577abae1f7410c1d14dc409256b74dd104c335e4a19a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1727397416493017043,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbafe551-2645-4016-83f6-1133824d926d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:805b55d391308302ebc0884d741fd7ca86ffe2f6feed8bf7ab229f3729f34327,PodSandboxId:7e2d35a1098a1e498cdf730b14a6d4f456431c09085148024bcec56931467462,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17273974
04353382193,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-l6ncl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3861149b-7c67-4d48-9d24-8fa08aefda61,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:182f24ac501b715adc06f080914c11407429e052bc7a726892761dd0a2d3a8e9,PodSandboxId:c0f5b32248925e239a327ed4b6dc2a3da7f10accded478a3ce22050a8fe332d8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727397404131622207,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7n244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9fac118-1b31-4cf3-bc21-a4536e45a511,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:555c7e8f6d5181676711d15bda6aa11fd8d84d9fff0f6e98280c72d5296aefad,PodSandboxId:710e2b00db1780a3cb652fad6898ecff25d5f37f052ba6e0438aa39b3ff2ada9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727397395791349240,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3f83edb960a7290e67f3d1729807ccd,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c88792788fc238aaae860e14a6c44c40020da3356d29223917fe2fb2e8901ac,PodSandboxId:74609d9fcf5f5f8d3b57d4290bf525ef816e716d1438ea25df07d7a697e2bb1a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727397392427437868,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10057dece9752ed428ddf4bfd465bb3d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:536c1c26f6d72525b81ce4c35ed530528a8cd001f4c530cea2e1d722325e76b3,PodSandboxId:de8c10edafaa7ba5a57a5150b492fa19b6a95a38b8f3da7e2385b723a1d4f907,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727397392442661616,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a32cc8b63ea212ed38709daf6762cc1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa717868fa66e6c86747ecfb1ac580a98666975a9c6974d3a1037451ff37576e,PodSandboxId:4a215208b0ed2928db08b226477bc8cf664180903da62b51aaf986d8c212336c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727397392387673966,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71a28d11a5db44bbf2777b262efa1514,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dcaba50a39a2f812258d986d3444002c5a887ee474104a98a69129c21ec40db,PodSandboxId:8e73f2182b892b451dcd1c013adf2711f2f406765703f34eb3d44a64d29e882b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727397392278746359,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-631834,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afee14d1206143c4d719c111467c379b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8df2cc6b-ffa5-40d1-a030-f3a3af650ef6 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	74dc20e31bc6d       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   ebc71356fe886       busybox-7dff88458-hczmj
	f0d4e929a59ca       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   2cb3143c36c8e       coredns-7c65d6cfc9-479dv
	3c06ebd9099a7       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   8f236d02ca028       coredns-7c65d6cfc9-kg8kf
	a9f2637b4124e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   399bb953593cc       storage-provisioner
	805b55d391308       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   7e2d35a1098a1       kindnet-l6ncl
	182f24ac501b7       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   c0f5b32248925       kube-proxy-7n244
	555c7e8f6d518       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   710e2b00db178       kube-vip-ha-631834
	536c1c26f6d72       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   de8c10edafaa7       etcd-ha-631834
	5c88792788fc2       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   74609d9fcf5f5       kube-scheduler-ha-631834
	aa717868fa66e       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   4a215208b0ed2       kube-controller-manager-ha-631834
	5dcaba50a39a2       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   8e73f2182b892       kube-apiserver-ha-631834
	
	
	==> coredns [3c06ebd9099a79e7ccf81acb3dcdfa061f142b4657de196fa50e568e5b299930] <==
	[INFO] 10.244.1.2:33318 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000158302s
	[INFO] 10.244.1.2:38992 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000210731s
	[INFO] 10.244.1.2:33288 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000154244s
	[INFO] 10.244.2.2:52842 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000181224s
	[INFO] 10.244.2.2:39802 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001542919s
	[INFO] 10.244.2.2:47825 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000115718s
	[INFO] 10.244.2.2:38071 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000153076s
	[INFO] 10.244.0.4:46433 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001871874s
	[INFO] 10.244.0.4:34697 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000054557s
	[INFO] 10.244.1.2:54898 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00014886s
	[INFO] 10.244.2.2:34064 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000136896s
	[INFO] 10.244.0.4:38416 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000149012s
	[INFO] 10.244.0.4:40833 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00014405s
	[INFO] 10.244.0.4:44560 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000077158s
	[INFO] 10.244.0.4:46143 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000171018s
	[INFO] 10.244.1.2:56595 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000249758s
	[INFO] 10.244.1.2:34731 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000198874s
	[INFO] 10.244.1.2:47614 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000132758s
	[INFO] 10.244.1.2:36248 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00015406s
	[INFO] 10.244.2.2:34744 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136863s
	[INFO] 10.244.2.2:34972 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000094616s
	[INFO] 10.244.2.2:52746 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000078955s
	[INFO] 10.244.0.4:39419 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113274s
	[INFO] 10.244.0.4:59554 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000106105s
	[INFO] 10.244.0.4:39476 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000054775s
	
	
	==> coredns [f0d4e929a59caa5d6cdfb939587ec81dce00105e7b9350778204b299cf597427] <==
	[INFO] 10.244.0.4:52853 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001421962s
	[INFO] 10.244.0.4:51515 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000078302s
	[INFO] 10.244.1.2:35739 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003265682s
	[INFO] 10.244.1.2:48683 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000243904s
	[INFO] 10.244.1.2:60448 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000155544s
	[INFO] 10.244.1.2:49238 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002742907s
	[INFO] 10.244.1.2:42211 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125195s
	[INFO] 10.244.2.2:33655 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000213093s
	[INFO] 10.244.2.2:58995 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00171984s
	[INFO] 10.244.2.2:39964 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000149879s
	[INFO] 10.244.2.2:60456 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000227691s
	[INFO] 10.244.0.4:44954 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000086981s
	[INFO] 10.244.0.4:47547 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000166142s
	[INFO] 10.244.0.4:51196 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000214916s
	[INFO] 10.244.0.4:52871 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001284904s
	[INFO] 10.244.0.4:55577 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000216348s
	[INFO] 10.244.0.4:39280 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00003939s
	[INFO] 10.244.1.2:55855 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133643s
	[INFO] 10.244.1.2:60581 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000156682s
	[INFO] 10.244.1.2:47815 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000931s
	[INFO] 10.244.2.2:51419 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000149958s
	[INFO] 10.244.2.2:54004 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000114296s
	[INFO] 10.244.2.2:50685 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000087762s
	[INFO] 10.244.2.2:42257 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000189679s
	[INFO] 10.244.0.4:51433 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00015471s
	
	
	==> describe nodes <==
	Name:               ha-631834
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-631834
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=ha-631834
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_27T00_36_39_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 00:36:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-631834
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 00:42:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 00:39:43 +0000   Fri, 27 Sep 2024 00:36:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 00:39:43 +0000   Fri, 27 Sep 2024 00:36:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 00:39:43 +0000   Fri, 27 Sep 2024 00:36:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 00:39:43 +0000   Fri, 27 Sep 2024 00:36:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.4
	  Hostname:    ha-631834
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c835097a3f3f47119274822a90643a61
	  System UUID:                c835097a-3f3f-4711-9274-822a90643a61
	  Boot ID:                    773a1f71-cccf-4b35-8274-d80167988c3a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-hczmj              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 coredns-7c65d6cfc9-479dv             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m13s
	  kube-system                 coredns-7c65d6cfc9-kg8kf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m13s
	  kube-system                 etcd-ha-631834                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m18s
	  kube-system                 kindnet-l6ncl                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m13s
	  kube-system                 kube-apiserver-ha-631834             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 kube-controller-manager-ha-631834    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 kube-proxy-7n244                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m13s
	  kube-system                 kube-scheduler-ha-631834             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 kube-vip-ha-631834                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m20s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m11s  kube-proxy       
	  Normal  Starting                 6m18s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m18s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m18s  kubelet          Node ha-631834 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m18s  kubelet          Node ha-631834 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m18s  kubelet          Node ha-631834 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m14s  node-controller  Node ha-631834 event: Registered Node ha-631834 in Controller
	  Normal  NodeReady                6m1s   kubelet          Node ha-631834 status is now: NodeReady
	  Normal  RegisteredNode           5m13s  node-controller  Node ha-631834 event: Registered Node ha-631834 in Controller
	  Normal  RegisteredNode           4m     node-controller  Node ha-631834 event: Registered Node ha-631834 in Controller
	
	
	Name:               ha-631834-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-631834-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=ha-631834
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_27T00_37_38_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 00:37:35 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-631834-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 00:40:28 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 27 Sep 2024 00:39:37 +0000   Fri, 27 Sep 2024 00:41:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 27 Sep 2024 00:39:37 +0000   Fri, 27 Sep 2024 00:41:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 27 Sep 2024 00:39:37 +0000   Fri, 27 Sep 2024 00:41:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 27 Sep 2024 00:39:37 +0000   Fri, 27 Sep 2024 00:41:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.184
	  Hostname:    ha-631834-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 949992430050476bb475912d3f8b70cc
	  System UUID:                94999243-0050-476b-b475-912d3f8b70cc
	  Boot ID:                    53eb24e2-e661-44e8-b798-be320838fb5c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-bkws6                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 etcd-ha-631834-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m19s
	  kube-system                 kindnet-x7kr9                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m21s
	  kube-system                 kube-apiserver-ha-631834-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 kube-controller-manager-ha-631834-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 kube-proxy-x2hvh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 kube-scheduler-ha-631834-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 kube-vip-ha-631834-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m17s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m21s (x8 over 5m21s)  kubelet          Node ha-631834-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m21s (x8 over 5m21s)  kubelet          Node ha-631834-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m21s (x7 over 5m21s)  kubelet          Node ha-631834-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m19s                  node-controller  Node ha-631834-m02 event: Registered Node ha-631834-m02 in Controller
	  Normal  RegisteredNode           5m13s                  node-controller  Node ha-631834-m02 event: Registered Node ha-631834-m02 in Controller
	  Normal  RegisteredNode           4m                     node-controller  Node ha-631834-m02 event: Registered Node ha-631834-m02 in Controller
	  Normal  NodeNotReady             105s                   node-controller  Node ha-631834-m02 status is now: NodeNotReady
	
	
	Name:               ha-631834-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-631834-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=ha-631834
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_27T00_38_51_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 00:38:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-631834-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 00:42:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 00:39:49 +0000   Fri, 27 Sep 2024 00:38:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 00:39:49 +0000   Fri, 27 Sep 2024 00:38:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 00:39:49 +0000   Fri, 27 Sep 2024 00:38:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 00:39:49 +0000   Fri, 27 Sep 2024 00:39:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.92
	  Hostname:    ha-631834-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a890346e739943359cb952ef92382de4
	  System UUID:                a890346e-7399-4335-9cb9-52ef92382de4
	  Boot ID:                    8ca25526-4cfd-4aaa-ab8a-4e67ba42c0bc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-dhthf                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 etcd-ha-631834-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m7s
	  kube-system                 kindnet-r2qxd                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m9s
	  kube-system                 kube-apiserver-ha-631834-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-controller-manager-ha-631834-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-proxy-22lcj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-scheduler-ha-631834-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-vip-ha-631834-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m4s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  4m9s (x8 over 4m9s)  kubelet          Node ha-631834-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m9s (x8 over 4m9s)  kubelet          Node ha-631834-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m9s (x7 over 4m9s)  kubelet          Node ha-631834-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m8s                 node-controller  Node ha-631834-m03 event: Registered Node ha-631834-m03 in Controller
	  Normal  RegisteredNode           4m4s                 node-controller  Node ha-631834-m03 event: Registered Node ha-631834-m03 in Controller
	  Normal  RegisteredNode           4m                   node-controller  Node ha-631834-m03 event: Registered Node ha-631834-m03 in Controller
	
	
	Name:               ha-631834-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-631834-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=ha-631834
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_27T00_39_55_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 00:39:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-631834-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 00:42:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 00:40:25 +0000   Fri, 27 Sep 2024 00:39:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 00:40:25 +0000   Fri, 27 Sep 2024 00:39:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 00:40:25 +0000   Fri, 27 Sep 2024 00:39:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 00:40:25 +0000   Fri, 27 Sep 2024 00:40:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.79
	  Hostname:    ha-631834-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7d5a4987d2674227bf93c72f5a77697a
	  System UUID:                7d5a4987-d267-4227-bf93-c72f5a77697a
	  Boot ID:                    8a8b1cc4-fbfe-41cb-b018-a0e1cc80311a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-667b4       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m1s
	  kube-system                 kube-proxy-klfbb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m56s                kube-proxy       
	  Normal  NodeAllocatableEnforced  3m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m1s (x2 over 3m2s)  kubelet          Node ha-631834-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m1s (x2 over 3m2s)  kubelet          Node ha-631834-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m1s (x2 over 3m2s)  kubelet          Node ha-631834-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m                   node-controller  Node ha-631834-m04 event: Registered Node ha-631834-m04 in Controller
	  Normal  RegisteredNode           2m59s                node-controller  Node ha-631834-m04 event: Registered Node ha-631834-m04 in Controller
	  Normal  RegisteredNode           2m58s                node-controller  Node ha-631834-m04 event: Registered Node ha-631834-m04 in Controller
	  Normal  NodeReady                2m41s                kubelet          Node ha-631834-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep27 00:36] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050412] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039986] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.794291] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.536823] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.593813] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.987708] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.063056] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056033] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.197880] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.118226] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.294623] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +3.981056] systemd-fstab-generator[745]: Ignoring "noauto" option for root device
	[  +4.053805] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +0.059938] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.871905] systemd-fstab-generator[1301]: Ignoring "noauto" option for root device
	[  +0.091402] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.727187] kauditd_printk_skb: 18 callbacks suppressed
	[ +12.324064] kauditd_printk_skb: 41 callbacks suppressed
	[Sep27 00:37] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [536c1c26f6d72525b81ce4c35ed530528a8cd001f4c530cea2e1d722325e76b3] <==
	{"level":"warn","ts":"2024-09-27T00:42:56.135842Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"bff0a92d56623d2","rtt":"952.106µs","error":"dial tcp 192.168.39.184:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-09-27T00:42:56.135946Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"bff0a92d56623d2","rtt":"8.575329ms","error":"dial tcp 192.168.39.184:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-09-27T00:42:56.409975Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:42:56.414331Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:42:56.428175Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:42:56.428455Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:42:56.436654Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:42:56.441078Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:42:56.444616Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:42:56.447882Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:42:56.451437Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:42:56.457645Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:42:56.464313Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:42:56.470857Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:42:56.474272Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:42:56.477720Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:42:56.483435Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:42:56.493085Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:42:56.499322Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:42:56.507895Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:42:56.511350Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:42:56.514841Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:42:56.520169Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:42:56.528355Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:42:56.531631Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 00:42:56 up 6 min,  0 users,  load average: 0.10, 0.24, 0.14
	Linux ha-631834 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [805b55d391308302ebc0884d741fd7ca86ffe2f6feed8bf7ab229f3729f34327] <==
	I0927 00:42:25.603090       1 main.go:322] Node ha-631834-m04 has CIDR [10.244.3.0/24] 
	I0927 00:42:35.601340       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0927 00:42:35.601467       1 main.go:299] handling current node
	I0927 00:42:35.601518       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0927 00:42:35.601536       1 main.go:322] Node ha-631834-m02 has CIDR [10.244.1.0/24] 
	I0927 00:42:35.601669       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0927 00:42:35.601702       1 main.go:322] Node ha-631834-m03 has CIDR [10.244.2.0/24] 
	I0927 00:42:35.601776       1 main.go:295] Handling node with IPs: map[192.168.39.79:{}]
	I0927 00:42:35.601795       1 main.go:322] Node ha-631834-m04 has CIDR [10.244.3.0/24] 
	I0927 00:42:45.594144       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0927 00:42:45.594344       1 main.go:299] handling current node
	I0927 00:42:45.594373       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0927 00:42:45.594393       1 main.go:322] Node ha-631834-m02 has CIDR [10.244.1.0/24] 
	I0927 00:42:45.594565       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0927 00:42:45.594590       1 main.go:322] Node ha-631834-m03 has CIDR [10.244.2.0/24] 
	I0927 00:42:45.594654       1 main.go:295] Handling node with IPs: map[192.168.39.79:{}]
	I0927 00:42:45.594673       1 main.go:322] Node ha-631834-m04 has CIDR [10.244.3.0/24] 
	I0927 00:42:55.603184       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0927 00:42:55.603559       1 main.go:322] Node ha-631834-m02 has CIDR [10.244.1.0/24] 
	I0927 00:42:55.603878       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0927 00:42:55.604117       1 main.go:322] Node ha-631834-m03 has CIDR [10.244.2.0/24] 
	I0927 00:42:55.604402       1 main.go:295] Handling node with IPs: map[192.168.39.79:{}]
	I0927 00:42:55.605203       1 main.go:322] Node ha-631834-m04 has CIDR [10.244.3.0/24] 
	I0927 00:42:55.605426       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0927 00:42:55.605486       1 main.go:299] handling current node
	
	
	==> kube-apiserver [5dcaba50a39a2f812258d986d3444002c5a887ee474104a98a69129c21ec40db] <==
	W0927 00:36:37.440538       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.4]
	I0927 00:36:37.441493       1 controller.go:615] quota admission added evaluator for: endpoints
	I0927 00:36:37.445496       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0927 00:36:37.662456       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0927 00:36:38.560626       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0927 00:36:38.578403       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0927 00:36:38.587470       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0927 00:36:43.266579       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0927 00:36:43.419243       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0927 00:39:23.576104       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42282: use of closed network connection
	E0927 00:39:23.771378       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42288: use of closed network connection
	E0927 00:39:23.958682       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42312: use of closed network connection
	E0927 00:39:24.143404       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42328: use of closed network connection
	E0927 00:39:24.321615       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42334: use of closed network connection
	E0927 00:39:24.507069       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42338: use of closed network connection
	E0927 00:39:24.675789       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42344: use of closed network connection
	E0927 00:39:24.862695       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42368: use of closed network connection
	E0927 00:39:25.041111       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42388: use of closed network connection
	E0927 00:39:25.329470       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42408: use of closed network connection
	E0927 00:39:25.500386       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42428: use of closed network connection
	E0927 00:39:25.675043       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42456: use of closed network connection
	E0927 00:39:25.857940       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42472: use of closed network connection
	E0927 00:39:26.048116       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42494: use of closed network connection
	E0927 00:39:26.224537       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42512: use of closed network connection
	W0927 00:40:47.323187       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.4 192.168.39.92]
	
	
	==> kube-controller-manager [aa717868fa66e6c86747ecfb1ac580a98666975a9c6974d3a1037451ff37576e] <==
	I0927 00:39:55.139474       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-631834-m04" podCIDRs=["10.244.3.0/24"]
	I0927 00:39:55.139580       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m04"
	I0927 00:39:55.139638       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m04"
	I0927 00:39:55.151590       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m04"
	I0927 00:39:55.487083       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m04"
	I0927 00:39:55.877769       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m04"
	I0927 00:39:56.804153       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m04"
	I0927 00:39:57.666169       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-631834-m04"
	I0927 00:39:57.666534       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m04"
	I0927 00:39:57.746088       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m04"
	I0927 00:39:58.632655       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m04"
	I0927 00:39:58.726762       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m04"
	I0927 00:40:05.284426       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m04"
	I0927 00:40:15.865636       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-631834-m04"
	I0927 00:40:15.865833       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m04"
	I0927 00:40:15.879964       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m04"
	I0927 00:40:16.781479       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m04"
	I0927 00:40:25.730749       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m04"
	I0927 00:41:11.808076       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-631834-m04"
	I0927 00:41:11.809299       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m02"
	I0927 00:41:11.832517       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m02"
	I0927 00:41:11.890510       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="14.873766ms"
	I0927 00:41:11.890734       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="65.505µs"
	I0927 00:41:12.743419       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m02"
	I0927 00:41:17.028342       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m02"
	
	
	==> kube-proxy [182f24ac501b715adc06f080914c11407429e052bc7a726892761dd0a2d3a8e9] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0927 00:36:44.513192       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0927 00:36:44.529245       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.4"]
	E0927 00:36:44.529395       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0927 00:36:44.637324       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0927 00:36:44.637425       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0927 00:36:44.637464       1 server_linux.go:169] "Using iptables Proxier"
	I0927 00:36:44.640935       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0927 00:36:44.641713       1 server.go:483] "Version info" version="v1.31.1"
	I0927 00:36:44.641798       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 00:36:44.643999       1 config.go:199] "Starting service config controller"
	I0927 00:36:44.644892       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0927 00:36:44.645302       1 config.go:105] "Starting endpoint slice config controller"
	I0927 00:36:44.645338       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0927 00:36:44.648337       1 config.go:328] "Starting node config controller"
	I0927 00:36:44.650849       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0927 00:36:44.748412       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0927 00:36:44.748475       1 shared_informer.go:320] Caches are synced for service config
	I0927 00:36:44.752495       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5c88792788fc238aaae860e14a6c44c40020da3356d29223917fe2fb2e8901ac] <==
	W0927 00:36:35.715895       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0927 00:36:35.716591       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:36:35.715936       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0927 00:36:35.718435       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:36:35.715973       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0927 00:36:35.718562       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:36:35.719580       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0927 00:36:35.719853       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 00:36:36.589565       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0927 00:36:36.589679       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 00:36:36.648438       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0927 00:36:36.648499       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0927 00:36:36.655529       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0927 00:36:36.655821       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:36:36.677521       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0927 00:36:36.677870       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0927 00:36:36.687963       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0927 00:36:36.688163       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 00:36:36.985650       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0927 00:36:36.985711       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0927 00:36:38.790470       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0927 00:39:55.242771       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-7gjcd\": pod kindnet-7gjcd is already assigned to node \"ha-631834-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-7gjcd" node="ha-631834-m04"
	E0927 00:39:55.242960       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 583b6ea7-5b96-43a8-9f06-70c031554c0e(kube-system/kindnet-7gjcd) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-7gjcd"
	E0927 00:39:55.243000       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-7gjcd\": pod kindnet-7gjcd is already assigned to node \"ha-631834-m04\"" pod="kube-system/kindnet-7gjcd"
	I0927 00:39:55.243040       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-7gjcd" node="ha-631834-m04"
	
	
	==> kubelet <==
	Sep 27 00:41:38 ha-631834 kubelet[1309]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 27 00:41:38 ha-631834 kubelet[1309]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 27 00:41:38 ha-631834 kubelet[1309]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 27 00:41:38 ha-631834 kubelet[1309]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 27 00:41:38 ha-631834 kubelet[1309]: E0927 00:41:38.620020    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397698619762113,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:41:38 ha-631834 kubelet[1309]: E0927 00:41:38.620049    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397698619762113,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:41:48 ha-631834 kubelet[1309]: E0927 00:41:48.622830    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397708621937313,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:41:48 ha-631834 kubelet[1309]: E0927 00:41:48.622875    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397708621937313,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:41:58 ha-631834 kubelet[1309]: E0927 00:41:58.624102    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397718623839780,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:41:58 ha-631834 kubelet[1309]: E0927 00:41:58.624145    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397718623839780,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:42:08 ha-631834 kubelet[1309]: E0927 00:42:08.626464    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397728626075698,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:42:08 ha-631834 kubelet[1309]: E0927 00:42:08.626520    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397728626075698,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:42:18 ha-631834 kubelet[1309]: E0927 00:42:18.630268    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397738629150202,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:42:18 ha-631834 kubelet[1309]: E0927 00:42:18.630612    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397738629150202,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:42:28 ha-631834 kubelet[1309]: E0927 00:42:28.632510    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397748632150911,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:42:28 ha-631834 kubelet[1309]: E0927 00:42:28.632817    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397748632150911,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:42:38 ha-631834 kubelet[1309]: E0927 00:42:38.503597    1309 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 27 00:42:38 ha-631834 kubelet[1309]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 27 00:42:38 ha-631834 kubelet[1309]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 27 00:42:38 ha-631834 kubelet[1309]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 27 00:42:38 ha-631834 kubelet[1309]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 27 00:42:38 ha-631834 kubelet[1309]: E0927 00:42:38.634672    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397758634392335,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:42:38 ha-631834 kubelet[1309]: E0927 00:42:38.634711    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397758634392335,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:42:48 ha-631834 kubelet[1309]: E0927 00:42:48.636173    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397768635813162,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:42:48 ha-631834 kubelet[1309]: E0927 00:42:48.636541    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397768635813162,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-631834 -n ha-631834
helpers_test.go:261: (dbg) Run:  kubectl --context ha-631834 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.40s)
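
The etcd log in the post-mortem above shows the surviving member dropping heartbeats to the stopped peer at 192.168.39.184:2380 (ha-631834-m02), which is expected while that node is stopped. As a minimal sketch only, the remaining quorum could be checked by hand along these lines (the etcd-<node> static pod name and the certificate paths are assumptions based on the usual kubeadm layout, which minikube's bootstrap normally follows):

# list the etcd members as seen from the primary control-plane node
kubectl --context ha-631834 -n kube-system exec etcd-ha-631834 -- \
  etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt \
          --cert=/etc/kubernetes/pki/etcd/server.crt \
          --key=/etc/kubernetes/pki/etcd/server.key \
          member list -w table

# with one of three members stopped, the local endpoint should still report healthy;
# losing a second member would break quorum
kubectl --context ha-631834 -n kube-system exec etcd-ha-631834 -- \
  etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt \
          --cert=/etc/kubernetes/pki/etcd/server.crt \
          --key=/etc/kubernetes/pki/etcd/server.key \
          endpoint health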

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.400042337s)
ha_test.go:413: expected profile "ha-631834" in json of 'profile list' to have "Degraded" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-631834\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-631834\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":
1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-631834\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.4\",\"Port\":8443,\"Kubern
etesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.184\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.92\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.79\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"me
tallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":2
62144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-631834 -n ha-631834
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 logs -n 25
E0927 00:43:01.244972   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-631834 logs -n 25: (1.321522885s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-631834 cp ha-631834-m03:/home/docker/cp-test.txt                             | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile381097914/001/cp-test_ha-631834-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n                                                                | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-631834 cp ha-631834-m03:/home/docker/cp-test.txt                             | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834:/home/docker/cp-test_ha-631834-m03_ha-631834.txt                      |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n                                                                | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n ha-631834 sudo cat                                             | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | /home/docker/cp-test_ha-631834-m03_ha-631834.txt                                |           |         |         |                     |                     |
	| cp      | ha-631834 cp ha-631834-m03:/home/docker/cp-test.txt                             | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m02:/home/docker/cp-test_ha-631834-m03_ha-631834-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n                                                                | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n ha-631834-m02 sudo cat                                         | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | /home/docker/cp-test_ha-631834-m03_ha-631834-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-631834 cp ha-631834-m03:/home/docker/cp-test.txt                             | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m04:/home/docker/cp-test_ha-631834-m03_ha-631834-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n                                                                | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n ha-631834-m04 sudo cat                                         | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | /home/docker/cp-test_ha-631834-m03_ha-631834-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-631834 cp testdata/cp-test.txt                                               | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n                                                                | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-631834 cp ha-631834-m04:/home/docker/cp-test.txt                             | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile381097914/001/cp-test_ha-631834-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n                                                                | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-631834 cp ha-631834-m04:/home/docker/cp-test.txt                             | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834:/home/docker/cp-test_ha-631834-m04_ha-631834.txt                      |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n                                                                | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n ha-631834 sudo cat                                             | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | /home/docker/cp-test_ha-631834-m04_ha-631834.txt                                |           |         |         |                     |                     |
	| cp      | ha-631834 cp ha-631834-m04:/home/docker/cp-test.txt                             | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m02:/home/docker/cp-test_ha-631834-m04_ha-631834-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n                                                                | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n ha-631834-m02 sudo cat                                         | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | /home/docker/cp-test_ha-631834-m04_ha-631834-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-631834 cp ha-631834-m04:/home/docker/cp-test.txt                             | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m03:/home/docker/cp-test_ha-631834-m04_ha-631834-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n                                                                | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n ha-631834-m03 sudo cat                                         | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | /home/docker/cp-test_ha-631834-m04_ha-631834-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-631834 node stop m02 -v=7                                                    | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 00:36:00
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 00:36:00.733270   34022 out.go:345] Setting OutFile to fd 1 ...
	I0927 00:36:00.733561   34022 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:36:00.733572   34022 out.go:358] Setting ErrFile to fd 2...
	I0927 00:36:00.733578   34022 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:36:00.733765   34022 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-14935/.minikube/bin
	I0927 00:36:00.734369   34022 out.go:352] Setting JSON to false
	I0927 00:36:00.735232   34022 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4706,"bootTime":1727392655,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 00:36:00.735334   34022 start.go:139] virtualization: kvm guest
	I0927 00:36:00.737562   34022 out.go:177] * [ha-631834] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0927 00:36:00.738940   34022 notify.go:220] Checking for updates...
	I0927 00:36:00.738971   34022 out.go:177]   - MINIKUBE_LOCATION=19711
	I0927 00:36:00.740322   34022 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 00:36:00.741556   34022 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 00:36:00.742777   34022 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-14935/.minikube
	I0927 00:36:00.744101   34022 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0927 00:36:00.745418   34022 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 00:36:00.746900   34022 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 00:36:00.781665   34022 out.go:177] * Using the kvm2 driver based on user configuration
	I0927 00:36:00.782952   34022 start.go:297] selected driver: kvm2
	I0927 00:36:00.782969   34022 start.go:901] validating driver "kvm2" against <nil>
	I0927 00:36:00.782989   34022 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 00:36:00.784037   34022 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 00:36:00.784159   34022 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19711-14935/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0927 00:36:00.799229   34022 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0927 00:36:00.799294   34022 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 00:36:00.799639   34022 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 00:36:00.799677   34022 cni.go:84] Creating CNI manager for ""
	I0927 00:36:00.799725   34022 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0927 00:36:00.799740   34022 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0927 00:36:00.799811   34022 start.go:340] cluster config:
	{Name:ha-631834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-631834 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 00:36:00.799933   34022 iso.go:125] acquiring lock: {Name:mkc202a14fbe20838e31e7efc444c4f65351f9ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 00:36:00.801666   34022 out.go:177] * Starting "ha-631834" primary control-plane node in "ha-631834" cluster
	I0927 00:36:00.802817   34022 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 00:36:00.802860   34022 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0927 00:36:00.802872   34022 cache.go:56] Caching tarball of preloaded images
	I0927 00:36:00.802951   34022 preload.go:172] Found /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0927 00:36:00.802964   34022 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0927 00:36:00.803416   34022 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/config.json ...
	I0927 00:36:00.803442   34022 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/config.json: {Name:mk6367ac20858a15eb53ac7fa5c4186f9176d965 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:36:00.803588   34022 start.go:360] acquireMachinesLock for ha-631834: {Name:mkef866a3f2eb57063758ef06140cca3a2e40b18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 00:36:00.803621   34022 start.go:364] duration metric: took 19.585µs to acquireMachinesLock for "ha-631834"
	I0927 00:36:00.803641   34022 start.go:93] Provisioning new machine with config: &{Name:ha-631834 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-631834 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 00:36:00.803696   34022 start.go:125] createHost starting for "" (driver="kvm2")
	I0927 00:36:00.805235   34022 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0927 00:36:00.805379   34022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:36:00.805413   34022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:36:00.819286   34022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35625
	I0927 00:36:00.819786   34022 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:36:00.820338   34022 main.go:141] libmachine: Using API Version  1
	I0927 00:36:00.820363   34022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:36:00.820724   34022 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:36:00.820928   34022 main.go:141] libmachine: (ha-631834) Calling .GetMachineName
	I0927 00:36:00.821048   34022 main.go:141] libmachine: (ha-631834) Calling .DriverName
	I0927 00:36:00.821188   34022 start.go:159] libmachine.API.Create for "ha-631834" (driver="kvm2")
	I0927 00:36:00.821209   34022 client.go:168] LocalClient.Create starting
	I0927 00:36:00.821241   34022 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem
	I0927 00:36:00.821269   34022 main.go:141] libmachine: Decoding PEM data...
	I0927 00:36:00.821289   34022 main.go:141] libmachine: Parsing certificate...
	I0927 00:36:00.821354   34022 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem
	I0927 00:36:00.821378   34022 main.go:141] libmachine: Decoding PEM data...
	I0927 00:36:00.821391   34022 main.go:141] libmachine: Parsing certificate...
	I0927 00:36:00.821430   34022 main.go:141] libmachine: Running pre-create checks...
	I0927 00:36:00.821441   34022 main.go:141] libmachine: (ha-631834) Calling .PreCreateCheck
	I0927 00:36:00.821748   34022 main.go:141] libmachine: (ha-631834) Calling .GetConfigRaw
	I0927 00:36:00.822055   34022 main.go:141] libmachine: Creating machine...
	I0927 00:36:00.822066   34022 main.go:141] libmachine: (ha-631834) Calling .Create
	I0927 00:36:00.822200   34022 main.go:141] libmachine: (ha-631834) Creating KVM machine...
	I0927 00:36:00.823422   34022 main.go:141] libmachine: (ha-631834) DBG | found existing default KVM network
	I0927 00:36:00.824110   34022 main.go:141] libmachine: (ha-631834) DBG | I0927 00:36:00.823958   34045 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000122e20}
	I0927 00:36:00.824171   34022 main.go:141] libmachine: (ha-631834) DBG | created network xml: 
	I0927 00:36:00.824189   34022 main.go:141] libmachine: (ha-631834) DBG | <network>
	I0927 00:36:00.824198   34022 main.go:141] libmachine: (ha-631834) DBG |   <name>mk-ha-631834</name>
	I0927 00:36:00.824206   34022 main.go:141] libmachine: (ha-631834) DBG |   <dns enable='no'/>
	I0927 00:36:00.824216   34022 main.go:141] libmachine: (ha-631834) DBG |   
	I0927 00:36:00.824223   34022 main.go:141] libmachine: (ha-631834) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0927 00:36:00.824229   34022 main.go:141] libmachine: (ha-631834) DBG |     <dhcp>
	I0927 00:36:00.824234   34022 main.go:141] libmachine: (ha-631834) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0927 00:36:00.824245   34022 main.go:141] libmachine: (ha-631834) DBG |     </dhcp>
	I0927 00:36:00.824249   34022 main.go:141] libmachine: (ha-631834) DBG |   </ip>
	I0927 00:36:00.824253   34022 main.go:141] libmachine: (ha-631834) DBG |   
	I0927 00:36:00.824262   34022 main.go:141] libmachine: (ha-631834) DBG | </network>
	I0927 00:36:00.824270   34022 main.go:141] libmachine: (ha-631834) DBG | 
	I0927 00:36:00.829058   34022 main.go:141] libmachine: (ha-631834) DBG | trying to create private KVM network mk-ha-631834 192.168.39.0/24...
	I0927 00:36:00.893473   34022 main.go:141] libmachine: (ha-631834) Setting up store path in /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834 ...
	I0927 00:36:00.893502   34022 main.go:141] libmachine: (ha-631834) DBG | private KVM network mk-ha-631834 192.168.39.0/24 created
	I0927 00:36:00.893514   34022 main.go:141] libmachine: (ha-631834) Building disk image from file:///home/jenkins/minikube-integration/19711-14935/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0927 00:36:00.893569   34022 main.go:141] libmachine: (ha-631834) DBG | I0927 00:36:00.893424   34045 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19711-14935/.minikube
	I0927 00:36:00.893608   34022 main.go:141] libmachine: (ha-631834) Downloading /home/jenkins/minikube-integration/19711-14935/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19711-14935/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0927 00:36:01.131795   34022 main.go:141] libmachine: (ha-631834) DBG | I0927 00:36:01.131690   34045 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834/id_rsa...
	I0927 00:36:01.270727   34022 main.go:141] libmachine: (ha-631834) DBG | I0927 00:36:01.270595   34045 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834/ha-631834.rawdisk...
	I0927 00:36:01.270761   34022 main.go:141] libmachine: (ha-631834) DBG | Writing magic tar header
	I0927 00:36:01.270787   34022 main.go:141] libmachine: (ha-631834) DBG | Writing SSH key tar header
	I0927 00:36:01.270801   34022 main.go:141] libmachine: (ha-631834) DBG | I0927 00:36:01.270770   34045 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834 ...
	I0927 00:36:01.270904   34022 main.go:141] libmachine: (ha-631834) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834
	I0927 00:36:01.270938   34022 main.go:141] libmachine: (ha-631834) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834 (perms=drwx------)
	I0927 00:36:01.270949   34022 main.go:141] libmachine: (ha-631834) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935/.minikube/machines
	I0927 00:36:01.270966   34022 main.go:141] libmachine: (ha-631834) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935/.minikube
	I0927 00:36:01.270976   34022 main.go:141] libmachine: (ha-631834) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935
	I0927 00:36:01.270986   34022 main.go:141] libmachine: (ha-631834) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0927 00:36:01.270995   34022 main.go:141] libmachine: (ha-631834) DBG | Checking permissions on dir: /home/jenkins
	I0927 00:36:01.271007   34022 main.go:141] libmachine: (ha-631834) DBG | Checking permissions on dir: /home
	I0927 00:36:01.271032   34022 main.go:141] libmachine: (ha-631834) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935/.minikube/machines (perms=drwxr-xr-x)
	I0927 00:36:01.271042   34022 main.go:141] libmachine: (ha-631834) DBG | Skipping /home - not owner
	I0927 00:36:01.271059   34022 main.go:141] libmachine: (ha-631834) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935/.minikube (perms=drwxr-xr-x)
	I0927 00:36:01.271072   34022 main.go:141] libmachine: (ha-631834) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935 (perms=drwxrwxr-x)
	I0927 00:36:01.271090   34022 main.go:141] libmachine: (ha-631834) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0927 00:36:01.271101   34022 main.go:141] libmachine: (ha-631834) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0927 00:36:01.271119   34022 main.go:141] libmachine: (ha-631834) Creating domain...
	I0927 00:36:01.272173   34022 main.go:141] libmachine: (ha-631834) define libvirt domain using xml: 
	I0927 00:36:01.272191   34022 main.go:141] libmachine: (ha-631834) <domain type='kvm'>
	I0927 00:36:01.272198   34022 main.go:141] libmachine: (ha-631834)   <name>ha-631834</name>
	I0927 00:36:01.272206   34022 main.go:141] libmachine: (ha-631834)   <memory unit='MiB'>2200</memory>
	I0927 00:36:01.272211   34022 main.go:141] libmachine: (ha-631834)   <vcpu>2</vcpu>
	I0927 00:36:01.272217   34022 main.go:141] libmachine: (ha-631834)   <features>
	I0927 00:36:01.272224   34022 main.go:141] libmachine: (ha-631834)     <acpi/>
	I0927 00:36:01.272235   34022 main.go:141] libmachine: (ha-631834)     <apic/>
	I0927 00:36:01.272246   34022 main.go:141] libmachine: (ha-631834)     <pae/>
	I0927 00:36:01.272256   34022 main.go:141] libmachine: (ha-631834)     
	I0927 00:36:01.272263   34022 main.go:141] libmachine: (ha-631834)   </features>
	I0927 00:36:01.272282   34022 main.go:141] libmachine: (ha-631834)   <cpu mode='host-passthrough'>
	I0927 00:36:01.272289   34022 main.go:141] libmachine: (ha-631834)   
	I0927 00:36:01.272293   34022 main.go:141] libmachine: (ha-631834)   </cpu>
	I0927 00:36:01.272297   34022 main.go:141] libmachine: (ha-631834)   <os>
	I0927 00:36:01.272301   34022 main.go:141] libmachine: (ha-631834)     <type>hvm</type>
	I0927 00:36:01.272307   34022 main.go:141] libmachine: (ha-631834)     <boot dev='cdrom'/>
	I0927 00:36:01.272319   34022 main.go:141] libmachine: (ha-631834)     <boot dev='hd'/>
	I0927 00:36:01.272332   34022 main.go:141] libmachine: (ha-631834)     <bootmenu enable='no'/>
	I0927 00:36:01.272343   34022 main.go:141] libmachine: (ha-631834)   </os>
	I0927 00:36:01.272353   34022 main.go:141] libmachine: (ha-631834)   <devices>
	I0927 00:36:01.272363   34022 main.go:141] libmachine: (ha-631834)     <disk type='file' device='cdrom'>
	I0927 00:36:01.272378   34022 main.go:141] libmachine: (ha-631834)       <source file='/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834/boot2docker.iso'/>
	I0927 00:36:01.272388   34022 main.go:141] libmachine: (ha-631834)       <target dev='hdc' bus='scsi'/>
	I0927 00:36:01.272453   34022 main.go:141] libmachine: (ha-631834)       <readonly/>
	I0927 00:36:01.272477   34022 main.go:141] libmachine: (ha-631834)     </disk>
	I0927 00:36:01.272488   34022 main.go:141] libmachine: (ha-631834)     <disk type='file' device='disk'>
	I0927 00:36:01.272497   34022 main.go:141] libmachine: (ha-631834)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0927 00:36:01.272515   34022 main.go:141] libmachine: (ha-631834)       <source file='/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834/ha-631834.rawdisk'/>
	I0927 00:36:01.272530   34022 main.go:141] libmachine: (ha-631834)       <target dev='hda' bus='virtio'/>
	I0927 00:36:01.272545   34022 main.go:141] libmachine: (ha-631834)     </disk>
	I0927 00:36:01.272560   34022 main.go:141] libmachine: (ha-631834)     <interface type='network'>
	I0927 00:36:01.272569   34022 main.go:141] libmachine: (ha-631834)       <source network='mk-ha-631834'/>
	I0927 00:36:01.272578   34022 main.go:141] libmachine: (ha-631834)       <model type='virtio'/>
	I0927 00:36:01.272589   34022 main.go:141] libmachine: (ha-631834)     </interface>
	I0927 00:36:01.272599   34022 main.go:141] libmachine: (ha-631834)     <interface type='network'>
	I0927 00:36:01.272607   34022 main.go:141] libmachine: (ha-631834)       <source network='default'/>
	I0927 00:36:01.272617   34022 main.go:141] libmachine: (ha-631834)       <model type='virtio'/>
	I0927 00:36:01.272638   34022 main.go:141] libmachine: (ha-631834)     </interface>
	I0927 00:36:01.272657   34022 main.go:141] libmachine: (ha-631834)     <serial type='pty'>
	I0927 00:36:01.272670   34022 main.go:141] libmachine: (ha-631834)       <target port='0'/>
	I0927 00:36:01.272680   34022 main.go:141] libmachine: (ha-631834)     </serial>
	I0927 00:36:01.272689   34022 main.go:141] libmachine: (ha-631834)     <console type='pty'>
	I0927 00:36:01.272711   34022 main.go:141] libmachine: (ha-631834)       <target type='serial' port='0'/>
	I0927 00:36:01.272724   34022 main.go:141] libmachine: (ha-631834)     </console>
	I0927 00:36:01.272736   34022 main.go:141] libmachine: (ha-631834)     <rng model='virtio'>
	I0927 00:36:01.272748   34022 main.go:141] libmachine: (ha-631834)       <backend model='random'>/dev/random</backend>
	I0927 00:36:01.272758   34022 main.go:141] libmachine: (ha-631834)     </rng>
	I0927 00:36:01.272767   34022 main.go:141] libmachine: (ha-631834)     
	I0927 00:36:01.272773   34022 main.go:141] libmachine: (ha-631834)     
	I0927 00:36:01.272784   34022 main.go:141] libmachine: (ha-631834)   </devices>
	I0927 00:36:01.272793   34022 main.go:141] libmachine: (ha-631834) </domain>
	I0927 00:36:01.272813   34022 main.go:141] libmachine: (ha-631834) 
	I0927 00:36:01.276563   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:8c:cf:67 in network default
	I0927 00:36:01.277046   34022 main.go:141] libmachine: (ha-631834) Ensuring networks are active...
	I0927 00:36:01.277065   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:01.277664   34022 main.go:141] libmachine: (ha-631834) Ensuring network default is active
	I0927 00:36:01.277924   34022 main.go:141] libmachine: (ha-631834) Ensuring network mk-ha-631834 is active
	I0927 00:36:01.278421   34022 main.go:141] libmachine: (ha-631834) Getting domain xml...
	I0927 00:36:01.279045   34022 main.go:141] libmachine: (ha-631834) Creating domain...
	I0927 00:36:02.458607   34022 main.go:141] libmachine: (ha-631834) Waiting to get IP...
	I0927 00:36:02.459345   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:02.459714   34022 main.go:141] libmachine: (ha-631834) DBG | unable to find current IP address of domain ha-631834 in network mk-ha-631834
	I0927 00:36:02.459736   34022 main.go:141] libmachine: (ha-631834) DBG | I0927 00:36:02.459698   34045 retry.go:31] will retry after 212.922851ms: waiting for machine to come up
	I0927 00:36:02.674121   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:02.674559   34022 main.go:141] libmachine: (ha-631834) DBG | unable to find current IP address of domain ha-631834 in network mk-ha-631834
	I0927 00:36:02.674578   34022 main.go:141] libmachine: (ha-631834) DBG | I0927 00:36:02.674520   34045 retry.go:31] will retry after 258.802525ms: waiting for machine to come up
	I0927 00:36:02.934927   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:02.935352   34022 main.go:141] libmachine: (ha-631834) DBG | unable to find current IP address of domain ha-631834 in network mk-ha-631834
	I0927 00:36:02.935388   34022 main.go:141] libmachine: (ha-631834) DBG | I0927 00:36:02.935333   34045 retry.go:31] will retry after 385.263435ms: waiting for machine to come up
	I0927 00:36:03.321940   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:03.322382   34022 main.go:141] libmachine: (ha-631834) DBG | unable to find current IP address of domain ha-631834 in network mk-ha-631834
	I0927 00:36:03.322457   34022 main.go:141] libmachine: (ha-631834) DBG | I0927 00:36:03.322352   34045 retry.go:31] will retry after 458.033114ms: waiting for machine to come up
	I0927 00:36:03.782012   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:03.782379   34022 main.go:141] libmachine: (ha-631834) DBG | unable to find current IP address of domain ha-631834 in network mk-ha-631834
	I0927 00:36:03.782406   34022 main.go:141] libmachine: (ha-631834) DBG | I0927 00:36:03.782329   34045 retry.go:31] will retry after 619.891619ms: waiting for machine to come up
	I0927 00:36:04.404184   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:04.404742   34022 main.go:141] libmachine: (ha-631834) DBG | unable to find current IP address of domain ha-631834 in network mk-ha-631834
	I0927 00:36:04.404769   34022 main.go:141] libmachine: (ha-631834) DBG | I0927 00:36:04.404698   34045 retry.go:31] will retry after 668.661978ms: waiting for machine to come up
	I0927 00:36:05.074541   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:05.074956   34022 main.go:141] libmachine: (ha-631834) DBG | unable to find current IP address of domain ha-631834 in network mk-ha-631834
	I0927 00:36:05.074981   34022 main.go:141] libmachine: (ha-631834) DBG | I0927 00:36:05.074931   34045 retry.go:31] will retry after 1.139973505s: waiting for machine to come up
	I0927 00:36:06.216868   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:06.217267   34022 main.go:141] libmachine: (ha-631834) DBG | unable to find current IP address of domain ha-631834 in network mk-ha-631834
	I0927 00:36:06.217283   34022 main.go:141] libmachine: (ha-631834) DBG | I0927 00:36:06.217233   34045 retry.go:31] will retry after 1.161217409s: waiting for machine to come up
	I0927 00:36:07.380453   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:07.380855   34022 main.go:141] libmachine: (ha-631834) DBG | unable to find current IP address of domain ha-631834 in network mk-ha-631834
	I0927 00:36:07.380881   34022 main.go:141] libmachine: (ha-631834) DBG | I0927 00:36:07.380831   34045 retry.go:31] will retry after 1.625874527s: waiting for machine to come up
	I0927 00:36:09.008452   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:09.008818   34022 main.go:141] libmachine: (ha-631834) DBG | unable to find current IP address of domain ha-631834 in network mk-ha-631834
	I0927 00:36:09.008846   34022 main.go:141] libmachine: (ha-631834) DBG | I0927 00:36:09.008771   34045 retry.go:31] will retry after 1.776898319s: waiting for machine to come up
	I0927 00:36:10.787443   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:10.787818   34022 main.go:141] libmachine: (ha-631834) DBG | unable to find current IP address of domain ha-631834 in network mk-ha-631834
	I0927 00:36:10.787869   34022 main.go:141] libmachine: (ha-631834) DBG | I0927 00:36:10.787802   34045 retry.go:31] will retry after 2.764791752s: waiting for machine to come up
	I0927 00:36:13.556224   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:13.556671   34022 main.go:141] libmachine: (ha-631834) DBG | unable to find current IP address of domain ha-631834 in network mk-ha-631834
	I0927 00:36:13.556691   34022 main.go:141] libmachine: (ha-631834) DBG | I0927 00:36:13.556636   34045 retry.go:31] will retry after 2.903263764s: waiting for machine to come up
	I0927 00:36:16.461156   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:16.461600   34022 main.go:141] libmachine: (ha-631834) DBG | unable to find current IP address of domain ha-631834 in network mk-ha-631834
	I0927 00:36:16.461623   34022 main.go:141] libmachine: (ha-631834) DBG | I0927 00:36:16.461567   34045 retry.go:31] will retry after 4.074333009s: waiting for machine to come up
	I0927 00:36:20.540756   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:20.541254   34022 main.go:141] libmachine: (ha-631834) Found IP for machine: 192.168.39.4
	I0927 00:36:20.541349   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has current primary IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:20.541373   34022 main.go:141] libmachine: (ha-631834) Reserving static IP address...
	I0927 00:36:20.541632   34022 main.go:141] libmachine: (ha-631834) DBG | unable to find host DHCP lease matching {name: "ha-631834", mac: "52:54:00:bc:09:a5", ip: "192.168.39.4"} in network mk-ha-631834
	I0927 00:36:20.614776   34022 main.go:141] libmachine: (ha-631834) DBG | Getting to WaitForSSH function...
	I0927 00:36:20.614808   34022 main.go:141] libmachine: (ha-631834) Reserved static IP address: 192.168.39.4
	I0927 00:36:20.614821   34022 main.go:141] libmachine: (ha-631834) Waiting for SSH to be available...
	I0927 00:36:20.617249   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:20.617621   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:minikube Clientid:01:52:54:00:bc:09:a5}
	I0927 00:36:20.617669   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:20.617792   34022 main.go:141] libmachine: (ha-631834) DBG | Using SSH client type: external
	I0927 00:36:20.617816   34022 main.go:141] libmachine: (ha-631834) DBG | Using SSH private key: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834/id_rsa (-rw-------)
	I0927 00:36:20.617844   34022 main.go:141] libmachine: (ha-631834) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.4 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 00:36:20.617868   34022 main.go:141] libmachine: (ha-631834) DBG | About to run SSH command:
	I0927 00:36:20.617881   34022 main.go:141] libmachine: (ha-631834) DBG | exit 0
	I0927 00:36:20.747285   34022 main.go:141] libmachine: (ha-631834) DBG | SSH cmd err, output: <nil>: 
	I0927 00:36:20.747567   34022 main.go:141] libmachine: (ha-631834) KVM machine creation complete!
	I0927 00:36:20.747871   34022 main.go:141] libmachine: (ha-631834) Calling .GetConfigRaw
	I0927 00:36:20.748388   34022 main.go:141] libmachine: (ha-631834) Calling .DriverName
	I0927 00:36:20.748565   34022 main.go:141] libmachine: (ha-631834) Calling .DriverName
	I0927 00:36:20.748693   34022 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0927 00:36:20.748716   34022 main.go:141] libmachine: (ha-631834) Calling .GetState
	I0927 00:36:20.749749   34022 main.go:141] libmachine: Detecting operating system of created instance...
	I0927 00:36:20.749770   34022 main.go:141] libmachine: Waiting for SSH to be available...
	I0927 00:36:20.749777   34022 main.go:141] libmachine: Getting to WaitForSSH function...
	I0927 00:36:20.749785   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:36:20.751512   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:20.751780   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:36:20.751802   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:20.751906   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:36:20.752078   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:36:20.752231   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:36:20.752323   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:36:20.752604   34022 main.go:141] libmachine: Using SSH client type: native
	I0927 00:36:20.752800   34022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0927 00:36:20.752812   34022 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0927 00:36:20.862622   34022 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 00:36:20.862650   34022 main.go:141] libmachine: Detecting the provisioner...
	I0927 00:36:20.862657   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:36:20.865244   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:20.865552   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:36:20.865577   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:20.865716   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:36:20.865945   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:36:20.866143   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:36:20.866275   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:36:20.866412   34022 main.go:141] libmachine: Using SSH client type: native
	I0927 00:36:20.866570   34022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0927 00:36:20.866579   34022 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0927 00:36:20.980090   34022 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0927 00:36:20.980221   34022 main.go:141] libmachine: found compatible host: buildroot
	I0927 00:36:20.980236   34022 main.go:141] libmachine: Provisioning with buildroot...
	I0927 00:36:20.980246   34022 main.go:141] libmachine: (ha-631834) Calling .GetMachineName
	I0927 00:36:20.980486   34022 buildroot.go:166] provisioning hostname "ha-631834"
	I0927 00:36:20.980510   34022 main.go:141] libmachine: (ha-631834) Calling .GetMachineName
	I0927 00:36:20.980686   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:36:20.982900   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:20.983180   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:36:20.983205   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:20.983320   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:36:20.983483   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:36:20.983596   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:36:20.983828   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:36:20.983972   34022 main.go:141] libmachine: Using SSH client type: native
	I0927 00:36:20.984135   34022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0927 00:36:20.984146   34022 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-631834 && echo "ha-631834" | sudo tee /etc/hostname
	I0927 00:36:21.110505   34022 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-631834
	
	I0927 00:36:21.110541   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:36:21.113154   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:21.113483   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:36:21.113507   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:21.113696   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:36:21.113890   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:36:21.114053   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:36:21.114223   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:36:21.114372   34022 main.go:141] libmachine: Using SSH client type: native
	I0927 00:36:21.114529   34022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0927 00:36:21.114543   34022 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-631834' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-631834/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-631834' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 00:36:21.236395   34022 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 00:36:21.236427   34022 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19711-14935/.minikube CaCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19711-14935/.minikube}
	I0927 00:36:21.236467   34022 buildroot.go:174] setting up certificates
	I0927 00:36:21.236480   34022 provision.go:84] configureAuth start
	I0927 00:36:21.236491   34022 main.go:141] libmachine: (ha-631834) Calling .GetMachineName
	I0927 00:36:21.236728   34022 main.go:141] libmachine: (ha-631834) Calling .GetIP
	I0927 00:36:21.239154   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:21.239450   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:36:21.239489   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:21.239661   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:36:21.241898   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:21.242200   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:36:21.242217   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:21.242388   34022 provision.go:143] copyHostCerts
	I0927 00:36:21.242413   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem
	I0927 00:36:21.242453   34022 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem, removing ...
	I0927 00:36:21.242464   34022 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem
	I0927 00:36:21.242539   34022 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem (1078 bytes)
	I0927 00:36:21.242644   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem
	I0927 00:36:21.242668   34022 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem, removing ...
	I0927 00:36:21.242676   34022 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem
	I0927 00:36:21.242718   34022 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem (1123 bytes)
	I0927 00:36:21.242794   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem
	I0927 00:36:21.242826   34022 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem, removing ...
	I0927 00:36:21.242835   34022 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem
	I0927 00:36:21.242869   34022 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem (1675 bytes)
	I0927 00:36:21.242951   34022 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem org=jenkins.ha-631834 san=[127.0.0.1 192.168.39.4 ha-631834 localhost minikube]
	I0927 00:36:21.481677   34022 provision.go:177] copyRemoteCerts
	I0927 00:36:21.481751   34022 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 00:36:21.481779   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:36:21.484532   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:21.484907   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:36:21.484938   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:21.485150   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:36:21.485340   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:36:21.485466   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:36:21.485603   34022 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834/id_rsa Username:docker}
	I0927 00:36:21.574275   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0927 00:36:21.574368   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0927 00:36:21.598740   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0927 00:36:21.598797   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0927 00:36:21.622342   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0927 00:36:21.622427   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0927 00:36:21.646827   34022 provision.go:87] duration metric: took 410.33255ms to configureAuth
	I0927 00:36:21.646853   34022 buildroot.go:189] setting minikube options for container-runtime
	I0927 00:36:21.647098   34022 config.go:182] Loaded profile config "ha-631834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 00:36:21.647240   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:36:21.650164   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:21.650494   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:36:21.650526   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:21.650702   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:36:21.650908   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:36:21.651062   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:36:21.651244   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:36:21.651427   34022 main.go:141] libmachine: Using SSH client type: native
	I0927 00:36:21.651615   34022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0927 00:36:21.651635   34022 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 00:36:21.880863   34022 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
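The command and output above write a sysconfig drop-in so CRI-O treats the in-cluster service CIDR as an insecure registry, then restart the service. A purely illustrative spot-check on the guest (paths taken from the command shown above) would be:

    cat /etc/sysconfig/crio.minikube   # should contain CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    systemctl is-active crio           # "active" once the restart in the same command has completed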
	
	I0927 00:36:21.880887   34022 main.go:141] libmachine: Checking connection to Docker...
	I0927 00:36:21.880895   34022 main.go:141] libmachine: (ha-631834) Calling .GetURL
	I0927 00:36:21.882096   34022 main.go:141] libmachine: (ha-631834) DBG | Using libvirt version 6000000
	I0927 00:36:21.884523   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:21.884856   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:36:21.884898   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:21.885077   34022 main.go:141] libmachine: Docker is up and running!
	I0927 00:36:21.885091   34022 main.go:141] libmachine: Reticulating splines...
	I0927 00:36:21.885098   34022 client.go:171] duration metric: took 21.063880971s to LocalClient.Create
	I0927 00:36:21.885116   34022 start.go:167] duration metric: took 21.063936629s to libmachine.API.Create "ha-631834"
	I0927 00:36:21.885126   34022 start.go:293] postStartSetup for "ha-631834" (driver="kvm2")
	I0927 00:36:21.885144   34022 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 00:36:21.885159   34022 main.go:141] libmachine: (ha-631834) Calling .DriverName
	I0927 00:36:21.885420   34022 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 00:36:21.885488   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:36:21.887537   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:21.887790   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:36:21.887814   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:21.887928   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:36:21.888084   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:36:21.888274   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:36:21.888404   34022 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834/id_rsa Username:docker}
	I0927 00:36:21.975055   34022 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 00:36:21.979759   34022 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 00:36:21.979784   34022 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/addons for local assets ...
	I0927 00:36:21.979851   34022 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/files for local assets ...
	I0927 00:36:21.979941   34022 filesync.go:149] local asset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> 221382.pem in /etc/ssl/certs
	I0927 00:36:21.979953   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> /etc/ssl/certs/221382.pem
	I0927 00:36:21.980080   34022 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 00:36:21.990531   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /etc/ssl/certs/221382.pem (1708 bytes)
	I0927 00:36:22.014932   34022 start.go:296] duration metric: took 129.791559ms for postStartSetup
	I0927 00:36:22.015008   34022 main.go:141] libmachine: (ha-631834) Calling .GetConfigRaw
	I0927 00:36:22.015658   34022 main.go:141] libmachine: (ha-631834) Calling .GetIP
	I0927 00:36:22.018265   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:22.018611   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:36:22.018639   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:22.018899   34022 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/config.json ...
	I0927 00:36:22.019096   34022 start.go:128] duration metric: took 21.215390892s to createHost
	I0927 00:36:22.019120   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:36:22.021302   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:22.021602   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:36:22.021623   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:22.021782   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:36:22.021953   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:36:22.022148   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:36:22.022286   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:36:22.022416   34022 main.go:141] libmachine: Using SSH client type: native
	I0927 00:36:22.022581   34022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0927 00:36:22.022591   34022 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 00:36:22.136170   34022 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727397382.093993681
	
	I0927 00:36:22.136192   34022 fix.go:216] guest clock: 1727397382.093993681
	I0927 00:36:22.136202   34022 fix.go:229] Guest: 2024-09-27 00:36:22.093993681 +0000 UTC Remote: 2024-09-27 00:36:22.019107365 +0000 UTC m=+21.319607179 (delta=74.886316ms)
	I0927 00:36:22.136269   34022 fix.go:200] guest clock delta is within tolerance: 74.886316ms
	I0927 00:36:22.136280   34022 start.go:83] releasing machines lock for "ha-631834", held for 21.332646091s
	I0927 00:36:22.136304   34022 main.go:141] libmachine: (ha-631834) Calling .DriverName
	I0927 00:36:22.136563   34022 main.go:141] libmachine: (ha-631834) Calling .GetIP
	I0927 00:36:22.139383   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:22.139736   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:36:22.139759   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:22.139946   34022 main.go:141] libmachine: (ha-631834) Calling .DriverName
	I0927 00:36:22.140424   34022 main.go:141] libmachine: (ha-631834) Calling .DriverName
	I0927 00:36:22.140576   34022 main.go:141] libmachine: (ha-631834) Calling .DriverName
	I0927 00:36:22.140640   34022 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 00:36:22.140680   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:36:22.140773   34022 ssh_runner.go:195] Run: cat /version.json
	I0927 00:36:22.140798   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:36:22.143090   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:22.143433   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:36:22.143461   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:22.143480   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:22.143586   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:36:22.143765   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:36:22.143827   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:36:22.143847   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:22.143916   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:36:22.143997   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:36:22.144069   34022 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834/id_rsa Username:docker}
	I0927 00:36:22.144133   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:36:22.144262   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:36:22.144408   34022 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834/id_rsa Username:docker}
	I0927 00:36:22.243060   34022 ssh_runner.go:195] Run: systemctl --version
	I0927 00:36:22.259700   34022 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 00:36:22.415956   34022 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 00:36:22.422185   34022 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 00:36:22.422251   34022 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 00:36:22.438630   34022 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
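Only the podman bridge config was found, and the rename to a .mk_disabled suffix keeps it recoverable. An illustrative way to see which configs were parked (same directory as in the find command above):

    ls -l /etc/cni/net.d/   # expect 87-podman-bridge.conflist.mk_disabled alongside any remaining configs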
	I0927 00:36:22.438655   34022 start.go:495] detecting cgroup driver to use...
	I0927 00:36:22.438724   34022 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 00:36:22.456456   34022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 00:36:22.471488   34022 docker.go:217] disabling cri-docker service (if available) ...
	I0927 00:36:22.471543   34022 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 00:36:22.486032   34022 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 00:36:22.500571   34022 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 00:36:22.621816   34022 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 00:36:22.772846   34022 docker.go:233] disabling docker service ...
	I0927 00:36:22.772913   34022 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 00:36:22.787944   34022 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 00:36:22.801143   34022 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 00:36:22.939572   34022 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 00:36:23.057695   34022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
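At this point containerd has been stopped and both docker and cri-docker are stopped, disabled and masked, leaving CRI-O as the only candidate runtime. A hedged spot-check, using the same unit names as the commands above:

    systemctl is-enabled docker.service docker.socket cri-docker.service cri-docker.socket 2>/dev/null
    # the two .service units should report "masked", the two .socket units "disabled";
    # after the crio restart further below, `systemctl is-active crio` should be the only active runtime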
	I0927 00:36:23.072091   34022 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 00:36:23.090934   34022 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0927 00:36:23.090997   34022 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:36:23.101768   34022 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 00:36:23.101839   34022 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:36:23.112607   34022 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:36:23.122981   34022 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:36:23.133563   34022 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 00:36:23.144443   34022 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:36:23.155241   34022 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:36:23.172932   34022 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
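The run of sed commands above edits CRI-O's drop-in in place: it pins the pause image, switches the cgroup manager to cgroupfs, forces conmon into the pod cgroup, and opens unprivileged ports via default_sysctls. An illustrative grep to confirm the resulting keys (file path as in the commands above):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    # expected values: pause_image = "registry.k8s.io/pause:3.10", cgroup_manager = "cgroupfs",
    # conmon_cgroup = "pod", and "net.ipv4.ip_unprivileged_port_start=0" inside default_sysctls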
	I0927 00:36:23.184071   34022 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 00:36:23.194018   34022 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0927 00:36:23.194075   34022 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0927 00:36:23.207498   34022 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
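The failed sysctl probe above is expected on a fresh guest: br_netfilter is not loaded yet, so the key does not exist. After the modprobe and the ip_forward write, both prerequisites can be confirmed with, illustratively:

    lsmod | grep br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
    # ip_forward was just set to 1; bridge-nf-call-iptables typically defaults to 1 once the module is loaded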
	I0927 00:36:23.216852   34022 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:36:23.351326   34022 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0927 00:36:23.449204   34022 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 00:36:23.449280   34022 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 00:36:23.454200   34022 start.go:563] Will wait 60s for crictl version
	I0927 00:36:23.454262   34022 ssh_runner.go:195] Run: which crictl
	I0927 00:36:23.458028   34022 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 00:36:23.497638   34022 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0927 00:36:23.497711   34022 ssh_runner.go:195] Run: crio --version
	I0927 00:36:23.525615   34022 ssh_runner.go:195] Run: crio --version
	I0927 00:36:23.555870   34022 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0927 00:36:23.557109   34022 main.go:141] libmachine: (ha-631834) Calling .GetIP
	I0927 00:36:23.559689   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:23.559978   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:36:23.560009   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:23.560187   34022 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0927 00:36:23.564687   34022 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 00:36:23.577852   34022 kubeadm.go:883] updating cluster {Name:ha-631834 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-631834 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 00:36:23.577958   34022 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 00:36:23.578011   34022 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 00:36:23.610284   34022 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0927 00:36:23.610361   34022 ssh_runner.go:195] Run: which lz4
	I0927 00:36:23.614339   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0927 00:36:23.614430   34022 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0927 00:36:23.618714   34022 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0927 00:36:23.618740   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0927 00:36:24.972066   34022 crio.go:462] duration metric: took 1.357668477s to copy over tarball
	I0927 00:36:24.972137   34022 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0927 00:36:26.952440   34022 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.98028123s)
	I0927 00:36:26.952467   34022 crio.go:469] duration metric: took 1.9803713s to extract the tarball
	I0927 00:36:26.952477   34022 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0927 00:36:26.990046   34022 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 00:36:27.038137   34022 crio.go:514] all images are preloaded for cri-o runtime.
	I0927 00:36:27.038171   34022 cache_images.go:84] Images are preloaded, skipping loading
	I0927 00:36:27.038180   34022 kubeadm.go:934] updating node { 192.168.39.4 8443 v1.31.1 crio true true} ...
	I0927 00:36:27.038337   34022 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-631834 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-631834 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 00:36:27.038423   34022 ssh_runner.go:195] Run: crio config
	I0927 00:36:27.087406   34022 cni.go:84] Creating CNI manager for ""
	I0927 00:36:27.087427   34022 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0927 00:36:27.087436   34022 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 00:36:27.087455   34022 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.4 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-631834 NodeName:ha-631834 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0927 00:36:27.087584   34022 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.4
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-631834"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0927 00:36:27.087605   34022 kube-vip.go:115] generating kube-vip config ...
	I0927 00:36:27.087640   34022 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0927 00:36:27.104338   34022 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0927 00:36:27.104430   34022 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
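The generated static pod pins the control-plane VIP 192.168.39.254 to eth0 and enables kube-vip's load-balancer mode on port 8443. Once the manifest is in place and a leader is elected, the VIP should be visible on the leading control-plane node, for example (illustrative check):

    ip -4 addr show dev eth0 | grep 192.168.39.254   # present only on the current kube-vip leader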
	I0927 00:36:27.104475   34022 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 00:36:27.114532   34022 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 00:36:27.114597   34022 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0927 00:36:27.125576   34022 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0927 00:36:27.143174   34022 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 00:36:27.159783   34022 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0927 00:36:27.177110   34022 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0927 00:36:27.193945   34022 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0927 00:36:27.197827   34022 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 00:36:27.210366   34022 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:36:27.336946   34022 ssh_runner.go:195] Run: sudo systemctl start kubelet
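The kubelet drop-in written above (10-kubeadm.conf) overrides ExecStart with the node IP and hostname override before the unit is started. A hedged way to confirm the drop-in actually took effect, using only systemd tooling:

    systemctl cat kubelet | grep -- --node-ip   # should show --node-ip=192.168.39.4 from the drop-in
    systemctl is-active kubelet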
	I0927 00:36:27.354991   34022 certs.go:68] Setting up /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834 for IP: 192.168.39.4
	I0927 00:36:27.355012   34022 certs.go:194] generating shared ca certs ...
	I0927 00:36:27.355030   34022 certs.go:226] acquiring lock for ca certs: {Name:mkdfc5b7e93f77f5ae72cc653545624244421aa5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:36:27.355205   34022 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key
	I0927 00:36:27.355254   34022 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key
	I0927 00:36:27.355267   34022 certs.go:256] generating profile certs ...
	I0927 00:36:27.355348   34022 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/client.key
	I0927 00:36:27.355370   34022 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/client.crt with IP's: []
	I0927 00:36:27.682062   34022 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/client.crt ...
	I0927 00:36:27.682092   34022 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/client.crt: {Name:mk8f3bba10f88a791b79bb763eef9fe3f7d34390 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:36:27.682274   34022 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/client.key ...
	I0927 00:36:27.682289   34022 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/client.key: {Name:mk503d08fe6b48c31ea153960f6273dc934010ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:36:27.682389   34022 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key.1230d0d6
	I0927 00:36:27.682409   34022 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt.1230d0d6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.4 192.168.39.254]
	I0927 00:36:27.752883   34022 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt.1230d0d6 ...
	I0927 00:36:27.752911   34022 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt.1230d0d6: {Name:mka090c8b2557cb246619f729c0272d8e73ab4d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:36:27.753091   34022 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key.1230d0d6 ...
	I0927 00:36:27.753107   34022 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key.1230d0d6: {Name:mk32c435c509e1da50a9d54c9a27e1ed3da8b7fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:36:27.753219   34022 certs.go:381] copying /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt.1230d0d6 -> /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt
	I0927 00:36:27.753364   34022 certs.go:385] copying /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key.1230d0d6 -> /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key
	I0927 00:36:27.753446   34022 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.key
	I0927 00:36:27.753465   34022 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.crt with IP's: []
	I0927 00:36:27.888870   34022 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.crt ...
	I0927 00:36:27.888902   34022 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.crt: {Name:mk428f3282cdd0b71edcb5a948cacf34b7f69074 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:36:27.889093   34022 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.key ...
	I0927 00:36:27.889107   34022 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.key: {Name:mk092e7e928ba5ffe819bbe344c977ddad72812f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:36:27.889205   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0927 00:36:27.889223   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0927 00:36:27.889233   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0927 00:36:27.889246   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0927 00:36:27.889256   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0927 00:36:27.889266   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0927 00:36:27.889278   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0927 00:36:27.889288   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0927 00:36:27.889339   34022 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem (1338 bytes)
	W0927 00:36:27.889372   34022 certs.go:480] ignoring /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138_empty.pem, impossibly tiny 0 bytes
	I0927 00:36:27.889381   34022 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem (1671 bytes)
	I0927 00:36:27.889401   34022 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem (1078 bytes)
	I0927 00:36:27.889423   34022 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem (1123 bytes)
	I0927 00:36:27.889452   34022 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem (1675 bytes)
	I0927 00:36:27.889488   34022 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem (1708 bytes)
	I0927 00:36:27.889514   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> /usr/share/ca-certificates/221382.pem
	I0927 00:36:27.889528   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:36:27.889540   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem -> /usr/share/ca-certificates/22138.pem
	I0927 00:36:27.890073   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 00:36:27.915212   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0927 00:36:27.938433   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 00:36:27.961704   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0927 00:36:27.985172   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0927 00:36:28.008248   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0927 00:36:28.031157   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 00:36:28.053875   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0927 00:36:28.077746   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /usr/share/ca-certificates/221382.pem (1708 bytes)
	I0927 00:36:28.100790   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 00:36:28.126305   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem --> /usr/share/ca-certificates/22138.pem (1338 bytes)
	I0927 00:36:28.148839   34022 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 00:36:28.165086   34022 ssh_runner.go:195] Run: openssl version
	I0927 00:36:28.171319   34022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 00:36:28.183230   34022 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:36:28.187750   34022 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:16 /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:36:28.187803   34022 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:36:28.193649   34022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 00:36:28.204802   34022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22138.pem && ln -fs /usr/share/ca-certificates/22138.pem /etc/ssl/certs/22138.pem"
	I0927 00:36:28.215518   34022 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22138.pem
	I0927 00:36:28.219871   34022 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 00:32 /usr/share/ca-certificates/22138.pem
	I0927 00:36:28.219914   34022 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22138.pem
	I0927 00:36:28.225559   34022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22138.pem /etc/ssl/certs/51391683.0"
	I0927 00:36:28.236534   34022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221382.pem && ln -fs /usr/share/ca-certificates/221382.pem /etc/ssl/certs/221382.pem"
	I0927 00:36:28.247541   34022 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221382.pem
	I0927 00:36:28.251956   34022 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 00:32 /usr/share/ca-certificates/221382.pem
	I0927 00:36:28.252002   34022 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221382.pem
	I0927 00:36:28.257569   34022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221382.pem /etc/ssl/certs/3ec20f2e.0"
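The three blocks above follow the standard OpenSSL CA-path convention: each certificate is linked into /etc/ssl/certs both under its own name and under its subject hash with a .0 suffix (b5213941.0, 51391683.0, 3ec20f2e.0). The hash-named link can be reproduced by hand, for example for the minikube CA (illustrative):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
    openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem   # OK once the hash link resolves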
	I0927 00:36:28.268557   34022 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 00:36:28.272624   34022 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0927 00:36:28.272681   34022 kubeadm.go:392] StartCluster: {Name:ha-631834 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-631834 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 00:36:28.272765   34022 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0927 00:36:28.272803   34022 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 00:36:28.310788   34022 cri.go:89] found id: ""
	I0927 00:36:28.310863   34022 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0927 00:36:28.321240   34022 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 00:36:28.331038   34022 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 00:36:28.340878   34022 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 00:36:28.340897   34022 kubeadm.go:157] found existing configuration files:
	
	I0927 00:36:28.340934   34022 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 00:36:28.350170   34022 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 00:36:28.350236   34022 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 00:36:28.359911   34022 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 00:36:28.369100   34022 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 00:36:28.369152   34022 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 00:36:28.378846   34022 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 00:36:28.388020   34022 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 00:36:28.388070   34022 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 00:36:28.397520   34022 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 00:36:28.406575   34022 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 00:36:28.406618   34022 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 00:36:28.415973   34022 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0927 00:36:28.517602   34022 kubeadm.go:310] W0927 00:36:28.474729     830 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 00:36:28.518499   34022 kubeadm.go:310] W0927 00:36:28.475845     830 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 00:36:28.620411   34022 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0927 00:36:39.196766   34022 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0927 00:36:39.196817   34022 kubeadm.go:310] [preflight] Running pre-flight checks
	I0927 00:36:39.196897   34022 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0927 00:36:39.197042   34022 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0927 00:36:39.197146   34022 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0927 00:36:39.197242   34022 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0927 00:36:39.198695   34022 out.go:235]   - Generating certificates and keys ...
	I0927 00:36:39.198783   34022 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0927 00:36:39.198874   34022 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0927 00:36:39.198967   34022 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0927 00:36:39.199046   34022 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0927 00:36:39.199135   34022 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0927 00:36:39.199205   34022 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0927 00:36:39.199287   34022 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0927 00:36:39.199453   34022 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-631834 localhost] and IPs [192.168.39.4 127.0.0.1 ::1]
	I0927 00:36:39.199543   34022 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0927 00:36:39.199699   34022 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-631834 localhost] and IPs [192.168.39.4 127.0.0.1 ::1]
	I0927 00:36:39.199796   34022 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0927 00:36:39.199890   34022 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0927 00:36:39.199953   34022 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0927 00:36:39.200035   34022 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0927 00:36:39.200121   34022 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0927 00:36:39.200212   34022 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0927 00:36:39.200291   34022 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0927 00:36:39.200372   34022 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0927 00:36:39.200439   34022 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0927 00:36:39.200531   34022 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0927 00:36:39.200632   34022 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0927 00:36:39.202948   34022 out.go:235]   - Booting up control plane ...
	I0927 00:36:39.203043   34022 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0927 00:36:39.203122   34022 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0927 00:36:39.203192   34022 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0927 00:36:39.203290   34022 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0927 00:36:39.203381   34022 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0927 00:36:39.203419   34022 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0927 00:36:39.203571   34022 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0927 00:36:39.203689   34022 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0927 00:36:39.203745   34022 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.136312ms
	I0927 00:36:39.203833   34022 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0927 00:36:39.203916   34022 kubeadm.go:310] [api-check] The API server is healthy after 5.885001913s
	I0927 00:36:39.204050   34022 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0927 00:36:39.204208   34022 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0927 00:36:39.204298   34022 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0927 00:36:39.204479   34022 kubeadm.go:310] [mark-control-plane] Marking the node ha-631834 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0927 00:36:39.204542   34022 kubeadm.go:310] [bootstrap-token] Using token: a2inhk.us1mqrkt01ocu6ik
	I0927 00:36:39.205835   34022 out.go:235]   - Configuring RBAC rules ...
	I0927 00:36:39.205939   34022 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0927 00:36:39.206027   34022 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0927 00:36:39.206203   34022 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0927 00:36:39.206359   34022 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0927 00:36:39.206513   34022 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0927 00:36:39.206623   34022 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0927 00:36:39.206783   34022 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0927 00:36:39.206841   34022 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0927 00:36:39.206903   34022 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0927 00:36:39.206913   34022 kubeadm.go:310] 
	I0927 00:36:39.206990   34022 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0927 00:36:39.207004   34022 kubeadm.go:310] 
	I0927 00:36:39.207128   34022 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0927 00:36:39.207138   34022 kubeadm.go:310] 
	I0927 00:36:39.207188   34022 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0927 00:36:39.207263   34022 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0927 00:36:39.207324   34022 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0927 00:36:39.207333   34022 kubeadm.go:310] 
	I0927 00:36:39.207377   34022 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0927 00:36:39.207383   34022 kubeadm.go:310] 
	I0927 00:36:39.207423   34022 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0927 00:36:39.207429   34022 kubeadm.go:310] 
	I0927 00:36:39.207471   34022 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0927 00:36:39.207543   34022 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0927 00:36:39.207603   34022 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0927 00:36:39.207611   34022 kubeadm.go:310] 
	I0927 00:36:39.207679   34022 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0927 00:36:39.207747   34022 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0927 00:36:39.207752   34022 kubeadm.go:310] 
	I0927 00:36:39.207858   34022 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token a2inhk.us1mqrkt01ocu6ik \
	I0927 00:36:39.207978   34022 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e8f2b64245b2533f2f6907f851e4fd61c8df67374e05cb5be25be73a61920f8e \
	I0927 00:36:39.208009   34022 kubeadm.go:310] 	--control-plane 
	I0927 00:36:39.208024   34022 kubeadm.go:310] 
	I0927 00:36:39.208133   34022 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0927 00:36:39.208140   34022 kubeadm.go:310] 
	I0927 00:36:39.208217   34022 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token a2inhk.us1mqrkt01ocu6ik \
	I0927 00:36:39.208329   34022 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e8f2b64245b2533f2f6907f851e4fd61c8df67374e05cb5be25be73a61920f8e 
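	(Note: the --discovery-token-ca-cert-hash value in the join commands above is a SHA-256 over the DER-encoded Subject Public Key Info of the cluster CA certificate. A minimal Go sketch to recompute it on the control-plane host follows; it is not minikube or kubeadm code, and the ca.crt path is the kubeadm default, assumed here:)

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// kubeadm's --discovery-token-ca-cert-hash is sha256 over the DER-encoded
	// SubjectPublicKeyInfo of the cluster CA certificate.
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt") // kubeadm default path (assumed)
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(spki)
	fmt.Printf("sha256:%s\n", hex.EncodeToString(sum[:]))
}

	(If the printed value matches the hash in the join commands, the token line corresponds to the CA the running cluster actually uses.)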
	I0927 00:36:39.208342   34022 cni.go:84] Creating CNI manager for ""
	I0927 00:36:39.208348   34022 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0927 00:36:39.209742   34022 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0927 00:36:39.210824   34022 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0927 00:36:39.216482   34022 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0927 00:36:39.216498   34022 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0927 00:36:39.238534   34022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0927 00:36:39.596628   34022 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0927 00:36:39.596683   34022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:36:39.596724   34022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-631834 minikube.k8s.io/updated_at=2024_09_27T00_36_39_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625 minikube.k8s.io/name=ha-631834 minikube.k8s.io/primary=true
	I0927 00:36:39.626142   34022 ops.go:34] apiserver oom_adj: -16
	I0927 00:36:39.790024   34022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:36:40.291013   34022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:36:40.790408   34022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:36:41.290433   34022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:36:41.790624   34022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:36:42.290399   34022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:36:42.790081   34022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:36:43.290106   34022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:36:43.383411   34022 kubeadm.go:1113] duration metric: took 3.786772854s to wait for elevateKubeSystemPrivileges
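	(Note: the repeated "kubectl get sa default" runs above are minikube waiting for the default ServiceAccount to exist before applying elevated kube-system RBAC. A hedged client-go sketch of the same wait; the namespace, kubeconfig path, and 500ms/2m bounds are assumptions for illustration, not minikube's code:)

package main

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForDefaultSA polls until the "default" ServiceAccount exists in the
// "default" namespace, roughly what the repeated "kubectl get sa default"
// calls in the log accomplish.
func waitForDefaultSA(kubeconfig string) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	return wait.PollImmediate(500*time.Millisecond, 2*time.Minute, func() (bool, error) {
		_, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
		return err == nil, nil // keep polling on any error; the SA usually appears within seconds
	})
}

func main() {
	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig"); err != nil {
		panic(err)
	}
}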
	I0927 00:36:43.383449   34022 kubeadm.go:394] duration metric: took 15.110773171s to StartCluster
	I0927 00:36:43.383466   34022 settings.go:142] acquiring lock: {Name:mk5dca3ab86dd3a71947d9d84c3d32131258c6f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:36:43.383525   34022 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 00:36:43.384159   34022 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/kubeconfig: {Name:mke01ed683bdb96463571316956510763878395f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:36:43.384353   34022 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0927 00:36:43.384357   34022 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 00:36:43.384379   34022 start.go:241] waiting for startup goroutines ...
	I0927 00:36:43.384387   34022 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0927 00:36:43.384482   34022 addons.go:69] Setting storage-provisioner=true in profile "ha-631834"
	I0927 00:36:43.384503   34022 addons.go:234] Setting addon storage-provisioner=true in "ha-631834"
	I0927 00:36:43.384502   34022 addons.go:69] Setting default-storageclass=true in profile "ha-631834"
	I0927 00:36:43.384521   34022 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-631834"
	I0927 00:36:43.384535   34022 host.go:66] Checking if "ha-631834" exists ...
	I0927 00:36:43.384567   34022 config.go:182] Loaded profile config "ha-631834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 00:36:43.384839   34022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:36:43.384866   34022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:36:43.384944   34022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:36:43.384960   34022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:36:43.399817   34022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33427
	I0927 00:36:43.399897   34022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46299
	I0927 00:36:43.400293   34022 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:36:43.400363   34022 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:36:43.400865   34022 main.go:141] libmachine: Using API Version  1
	I0927 00:36:43.400886   34022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:36:43.401031   34022 main.go:141] libmachine: Using API Version  1
	I0927 00:36:43.401063   34022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:36:43.401250   34022 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:36:43.401432   34022 main.go:141] libmachine: (ha-631834) Calling .GetState
	I0927 00:36:43.401539   34022 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:36:43.402075   34022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:36:43.402108   34022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:36:43.403551   34022 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 00:36:43.403892   34022 kapi.go:59] client config for ha-631834: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/client.crt", KeyFile:"/home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/client.key", CAFile:"/home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f68560), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0927 00:36:43.404454   34022 cert_rotation.go:140] Starting client certificate rotation controller
	I0927 00:36:43.404728   34022 addons.go:234] Setting addon default-storageclass=true in "ha-631834"
	I0927 00:36:43.404772   34022 host.go:66] Checking if "ha-631834" exists ...
	I0927 00:36:43.405147   34022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:36:43.405179   34022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:36:43.417112   34022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44963
	I0927 00:36:43.417520   34022 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:36:43.418127   34022 main.go:141] libmachine: Using API Version  1
	I0927 00:36:43.418155   34022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:36:43.418477   34022 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:36:43.418681   34022 main.go:141] libmachine: (ha-631834) Calling .GetState
	I0927 00:36:43.419924   34022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46293
	I0927 00:36:43.420288   34022 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:36:43.420380   34022 main.go:141] libmachine: (ha-631834) Calling .DriverName
	I0927 00:36:43.420672   34022 main.go:141] libmachine: Using API Version  1
	I0927 00:36:43.420688   34022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:36:43.420969   34022 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:36:43.421504   34022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:36:43.421551   34022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:36:43.422256   34022 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 00:36:43.423360   34022 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 00:36:43.423375   34022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0927 00:36:43.423389   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:36:43.426316   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:43.426764   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:36:43.426778   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:43.426969   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:36:43.427109   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:36:43.427219   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:36:43.427355   34022 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834/id_rsa Username:docker}
	I0927 00:36:43.435962   34022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43071
	I0927 00:36:43.436362   34022 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:36:43.436730   34022 main.go:141] libmachine: Using API Version  1
	I0927 00:36:43.436746   34022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:36:43.437076   34022 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:36:43.437260   34022 main.go:141] libmachine: (ha-631834) Calling .GetState
	I0927 00:36:43.438594   34022 main.go:141] libmachine: (ha-631834) Calling .DriverName
	I0927 00:36:43.438749   34022 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0927 00:36:43.438763   34022 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0927 00:36:43.438784   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:36:43.441264   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:43.441750   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:36:43.441794   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:36:43.441824   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:43.441923   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:36:43.442101   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:36:43.442225   34022 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834/id_rsa Username:docker}
	I0927 00:36:43.549239   34022 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 00:36:43.572279   34022 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0927 00:36:43.662399   34022 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0927 00:36:44.397951   34022 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
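	(Note: the sed pipeline a few lines above rewrites the CoreDNS Corefile to add a hosts{} stanza mapping host.minikube.internal to the host gateway IP, immediately before the forward directive. A rough Go sketch of the same string edit, not minikube's code and with simplified matching:)

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts{} stanza resolving host.minikube.internal
// immediately before the "forward . /etc/resolv.conf" line of a Corefile.
// Matching is simplified compared to the sed expression in the log above.
func injectHostRecord(corefile, hostIP string) string {
	var out []string
	for _, line := range strings.Split(corefile, "\n") {
		if strings.Contains(line, "forward . /etc/resolv.conf") {
			out = append(out,
				"        hosts {",
				"           "+hostIP+" host.minikube.internal",
				"           fallthrough",
				"        }")
		}
		out = append(out, line)
	}
	return strings.Join(out, "\n")
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n}"
	fmt.Println(injectHostRecord(corefile, "192.168.39.1"))
}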
	I0927 00:36:44.398036   34022 main.go:141] libmachine: Making call to close driver server
	I0927 00:36:44.398060   34022 main.go:141] libmachine: (ha-631834) Calling .Close
	I0927 00:36:44.398143   34022 main.go:141] libmachine: Making call to close driver server
	I0927 00:36:44.398170   34022 main.go:141] libmachine: (ha-631834) Calling .Close
	I0927 00:36:44.398344   34022 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:36:44.398359   34022 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:36:44.398368   34022 main.go:141] libmachine: Making call to close driver server
	I0927 00:36:44.398374   34022 main.go:141] libmachine: (ha-631834) Calling .Close
	I0927 00:36:44.398388   34022 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:36:44.398402   34022 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:36:44.398409   34022 main.go:141] libmachine: Making call to close driver server
	I0927 00:36:44.398416   34022 main.go:141] libmachine: (ha-631834) Calling .Close
	I0927 00:36:44.398649   34022 main.go:141] libmachine: (ha-631834) DBG | Closing plugin on server side
	I0927 00:36:44.398666   34022 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:36:44.398675   34022 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:36:44.398678   34022 main.go:141] libmachine: (ha-631834) DBG | Closing plugin on server side
	I0927 00:36:44.398694   34022 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:36:44.398708   34022 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:36:44.398760   34022 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0927 00:36:44.398784   34022 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0927 00:36:44.398889   34022 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0927 00:36:44.398901   34022 round_trippers.go:469] Request Headers:
	I0927 00:36:44.398911   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:36:44.398920   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:36:44.417589   34022 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0927 00:36:44.418067   34022 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0927 00:36:44.418079   34022 round_trippers.go:469] Request Headers:
	I0927 00:36:44.418087   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:36:44.418091   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:36:44.418095   34022 round_trippers.go:473]     Content-Type: application/json
	I0927 00:36:44.420490   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:36:44.420636   34022 main.go:141] libmachine: Making call to close driver server
	I0927 00:36:44.420647   34022 main.go:141] libmachine: (ha-631834) Calling .Close
	I0927 00:36:44.420904   34022 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:36:44.420921   34022 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:36:44.422479   34022 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0927 00:36:44.423550   34022 addons.go:510] duration metric: took 1.039159873s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0927 00:36:44.423595   34022 start.go:246] waiting for cluster config update ...
	I0927 00:36:44.423613   34022 start.go:255] writing updated cluster config ...
	I0927 00:36:44.425272   34022 out.go:201] 
	I0927 00:36:44.426803   34022 config.go:182] Loaded profile config "ha-631834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 00:36:44.426894   34022 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/config.json ...
	I0927 00:36:44.428362   34022 out.go:177] * Starting "ha-631834-m02" control-plane node in "ha-631834" cluster
	I0927 00:36:44.429446   34022 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 00:36:44.429473   34022 cache.go:56] Caching tarball of preloaded images
	I0927 00:36:44.429577   34022 preload.go:172] Found /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0927 00:36:44.429598   34022 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0927 00:36:44.429705   34022 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/config.json ...
	I0927 00:36:44.429910   34022 start.go:360] acquireMachinesLock for ha-631834-m02: {Name:mkef866a3f2eb57063758ef06140cca3a2e40b18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 00:36:44.429964   34022 start.go:364] duration metric: took 31.862µs to acquireMachinesLock for "ha-631834-m02"
	I0927 00:36:44.429988   34022 start.go:93] Provisioning new machine with config: &{Name:ha-631834 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-631834 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 00:36:44.430077   34022 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0927 00:36:44.431533   34022 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0927 00:36:44.431627   34022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:36:44.431667   34022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:36:44.446949   34022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37663
	I0927 00:36:44.447487   34022 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:36:44.447999   34022 main.go:141] libmachine: Using API Version  1
	I0927 00:36:44.448029   34022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:36:44.448325   34022 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:36:44.448539   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetMachineName
	I0927 00:36:44.448658   34022 main.go:141] libmachine: (ha-631834-m02) Calling .DriverName
	I0927 00:36:44.448816   34022 start.go:159] libmachine.API.Create for "ha-631834" (driver="kvm2")
	I0927 00:36:44.448842   34022 client.go:168] LocalClient.Create starting
	I0927 00:36:44.448876   34022 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem
	I0927 00:36:44.448913   34022 main.go:141] libmachine: Decoding PEM data...
	I0927 00:36:44.448937   34022 main.go:141] libmachine: Parsing certificate...
	I0927 00:36:44.449007   34022 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem
	I0927 00:36:44.449034   34022 main.go:141] libmachine: Decoding PEM data...
	I0927 00:36:44.449049   34022 main.go:141] libmachine: Parsing certificate...
	I0927 00:36:44.449076   34022 main.go:141] libmachine: Running pre-create checks...
	I0927 00:36:44.449088   34022 main.go:141] libmachine: (ha-631834-m02) Calling .PreCreateCheck
	I0927 00:36:44.449246   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetConfigRaw
	I0927 00:36:44.449638   34022 main.go:141] libmachine: Creating machine...
	I0927 00:36:44.449653   34022 main.go:141] libmachine: (ha-631834-m02) Calling .Create
	I0927 00:36:44.449792   34022 main.go:141] libmachine: (ha-631834-m02) Creating KVM machine...
	I0927 00:36:44.451021   34022 main.go:141] libmachine: (ha-631834-m02) DBG | found existing default KVM network
	I0927 00:36:44.451178   34022 main.go:141] libmachine: (ha-631834-m02) DBG | found existing private KVM network mk-ha-631834
	I0927 00:36:44.451353   34022 main.go:141] libmachine: (ha-631834-m02) Setting up store path in /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m02 ...
	I0927 00:36:44.451372   34022 main.go:141] libmachine: (ha-631834-m02) Building disk image from file:///home/jenkins/minikube-integration/19711-14935/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0927 00:36:44.451445   34022 main.go:141] libmachine: (ha-631834-m02) DBG | I0927 00:36:44.451350   34386 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19711-14935/.minikube
	I0927 00:36:44.451537   34022 main.go:141] libmachine: (ha-631834-m02) Downloading /home/jenkins/minikube-integration/19711-14935/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19711-14935/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0927 00:36:44.687379   34022 main.go:141] libmachine: (ha-631834-m02) DBG | I0927 00:36:44.687222   34386 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m02/id_rsa...
	I0927 00:36:44.751062   34022 main.go:141] libmachine: (ha-631834-m02) DBG | I0927 00:36:44.750967   34386 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m02/ha-631834-m02.rawdisk...
	I0927 00:36:44.751087   34022 main.go:141] libmachine: (ha-631834-m02) DBG | Writing magic tar header
	I0927 00:36:44.751100   34022 main.go:141] libmachine: (ha-631834-m02) DBG | Writing SSH key tar header
	I0927 00:36:44.751178   34022 main.go:141] libmachine: (ha-631834-m02) DBG | I0927 00:36:44.751110   34386 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m02 ...
	I0927 00:36:44.751293   34022 main.go:141] libmachine: (ha-631834-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m02
	I0927 00:36:44.751324   34022 main.go:141] libmachine: (ha-631834-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935/.minikube/machines
	I0927 00:36:44.751344   34022 main.go:141] libmachine: (ha-631834-m02) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m02 (perms=drwx------)
	I0927 00:36:44.751365   34022 main.go:141] libmachine: (ha-631834-m02) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935/.minikube/machines (perms=drwxr-xr-x)
	I0927 00:36:44.751378   34022 main.go:141] libmachine: (ha-631834-m02) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935/.minikube (perms=drwxr-xr-x)
	I0927 00:36:44.751392   34022 main.go:141] libmachine: (ha-631834-m02) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935 (perms=drwxrwxr-x)
	I0927 00:36:44.751400   34022 main.go:141] libmachine: (ha-631834-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0927 00:36:44.751408   34022 main.go:141] libmachine: (ha-631834-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935/.minikube
	I0927 00:36:44.751425   34022 main.go:141] libmachine: (ha-631834-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935
	I0927 00:36:44.751434   34022 main.go:141] libmachine: (ha-631834-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0927 00:36:44.751446   34022 main.go:141] libmachine: (ha-631834-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0927 00:36:44.751456   34022 main.go:141] libmachine: (ha-631834-m02) DBG | Checking permissions on dir: /home/jenkins
	I0927 00:36:44.751467   34022 main.go:141] libmachine: (ha-631834-m02) DBG | Checking permissions on dir: /home
	I0927 00:36:44.751479   34022 main.go:141] libmachine: (ha-631834-m02) DBG | Skipping /home - not owner
	I0927 00:36:44.751504   34022 main.go:141] libmachine: (ha-631834-m02) Creating domain...
	I0927 00:36:44.752461   34022 main.go:141] libmachine: (ha-631834-m02) define libvirt domain using xml: 
	I0927 00:36:44.752482   34022 main.go:141] libmachine: (ha-631834-m02) <domain type='kvm'>
	I0927 00:36:44.752492   34022 main.go:141] libmachine: (ha-631834-m02)   <name>ha-631834-m02</name>
	I0927 00:36:44.752511   34022 main.go:141] libmachine: (ha-631834-m02)   <memory unit='MiB'>2200</memory>
	I0927 00:36:44.752523   34022 main.go:141] libmachine: (ha-631834-m02)   <vcpu>2</vcpu>
	I0927 00:36:44.752535   34022 main.go:141] libmachine: (ha-631834-m02)   <features>
	I0927 00:36:44.752546   34022 main.go:141] libmachine: (ha-631834-m02)     <acpi/>
	I0927 00:36:44.752559   34022 main.go:141] libmachine: (ha-631834-m02)     <apic/>
	I0927 00:36:44.752569   34022 main.go:141] libmachine: (ha-631834-m02)     <pae/>
	I0927 00:36:44.752577   34022 main.go:141] libmachine: (ha-631834-m02)     
	I0927 00:36:44.752583   34022 main.go:141] libmachine: (ha-631834-m02)   </features>
	I0927 00:36:44.752589   34022 main.go:141] libmachine: (ha-631834-m02)   <cpu mode='host-passthrough'>
	I0927 00:36:44.752594   34022 main.go:141] libmachine: (ha-631834-m02)   
	I0927 00:36:44.752600   34022 main.go:141] libmachine: (ha-631834-m02)   </cpu>
	I0927 00:36:44.752605   34022 main.go:141] libmachine: (ha-631834-m02)   <os>
	I0927 00:36:44.752611   34022 main.go:141] libmachine: (ha-631834-m02)     <type>hvm</type>
	I0927 00:36:44.752616   34022 main.go:141] libmachine: (ha-631834-m02)     <boot dev='cdrom'/>
	I0927 00:36:44.752620   34022 main.go:141] libmachine: (ha-631834-m02)     <boot dev='hd'/>
	I0927 00:36:44.752628   34022 main.go:141] libmachine: (ha-631834-m02)     <bootmenu enable='no'/>
	I0927 00:36:44.752632   34022 main.go:141] libmachine: (ha-631834-m02)   </os>
	I0927 00:36:44.752654   34022 main.go:141] libmachine: (ha-631834-m02)   <devices>
	I0927 00:36:44.752673   34022 main.go:141] libmachine: (ha-631834-m02)     <disk type='file' device='cdrom'>
	I0927 00:36:44.752682   34022 main.go:141] libmachine: (ha-631834-m02)       <source file='/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m02/boot2docker.iso'/>
	I0927 00:36:44.752691   34022 main.go:141] libmachine: (ha-631834-m02)       <target dev='hdc' bus='scsi'/>
	I0927 00:36:44.752724   34022 main.go:141] libmachine: (ha-631834-m02)       <readonly/>
	I0927 00:36:44.752759   34022 main.go:141] libmachine: (ha-631834-m02)     </disk>
	I0927 00:36:44.752770   34022 main.go:141] libmachine: (ha-631834-m02)     <disk type='file' device='disk'>
	I0927 00:36:44.752786   34022 main.go:141] libmachine: (ha-631834-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0927 00:36:44.752803   34022 main.go:141] libmachine: (ha-631834-m02)       <source file='/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m02/ha-631834-m02.rawdisk'/>
	I0927 00:36:44.752813   34022 main.go:141] libmachine: (ha-631834-m02)       <target dev='hda' bus='virtio'/>
	I0927 00:36:44.752824   34022 main.go:141] libmachine: (ha-631834-m02)     </disk>
	I0927 00:36:44.752834   34022 main.go:141] libmachine: (ha-631834-m02)     <interface type='network'>
	I0927 00:36:44.752846   34022 main.go:141] libmachine: (ha-631834-m02)       <source network='mk-ha-631834'/>
	I0927 00:36:44.752860   34022 main.go:141] libmachine: (ha-631834-m02)       <model type='virtio'/>
	I0927 00:36:44.752870   34022 main.go:141] libmachine: (ha-631834-m02)     </interface>
	I0927 00:36:44.752876   34022 main.go:141] libmachine: (ha-631834-m02)     <interface type='network'>
	I0927 00:36:44.752888   34022 main.go:141] libmachine: (ha-631834-m02)       <source network='default'/>
	I0927 00:36:44.752898   34022 main.go:141] libmachine: (ha-631834-m02)       <model type='virtio'/>
	I0927 00:36:44.752907   34022 main.go:141] libmachine: (ha-631834-m02)     </interface>
	I0927 00:36:44.752917   34022 main.go:141] libmachine: (ha-631834-m02)     <serial type='pty'>
	I0927 00:36:44.752929   34022 main.go:141] libmachine: (ha-631834-m02)       <target port='0'/>
	I0927 00:36:44.752939   34022 main.go:141] libmachine: (ha-631834-m02)     </serial>
	I0927 00:36:44.752949   34022 main.go:141] libmachine: (ha-631834-m02)     <console type='pty'>
	I0927 00:36:44.752960   34022 main.go:141] libmachine: (ha-631834-m02)       <target type='serial' port='0'/>
	I0927 00:36:44.752971   34022 main.go:141] libmachine: (ha-631834-m02)     </console>
	I0927 00:36:44.752984   34022 main.go:141] libmachine: (ha-631834-m02)     <rng model='virtio'>
	I0927 00:36:44.753001   34022 main.go:141] libmachine: (ha-631834-m02)       <backend model='random'>/dev/random</backend>
	I0927 00:36:44.753018   34022 main.go:141] libmachine: (ha-631834-m02)     </rng>
	I0927 00:36:44.753035   34022 main.go:141] libmachine: (ha-631834-m02)     
	I0927 00:36:44.753047   34022 main.go:141] libmachine: (ha-631834-m02)     
	I0927 00:36:44.753059   34022 main.go:141] libmachine: (ha-631834-m02)   </devices>
	I0927 00:36:44.753068   34022 main.go:141] libmachine: (ha-631834-m02) </domain>
	I0927 00:36:44.753080   34022 main.go:141] libmachine: (ha-631834-m02) 
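	(Note: the XML above is the domain definition the kvm2 driver hands to libvirt. A minimal sketch of defining and booting such a domain, assuming the libvirt.org/go/libvirt bindings and that the XML has been saved to a local file; this is not the kvm2 driver's code:)

package main

import (
	"log"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// The domain XML shown above, written to a file beforehand (path assumed).
	xml, err := os.ReadFile("ha-631834-m02.xml")
	if err != nil {
		log.Fatal(err)
	}
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	// Define the persistent domain from XML, then boot it.
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()
	if err := dom.Create(); err != nil {
		log.Fatal(err)
	}
	log.Println("domain defined and started")
}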
	I0927 00:36:44.759470   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:b2:c3:d6 in network default
	I0927 00:36:44.759943   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:36:44.759962   34022 main.go:141] libmachine: (ha-631834-m02) Ensuring networks are active...
	I0927 00:36:44.760578   34022 main.go:141] libmachine: (ha-631834-m02) Ensuring network default is active
	I0927 00:36:44.760849   34022 main.go:141] libmachine: (ha-631834-m02) Ensuring network mk-ha-631834 is active
	I0927 00:36:44.761213   34022 main.go:141] libmachine: (ha-631834-m02) Getting domain xml...
	I0927 00:36:44.761860   34022 main.go:141] libmachine: (ha-631834-m02) Creating domain...
	I0927 00:36:45.965093   34022 main.go:141] libmachine: (ha-631834-m02) Waiting to get IP...
	I0927 00:36:45.965811   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:36:45.966210   34022 main.go:141] libmachine: (ha-631834-m02) DBG | unable to find current IP address of domain ha-631834-m02 in network mk-ha-631834
	I0927 00:36:45.966250   34022 main.go:141] libmachine: (ha-631834-m02) DBG | I0927 00:36:45.966193   34386 retry.go:31] will retry after 219.366954ms: waiting for machine to come up
	I0927 00:36:46.187549   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:36:46.188001   34022 main.go:141] libmachine: (ha-631834-m02) DBG | unable to find current IP address of domain ha-631834-m02 in network mk-ha-631834
	I0927 00:36:46.188031   34022 main.go:141] libmachine: (ha-631834-m02) DBG | I0927 00:36:46.187959   34386 retry.go:31] will retry after 344.351684ms: waiting for machine to come up
	I0927 00:36:46.533384   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:36:46.533893   34022 main.go:141] libmachine: (ha-631834-m02) DBG | unable to find current IP address of domain ha-631834-m02 in network mk-ha-631834
	I0927 00:36:46.533918   34022 main.go:141] libmachine: (ha-631834-m02) DBG | I0927 00:36:46.533845   34386 retry.go:31] will retry after 436.44682ms: waiting for machine to come up
	I0927 00:36:46.971366   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:36:46.971845   34022 main.go:141] libmachine: (ha-631834-m02) DBG | unable to find current IP address of domain ha-631834-m02 in network mk-ha-631834
	I0927 00:36:46.971881   34022 main.go:141] libmachine: (ha-631834-m02) DBG | I0927 00:36:46.971792   34386 retry.go:31] will retry after 518.722723ms: waiting for machine to come up
	I0927 00:36:47.492370   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:36:47.492814   34022 main.go:141] libmachine: (ha-631834-m02) DBG | unable to find current IP address of domain ha-631834-m02 in network mk-ha-631834
	I0927 00:36:47.492836   34022 main.go:141] libmachine: (ha-631834-m02) DBG | I0927 00:36:47.492761   34386 retry.go:31] will retry after 458.476026ms: waiting for machine to come up
	I0927 00:36:47.952367   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:36:47.952947   34022 main.go:141] libmachine: (ha-631834-m02) DBG | unable to find current IP address of domain ha-631834-m02 in network mk-ha-631834
	I0927 00:36:47.952968   34022 main.go:141] libmachine: (ha-631834-m02) DBG | I0927 00:36:47.952905   34386 retry.go:31] will retry after 873.835695ms: waiting for machine to come up
	I0927 00:36:48.827782   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:36:48.828192   34022 main.go:141] libmachine: (ha-631834-m02) DBG | unable to find current IP address of domain ha-631834-m02 in network mk-ha-631834
	I0927 00:36:48.828221   34022 main.go:141] libmachine: (ha-631834-m02) DBG | I0927 00:36:48.828139   34386 retry.go:31] will retry after 1.00855597s: waiting for machine to come up
	I0927 00:36:49.838599   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:36:49.838959   34022 main.go:141] libmachine: (ha-631834-m02) DBG | unable to find current IP address of domain ha-631834-m02 in network mk-ha-631834
	I0927 00:36:49.838982   34022 main.go:141] libmachine: (ha-631834-m02) DBG | I0927 00:36:49.838927   34386 retry.go:31] will retry after 1.38923332s: waiting for machine to come up
	I0927 00:36:51.230578   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:36:51.231036   34022 main.go:141] libmachine: (ha-631834-m02) DBG | unable to find current IP address of domain ha-631834-m02 in network mk-ha-631834
	I0927 00:36:51.231061   34022 main.go:141] libmachine: (ha-631834-m02) DBG | I0927 00:36:51.231006   34386 retry.go:31] will retry after 1.140830763s: waiting for machine to come up
	I0927 00:36:52.373231   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:36:52.373666   34022 main.go:141] libmachine: (ha-631834-m02) DBG | unable to find current IP address of domain ha-631834-m02 in network mk-ha-631834
	I0927 00:36:52.373692   34022 main.go:141] libmachine: (ha-631834-m02) DBG | I0927 00:36:52.373621   34386 retry.go:31] will retry after 2.064225387s: waiting for machine to come up
	I0927 00:36:54.440421   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:36:54.440877   34022 main.go:141] libmachine: (ha-631834-m02) DBG | unable to find current IP address of domain ha-631834-m02 in network mk-ha-631834
	I0927 00:36:54.440901   34022 main.go:141] libmachine: (ha-631834-m02) DBG | I0927 00:36:54.440817   34386 retry.go:31] will retry after 2.699234582s: waiting for machine to come up
	I0927 00:36:57.141531   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:36:57.141923   34022 main.go:141] libmachine: (ha-631834-m02) DBG | unable to find current IP address of domain ha-631834-m02 in network mk-ha-631834
	I0927 00:36:57.141944   34022 main.go:141] libmachine: (ha-631834-m02) DBG | I0927 00:36:57.141879   34386 retry.go:31] will retry after 2.876736711s: waiting for machine to come up
	I0927 00:37:00.019979   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:00.020397   34022 main.go:141] libmachine: (ha-631834-m02) DBG | unable to find current IP address of domain ha-631834-m02 in network mk-ha-631834
	I0927 00:37:00.020415   34022 main.go:141] libmachine: (ha-631834-m02) DBG | I0927 00:37:00.020358   34386 retry.go:31] will retry after 2.739686124s: waiting for machine to come up
	I0927 00:37:02.761974   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:02.762423   34022 main.go:141] libmachine: (ha-631834-m02) DBG | unable to find current IP address of domain ha-631834-m02 in network mk-ha-631834
	I0927 00:37:02.762478   34022 main.go:141] libmachine: (ha-631834-m02) DBG | I0927 00:37:02.762348   34386 retry.go:31] will retry after 3.780270458s: waiting for machine to come up
	I0927 00:37:06.544970   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:06.545486   34022 main.go:141] libmachine: (ha-631834-m02) Found IP for machine: 192.168.39.184
	I0927 00:37:06.545515   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has current primary IP address 192.168.39.184 and MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
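	(Note: the run of "will retry after ...: waiting for machine to come up" lines above comes from a jittered backoff loop that polls the network's DHCP leases until the new domain has an IP. A generic sketch of that retry pattern, not minikube's retry.go; the intervals, growth factor, and jitter are illustrative assumptions:)

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryUntil polls check with a growing, jittered delay until it reports
// success or the overall timeout elapses.
func retryUntil(timeout time.Duration, check func() (bool, error)) error {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ok, err := check(); err == nil && ok {
			return nil
		}
		// Jitter spreads the wakeups so retries do not land in lockstep.
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
		if delay < 5*time.Second {
			delay *= 2
		}
	}
	return fmt.Errorf("condition not met within %v", timeout)
}

func main() {
	start := time.Now()
	err := retryUntil(10*time.Second, func() (bool, error) {
		return time.Since(start) > 2*time.Second, nil // stand-in for "domain has an IP"
	})
	fmt.Println(err)
}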
	I0927 00:37:06.545524   34022 main.go:141] libmachine: (ha-631834-m02) Reserving static IP address...
	I0927 00:37:06.545889   34022 main.go:141] libmachine: (ha-631834-m02) DBG | unable to find host DHCP lease matching {name: "ha-631834-m02", mac: "52:54:00:f9:6f:a2", ip: "192.168.39.184"} in network mk-ha-631834
	I0927 00:37:06.617028   34022 main.go:141] libmachine: (ha-631834-m02) DBG | Getting to WaitForSSH function...
	I0927 00:37:06.617058   34022 main.go:141] libmachine: (ha-631834-m02) Reserved static IP address: 192.168.39.184
	I0927 00:37:06.617127   34022 main.go:141] libmachine: (ha-631834-m02) Waiting for SSH to be available...
	I0927 00:37:06.619198   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:06.619549   34022 main.go:141] libmachine: (ha-631834-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:f9:6f:a2", ip: ""} in network mk-ha-631834
	I0927 00:37:06.619573   34022 main.go:141] libmachine: (ha-631834-m02) DBG | unable to find defined IP address of network mk-ha-631834 interface with MAC address 52:54:00:f9:6f:a2
	I0927 00:37:06.619711   34022 main.go:141] libmachine: (ha-631834-m02) DBG | Using SSH client type: external
	I0927 00:37:06.619738   34022 main.go:141] libmachine: (ha-631834-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m02/id_rsa (-rw-------)
	I0927 00:37:06.619767   34022 main.go:141] libmachine: (ha-631834-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 00:37:06.619784   34022 main.go:141] libmachine: (ha-631834-m02) DBG | About to run SSH command:
	I0927 00:37:06.619798   34022 main.go:141] libmachine: (ha-631834-m02) DBG | exit 0
	I0927 00:37:06.623260   34022 main.go:141] libmachine: (ha-631834-m02) DBG | SSH cmd err, output: exit status 255: 
	I0927 00:37:06.623273   34022 main.go:141] libmachine: (ha-631834-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0927 00:37:06.623281   34022 main.go:141] libmachine: (ha-631834-m02) DBG | command : exit 0
	I0927 00:37:06.623290   34022 main.go:141] libmachine: (ha-631834-m02) DBG | err     : exit status 255
	I0927 00:37:06.623297   34022 main.go:141] libmachine: (ha-631834-m02) DBG | output  : 
	I0927 00:37:09.623967   34022 main.go:141] libmachine: (ha-631834-m02) DBG | Getting to WaitForSSH function...
	I0927 00:37:09.626758   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:09.627251   34022 main.go:141] libmachine: (ha-631834-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:6f:a2", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:59 +0000 UTC Type:0 Mac:52:54:00:f9:6f:a2 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-631834-m02 Clientid:01:52:54:00:f9:6f:a2}
	I0927 00:37:09.627285   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:09.627413   34022 main.go:141] libmachine: (ha-631834-m02) DBG | Using SSH client type: external
	I0927 00:37:09.627435   34022 main.go:141] libmachine: (ha-631834-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m02/id_rsa (-rw-------)
	I0927 00:37:09.627472   34022 main.go:141] libmachine: (ha-631834-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.184 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 00:37:09.627484   34022 main.go:141] libmachine: (ha-631834-m02) DBG | About to run SSH command:
	I0927 00:37:09.627495   34022 main.go:141] libmachine: (ha-631834-m02) DBG | exit 0
	I0927 00:37:09.751226   34022 main.go:141] libmachine: (ha-631834-m02) DBG | SSH cmd err, output: <nil>: 
	I0927 00:37:09.751504   34022 main.go:141] libmachine: (ha-631834-m02) KVM machine creation complete!
	I0927 00:37:09.751804   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetConfigRaw
	I0927 00:37:09.752329   34022 main.go:141] libmachine: (ha-631834-m02) Calling .DriverName
	I0927 00:37:09.752502   34022 main.go:141] libmachine: (ha-631834-m02) Calling .DriverName
	I0927 00:37:09.752645   34022 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0927 00:37:09.752657   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetState
	I0927 00:37:09.753685   34022 main.go:141] libmachine: Detecting operating system of created instance...
	I0927 00:37:09.753695   34022 main.go:141] libmachine: Waiting for SSH to be available...
	I0927 00:37:09.753702   34022 main.go:141] libmachine: Getting to WaitForSSH function...
	I0927 00:37:09.753707   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHHostname
	I0927 00:37:09.755579   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:09.755850   34022 main.go:141] libmachine: (ha-631834-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:6f:a2", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:59 +0000 UTC Type:0 Mac:52:54:00:f9:6f:a2 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-631834-m02 Clientid:01:52:54:00:f9:6f:a2}
	I0927 00:37:09.755881   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:09.755998   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHPort
	I0927 00:37:09.756145   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHKeyPath
	I0927 00:37:09.756274   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHKeyPath
	I0927 00:37:09.756413   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHUsername
	I0927 00:37:09.756589   34022 main.go:141] libmachine: Using SSH client type: native
	I0927 00:37:09.756825   34022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I0927 00:37:09.756839   34022 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0927 00:37:09.854682   34022 main.go:141] libmachine: SSH cmd err, output: <nil>: 
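	(Note: the "exit 0" probe above is only a liveness check: once sshd accepts the key and the command returns cleanly, the machine is considered reachable. A minimal sketch of the same probe using golang.org/x/crypto/ssh; the IP, user, and key path mirror the log but are assumptions for this snippet, which is not libmachine's implementation:)

package main

import (
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m02/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM, host key not pinned
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", "192.168.39.184:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	// A successful "exit 0" means sshd is up and the key is accepted.
	if err := sess.Run("exit 0"); err != nil {
		log.Fatal(err)
	}
	log.Println("SSH is available")
}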
	I0927 00:37:09.854708   34022 main.go:141] libmachine: Detecting the provisioner...
	I0927 00:37:09.854718   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHHostname
	I0927 00:37:09.857509   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:09.857847   34022 main.go:141] libmachine: (ha-631834-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:6f:a2", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:59 +0000 UTC Type:0 Mac:52:54:00:f9:6f:a2 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-631834-m02 Clientid:01:52:54:00:f9:6f:a2}
	I0927 00:37:09.857874   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:09.857977   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHPort
	I0927 00:37:09.858161   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHKeyPath
	I0927 00:37:09.858335   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHKeyPath
	I0927 00:37:09.858490   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHUsername
	I0927 00:37:09.858645   34022 main.go:141] libmachine: Using SSH client type: native
	I0927 00:37:09.858795   34022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I0927 00:37:09.858806   34022 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0927 00:37:09.960162   34022 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0927 00:37:09.960233   34022 main.go:141] libmachine: found compatible host: buildroot
	I0927 00:37:09.960242   34022 main.go:141] libmachine: Provisioning with buildroot...
	I0927 00:37:09.960250   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetMachineName
	I0927 00:37:09.960507   34022 buildroot.go:166] provisioning hostname "ha-631834-m02"
	I0927 00:37:09.960550   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetMachineName
	I0927 00:37:09.960744   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHHostname
	I0927 00:37:09.963548   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:09.963921   34022 main.go:141] libmachine: (ha-631834-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:6f:a2", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:59 +0000 UTC Type:0 Mac:52:54:00:f9:6f:a2 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-631834-m02 Clientid:01:52:54:00:f9:6f:a2}
	I0927 00:37:09.963943   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:09.964085   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHPort
	I0927 00:37:09.964256   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHKeyPath
	I0927 00:37:09.964403   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHKeyPath
	I0927 00:37:09.964542   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHUsername
	I0927 00:37:09.964683   34022 main.go:141] libmachine: Using SSH client type: native
	I0927 00:37:09.964874   34022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I0927 00:37:09.964887   34022 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-631834-m02 && echo "ha-631834-m02" | sudo tee /etc/hostname
	I0927 00:37:10.077518   34022 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-631834-m02
	
	I0927 00:37:10.077550   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHHostname
	I0927 00:37:10.080178   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.080540   34022 main.go:141] libmachine: (ha-631834-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:6f:a2", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:59 +0000 UTC Type:0 Mac:52:54:00:f9:6f:a2 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-631834-m02 Clientid:01:52:54:00:f9:6f:a2}
	I0927 00:37:10.080573   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.080695   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHPort
	I0927 00:37:10.080848   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHKeyPath
	I0927 00:37:10.080953   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHKeyPath
	I0927 00:37:10.081049   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHUsername
	I0927 00:37:10.081209   34022 main.go:141] libmachine: Using SSH client type: native
	I0927 00:37:10.081417   34022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I0927 00:37:10.081444   34022 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-631834-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-631834-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-631834-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 00:37:10.188307   34022 main.go:141] libmachine: SSH cmd err, output: <nil>: 
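	(Note on the script above: grep -xq matches whole lines only, so the hostname entry is added at most once; an existing 127.0.1.1 line is rewritten in place with sed, otherwise a fresh "127.0.1.1 ha-631834-m02" line is appended.)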
	I0927 00:37:10.188350   34022 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19711-14935/.minikube CaCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19711-14935/.minikube}
	I0927 00:37:10.188371   34022 buildroot.go:174] setting up certificates
	I0927 00:37:10.188381   34022 provision.go:84] configureAuth start
	I0927 00:37:10.188395   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetMachineName
	I0927 00:37:10.188651   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetIP
	I0927 00:37:10.191227   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.191601   34022 main.go:141] libmachine: (ha-631834-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:6f:a2", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:59 +0000 UTC Type:0 Mac:52:54:00:f9:6f:a2 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-631834-m02 Clientid:01:52:54:00:f9:6f:a2}
	I0927 00:37:10.191637   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.191838   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHHostname
	I0927 00:37:10.194575   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.195339   34022 main.go:141] libmachine: (ha-631834-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:6f:a2", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:59 +0000 UTC Type:0 Mac:52:54:00:f9:6f:a2 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-631834-m02 Clientid:01:52:54:00:f9:6f:a2}
	I0927 00:37:10.195365   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.195518   34022 provision.go:143] copyHostCerts
	I0927 00:37:10.195546   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem
	I0927 00:37:10.195575   34022 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem, removing ...
	I0927 00:37:10.195584   34022 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem
	I0927 00:37:10.195648   34022 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem (1123 bytes)
	I0927 00:37:10.195719   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem
	I0927 00:37:10.195736   34022 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem, removing ...
	I0927 00:37:10.195740   34022 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem
	I0927 00:37:10.195763   34022 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem (1675 bytes)
	I0927 00:37:10.195803   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem
	I0927 00:37:10.195819   34022 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem, removing ...
	I0927 00:37:10.195824   34022 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem
	I0927 00:37:10.195844   34022 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem (1078 bytes)
	I0927 00:37:10.195907   34022 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem org=jenkins.ha-631834-m02 san=[127.0.0.1 192.168.39.184 ha-631834-m02 localhost minikube]
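	(The server-cert generation step above can be approximated with Go's standard library. The following is a hedged sketch, not minikube's implementation; it assumes a PEM CA certificate and an RSA PKCS#1 CA key on disk — the file names ca.pem/ca-key.pem are placeholders — and takes the SAN set from the log line above.)

	// cert_sketch.go: issue a CA-signed server cert with the SANs from the log.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	// must keeps the sketch short; real code would handle errors properly.
	func must[T any](v T, err error) T {
		if err != nil {
			panic(err)
		}
		return v
	}

	func main() {
		// Assumption: valid PEM input; ca.pem/ca-key.pem are illustrative paths.
		caBlock, _ := pem.Decode(must(os.ReadFile("ca.pem")))
		caCert := must(x509.ParseCertificate(caBlock.Bytes))
		keyBlock, _ := pem.Decode(must(os.ReadFile("ca-key.pem")))
		caKey := must(x509.ParsePKCS1PrivateKey(keyBlock.Bytes))

		serverKey := must(rsa.GenerateKey(rand.Reader, 2048))
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-631834-m02"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SAN set from the log line above.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.184")},
			DNSNames:    []string{"ha-631834-m02", "localhost", "minikube"},
		}
		der := must(x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey))
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}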
	I0927 00:37:10.245727   34022 provision.go:177] copyRemoteCerts
	I0927 00:37:10.245778   34022 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 00:37:10.245798   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHHostname
	I0927 00:37:10.248269   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.248597   34022 main.go:141] libmachine: (ha-631834-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:6f:a2", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:59 +0000 UTC Type:0 Mac:52:54:00:f9:6f:a2 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-631834-m02 Clientid:01:52:54:00:f9:6f:a2}
	I0927 00:37:10.248623   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.248784   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHPort
	I0927 00:37:10.248960   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHKeyPath
	I0927 00:37:10.249076   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHUsername
	I0927 00:37:10.249199   34022 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m02/id_rsa Username:docker}
	I0927 00:37:10.331285   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0927 00:37:10.331361   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0927 00:37:10.357400   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0927 00:37:10.357470   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0927 00:37:10.381613   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0927 00:37:10.381680   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0927 00:37:10.404641   34022 provision.go:87] duration metric: took 216.247596ms to configureAuth
	I0927 00:37:10.404666   34022 buildroot.go:189] setting minikube options for container-runtime
	I0927 00:37:10.404826   34022 config.go:182] Loaded profile config "ha-631834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 00:37:10.404895   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHHostname
	I0927 00:37:10.407260   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.407584   34022 main.go:141] libmachine: (ha-631834-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:6f:a2", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:59 +0000 UTC Type:0 Mac:52:54:00:f9:6f:a2 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-631834-m02 Clientid:01:52:54:00:f9:6f:a2}
	I0927 00:37:10.407606   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.407813   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHPort
	I0927 00:37:10.407999   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHKeyPath
	I0927 00:37:10.408158   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHKeyPath
	I0927 00:37:10.408283   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHUsername
	I0927 00:37:10.408456   34022 main.go:141] libmachine: Using SSH client type: native
	I0927 00:37:10.408663   34022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I0927 00:37:10.408684   34022 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 00:37:10.641711   34022 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 00:37:10.641732   34022 main.go:141] libmachine: Checking connection to Docker...
	I0927 00:37:10.641740   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetURL
	I0927 00:37:10.642949   34022 main.go:141] libmachine: (ha-631834-m02) DBG | Using libvirt version 6000000
	I0927 00:37:10.645171   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.645559   34022 main.go:141] libmachine: (ha-631834-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:6f:a2", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:59 +0000 UTC Type:0 Mac:52:54:00:f9:6f:a2 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-631834-m02 Clientid:01:52:54:00:f9:6f:a2}
	I0927 00:37:10.645584   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.645775   34022 main.go:141] libmachine: Docker is up and running!
	I0927 00:37:10.645789   34022 main.go:141] libmachine: Reticulating splines...
	I0927 00:37:10.645796   34022 client.go:171] duration metric: took 26.196945191s to LocalClient.Create
	I0927 00:37:10.645815   34022 start.go:167] duration metric: took 26.197002465s to libmachine.API.Create "ha-631834"
	I0927 00:37:10.645824   34022 start.go:293] postStartSetup for "ha-631834-m02" (driver="kvm2")
	I0927 00:37:10.645834   34022 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 00:37:10.645850   34022 main.go:141] libmachine: (ha-631834-m02) Calling .DriverName
	I0927 00:37:10.646066   34022 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 00:37:10.646101   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHHostname
	I0927 00:37:10.648185   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.648596   34022 main.go:141] libmachine: (ha-631834-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:6f:a2", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:59 +0000 UTC Type:0 Mac:52:54:00:f9:6f:a2 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-631834-m02 Clientid:01:52:54:00:f9:6f:a2}
	I0927 00:37:10.648623   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.648794   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHPort
	I0927 00:37:10.648930   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHKeyPath
	I0927 00:37:10.649065   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHUsername
	I0927 00:37:10.649169   34022 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m02/id_rsa Username:docker}
	I0927 00:37:10.730488   34022 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 00:37:10.734725   34022 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 00:37:10.734745   34022 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/addons for local assets ...
	I0927 00:37:10.734795   34022 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/files for local assets ...
	I0927 00:37:10.734865   34022 filesync.go:149] local asset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> 221382.pem in /etc/ssl/certs
	I0927 00:37:10.734874   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> /etc/ssl/certs/221382.pem
	I0927 00:37:10.734948   34022 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 00:37:10.746203   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /etc/ssl/certs/221382.pem (1708 bytes)
	I0927 00:37:10.770218   34022 start.go:296] duration metric: took 124.382795ms for postStartSetup
	I0927 00:37:10.770261   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetConfigRaw
	I0927 00:37:10.770829   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetIP
	I0927 00:37:10.773277   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.773651   34022 main.go:141] libmachine: (ha-631834-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:6f:a2", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:59 +0000 UTC Type:0 Mac:52:54:00:f9:6f:a2 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-631834-m02 Clientid:01:52:54:00:f9:6f:a2}
	I0927 00:37:10.773680   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.773884   34022 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/config.json ...
	I0927 00:37:10.774086   34022 start.go:128] duration metric: took 26.343999443s to createHost
	I0927 00:37:10.774110   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHHostname
	I0927 00:37:10.775957   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.776258   34022 main.go:141] libmachine: (ha-631834-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:6f:a2", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:59 +0000 UTC Type:0 Mac:52:54:00:f9:6f:a2 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-631834-m02 Clientid:01:52:54:00:f9:6f:a2}
	I0927 00:37:10.776284   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.776391   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHPort
	I0927 00:37:10.776554   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHKeyPath
	I0927 00:37:10.776671   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHKeyPath
	I0927 00:37:10.776790   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHUsername
	I0927 00:37:10.776904   34022 main.go:141] libmachine: Using SSH client type: native
	I0927 00:37:10.777080   34022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I0927 00:37:10.777095   34022 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 00:37:10.876642   34022 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727397430.856709211
	
	I0927 00:37:10.876668   34022 fix.go:216] guest clock: 1727397430.856709211
	I0927 00:37:10.876675   34022 fix.go:229] Guest: 2024-09-27 00:37:10.856709211 +0000 UTC Remote: 2024-09-27 00:37:10.774098108 +0000 UTC m=+70.074597703 (delta=82.611103ms)
	I0927 00:37:10.876688   34022 fix.go:200] guest clock delta is within tolerance: 82.611103ms
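	(The guest-clock check above boils down to parsing the `date +%s.%N` output and comparing it against local time. A minimal sketch, using the timestamp from this log as the sample value:)

	// clockdelta_sketch.go: parse seconds.nanoseconds and compute the clock delta.
	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	func parseEpoch(s string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, _ := parseEpoch("1727397430.856709211") // value from the log above
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		// minikube logs the delta and treats small values as "within tolerance".
		fmt.Println("guest clock delta:", delta)
	}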
	I0927 00:37:10.876693   34022 start.go:83] releasing machines lock for "ha-631834-m02", held for 26.446717018s
	I0927 00:37:10.876711   34022 main.go:141] libmachine: (ha-631834-m02) Calling .DriverName
	I0927 00:37:10.876935   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetIP
	I0927 00:37:10.879789   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.880133   34022 main.go:141] libmachine: (ha-631834-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:6f:a2", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:59 +0000 UTC Type:0 Mac:52:54:00:f9:6f:a2 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-631834-m02 Clientid:01:52:54:00:f9:6f:a2}
	I0927 00:37:10.880157   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.882420   34022 out.go:177] * Found network options:
	I0927 00:37:10.883855   34022 out.go:177]   - NO_PROXY=192.168.39.4
	W0927 00:37:10.885148   34022 proxy.go:119] fail to check proxy env: Error ip not in block
	I0927 00:37:10.885174   34022 main.go:141] libmachine: (ha-631834-m02) Calling .DriverName
	I0927 00:37:10.885627   34022 main.go:141] libmachine: (ha-631834-m02) Calling .DriverName
	I0927 00:37:10.885793   34022 main.go:141] libmachine: (ha-631834-m02) Calling .DriverName
	I0927 00:37:10.885874   34022 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 00:37:10.885914   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHHostname
	W0927 00:37:10.885995   34022 proxy.go:119] fail to check proxy env: Error ip not in block
	I0927 00:37:10.886064   34022 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 00:37:10.886085   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHHostname
	I0927 00:37:10.888528   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.888647   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.888905   34022 main.go:141] libmachine: (ha-631834-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:6f:a2", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:59 +0000 UTC Type:0 Mac:52:54:00:f9:6f:a2 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-631834-m02 Clientid:01:52:54:00:f9:6f:a2}
	I0927 00:37:10.888931   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.888961   34022 main.go:141] libmachine: (ha-631834-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:6f:a2", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:59 +0000 UTC Type:0 Mac:52:54:00:f9:6f:a2 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-631834-m02 Clientid:01:52:54:00:f9:6f:a2}
	I0927 00:37:10.888976   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.889083   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHPort
	I0927 00:37:10.889235   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHPort
	I0927 00:37:10.889256   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHKeyPath
	I0927 00:37:10.889362   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHKeyPath
	I0927 00:37:10.889427   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHUsername
	I0927 00:37:10.889490   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHUsername
	I0927 00:37:10.889571   34022 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m02/id_rsa Username:docker}
	I0927 00:37:10.889594   34022 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m02/id_rsa Username:docker}
	I0927 00:37:11.136304   34022 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 00:37:11.142079   34022 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 00:37:11.142147   34022 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 00:37:11.158578   34022 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0927 00:37:11.158606   34022 start.go:495] detecting cgroup driver to use...
	I0927 00:37:11.158676   34022 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 00:37:11.174779   34022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 00:37:11.188680   34022 docker.go:217] disabling cri-docker service (if available) ...
	I0927 00:37:11.188733   34022 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 00:37:11.201858   34022 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 00:37:11.214760   34022 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 00:37:11.327367   34022 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 00:37:11.490795   34022 docker.go:233] disabling docker service ...
	I0927 00:37:11.490853   34022 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 00:37:11.505571   34022 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 00:37:11.518373   34022 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 00:37:11.629152   34022 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 00:37:11.740768   34022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 00:37:11.754787   34022 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 00:37:11.773038   34022 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0927 00:37:11.773110   34022 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:37:11.783470   34022 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 00:37:11.783521   34022 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:37:11.793940   34022 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:37:11.804039   34022 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:37:11.814196   34022 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 00:37:11.824547   34022 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:37:11.834569   34022 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:37:11.850743   34022 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
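	(Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings; section headers and unrelated keys are omitted and the exact layout depends on the file shipped in the ISO:)

	pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]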
	I0927 00:37:11.861436   34022 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 00:37:11.870606   34022 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0927 00:37:11.870649   34022 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0927 00:37:11.885756   34022 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
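	(A minimal Go sketch of the fallback logic above: stat the bridge-nf sysctl file, load br_netfilter if it is missing, then enable IP forwarding. It assumes root and uses the same paths and module name as the log.)

	// netfilter_sketch.go
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		const bridgeNF = "/proc/sys/net/bridge/bridge-nf-call-iptables"
		if _, err := os.Stat(bridgeNF); err != nil {
			// Same effect as `sudo modprobe br_netfilter` in the log.
			if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
				fmt.Fprintf(os.Stderr, "modprobe br_netfilter: %v: %s\n", err, out)
				os.Exit(1)
			}
		}
		// Same effect as `echo 1 > /proc/sys/net/ipv4/ip_forward`.
		if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}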
	I0927 00:37:11.897194   34022 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:37:12.020445   34022 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0927 00:37:12.107882   34022 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 00:37:12.107937   34022 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 00:37:12.113014   34022 start.go:563] Will wait 60s for crictl version
	I0927 00:37:12.113056   34022 ssh_runner.go:195] Run: which crictl
	I0927 00:37:12.116696   34022 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 00:37:12.156627   34022 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
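	(The "Will wait 60s for socket path" step a few lines earlier is a simple poll. A minimal sketch, assuming the socket path and timeout shown in the log:)

	// waitsock_sketch.go: poll until /var/run/crio/crio.sock exists or time out.
	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func waitForPath(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting for %s", path)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("crio socket is ready")
	}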
	I0927 00:37:12.156716   34022 ssh_runner.go:195] Run: crio --version
	I0927 00:37:12.184776   34022 ssh_runner.go:195] Run: crio --version
	I0927 00:37:12.214285   34022 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0927 00:37:12.215642   34022 out.go:177]   - env NO_PROXY=192.168.39.4
	I0927 00:37:12.216858   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetIP
	I0927 00:37:12.219534   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:12.219884   34022 main.go:141] libmachine: (ha-631834-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:6f:a2", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:59 +0000 UTC Type:0 Mac:52:54:00:f9:6f:a2 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-631834-m02 Clientid:01:52:54:00:f9:6f:a2}
	I0927 00:37:12.219910   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:12.220066   34022 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0927 00:37:12.224146   34022 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 00:37:12.236530   34022 mustload.go:65] Loading cluster: ha-631834
	I0927 00:37:12.236743   34022 config.go:182] Loaded profile config "ha-631834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 00:37:12.236988   34022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:37:12.237013   34022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:37:12.251316   34022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45319
	I0927 00:37:12.251795   34022 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:37:12.252245   34022 main.go:141] libmachine: Using API Version  1
	I0927 00:37:12.252265   34022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:37:12.252568   34022 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:37:12.252747   34022 main.go:141] libmachine: (ha-631834) Calling .GetState
	I0927 00:37:12.254195   34022 host.go:66] Checking if "ha-631834" exists ...
	I0927 00:37:12.254474   34022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:37:12.254499   34022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:37:12.268676   34022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45197
	I0927 00:37:12.269168   34022 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:37:12.269589   34022 main.go:141] libmachine: Using API Version  1
	I0927 00:37:12.269610   34022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:37:12.269894   34022 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:37:12.270042   34022 main.go:141] libmachine: (ha-631834) Calling .DriverName
	I0927 00:37:12.270195   34022 certs.go:68] Setting up /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834 for IP: 192.168.39.184
	I0927 00:37:12.270209   34022 certs.go:194] generating shared ca certs ...
	I0927 00:37:12.270227   34022 certs.go:226] acquiring lock for ca certs: {Name:mkdfc5b7e93f77f5ae72cc653545624244421aa5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:37:12.270367   34022 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key
	I0927 00:37:12.270424   34022 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key
	I0927 00:37:12.270437   34022 certs.go:256] generating profile certs ...
	I0927 00:37:12.270535   34022 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/client.key
	I0927 00:37:12.270563   34022 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key.2787ab8f
	I0927 00:37:12.270582   34022 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt.2787ab8f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.4 192.168.39.184 192.168.39.254]
	I0927 00:37:12.380622   34022 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt.2787ab8f ...
	I0927 00:37:12.380651   34022 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt.2787ab8f: {Name:mkabbfeb402264582fd8eeda0c7047e582633f2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:37:12.380811   34022 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key.2787ab8f ...
	I0927 00:37:12.380824   34022 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key.2787ab8f: {Name:mkfa43c1b86669a0c9318db325b03ab1136e574e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:37:12.380891   34022 certs.go:381] copying /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt.2787ab8f -> /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt
	I0927 00:37:12.381022   34022 certs.go:385] copying /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key.2787ab8f -> /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key
	I0927 00:37:12.381184   34022 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.key
	I0927 00:37:12.381199   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0927 00:37:12.381212   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0927 00:37:12.381225   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0927 00:37:12.381237   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0927 00:37:12.381255   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0927 00:37:12.381268   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0927 00:37:12.381280   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0927 00:37:12.381292   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0927 00:37:12.381342   34022 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem (1338 bytes)
	W0927 00:37:12.381368   34022 certs.go:480] ignoring /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138_empty.pem, impossibly tiny 0 bytes
	I0927 00:37:12.381377   34022 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem (1671 bytes)
	I0927 00:37:12.381397   34022 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem (1078 bytes)
	I0927 00:37:12.381429   34022 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem (1123 bytes)
	I0927 00:37:12.381449   34022 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem (1675 bytes)
	I0927 00:37:12.381485   34022 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem (1708 bytes)
	I0927 00:37:12.381525   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:37:12.381538   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem -> /usr/share/ca-certificates/22138.pem
	I0927 00:37:12.381559   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> /usr/share/ca-certificates/221382.pem
	I0927 00:37:12.381589   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:37:12.384914   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:37:12.385337   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:37:12.385363   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:37:12.385520   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:37:12.385695   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:37:12.385849   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:37:12.385970   34022 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834/id_rsa Username:docker}
	I0927 00:37:12.463600   34022 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0927 00:37:12.469050   34022 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0927 00:37:12.480901   34022 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0927 00:37:12.485274   34022 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0927 00:37:12.495588   34022 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0927 00:37:12.499742   34022 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0927 00:37:12.511921   34022 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0927 00:37:12.515813   34022 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0927 00:37:12.525592   34022 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0927 00:37:12.529819   34022 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0927 00:37:12.540367   34022 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0927 00:37:12.544115   34022 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0927 00:37:12.559955   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 00:37:12.585679   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0927 00:37:12.608898   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 00:37:12.631565   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0927 00:37:12.654159   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0927 00:37:12.677901   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0927 00:37:12.701023   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 00:37:12.723805   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0927 00:37:12.746428   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 00:37:12.770481   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem --> /usr/share/ca-certificates/22138.pem (1338 bytes)
	I0927 00:37:12.794514   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /usr/share/ca-certificates/221382.pem (1708 bytes)
	I0927 00:37:12.817381   34022 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0927 00:37:12.833441   34022 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0927 00:37:12.849543   34022 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0927 00:37:12.866255   34022 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0927 00:37:12.882530   34022 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0927 00:37:12.898460   34022 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0927 00:37:12.914236   34022 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0927 00:37:12.929892   34022 ssh_runner.go:195] Run: openssl version
	I0927 00:37:12.935443   34022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 00:37:12.945938   34022 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:37:12.950422   34022 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:16 /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:37:12.950473   34022 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:37:12.956276   34022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 00:37:12.967207   34022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22138.pem && ln -fs /usr/share/ca-certificates/22138.pem /etc/ssl/certs/22138.pem"
	I0927 00:37:12.978472   34022 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22138.pem
	I0927 00:37:12.982807   34022 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 00:32 /usr/share/ca-certificates/22138.pem
	I0927 00:37:12.982859   34022 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22138.pem
	I0927 00:37:12.988439   34022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22138.pem /etc/ssl/certs/51391683.0"
	I0927 00:37:12.999183   34022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221382.pem && ln -fs /usr/share/ca-certificates/221382.pem /etc/ssl/certs/221382.pem"
	I0927 00:37:13.010278   34022 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221382.pem
	I0927 00:37:13.014700   34022 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 00:32 /usr/share/ca-certificates/221382.pem
	I0927 00:37:13.014750   34022 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221382.pem
	I0927 00:37:13.020522   34022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221382.pem /etc/ssl/certs/3ec20f2e.0"
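	(The hash-named links above follow OpenSSL's subject-hash lookup convention: the link name is the output of `openssl x509 -hash -noout -in <cert>` plus a ".0" suffix, so b5213941.0, 51391683.0 and 3ec20f2e.0 point at minikubeCA.pem, 22138.pem and 221382.pem respectively.)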
	I0927 00:37:13.032168   34022 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 00:37:13.036252   34022 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0927 00:37:13.036310   34022 kubeadm.go:934] updating node {m02 192.168.39.184 8443 v1.31.1 crio true true} ...
	I0927 00:37:13.036391   34022 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-631834-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.184
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-631834 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 00:37:13.036418   34022 kube-vip.go:115] generating kube-vip config ...
	I0927 00:37:13.036450   34022 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0927 00:37:13.053748   34022 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0927 00:37:13.053813   34022 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
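	(The manifest above is later copied to /etc/kubernetes/manifests/kube-vip.yaml, so kubelet runs kube-vip as a static pod on each control-plane node. With cp_enable and vip_leaderelection set, the instances compete for the plndr-cp-lock lease; the leader ARP-advertises the VIP 192.168.39.254 on eth0 and, with lb_enable, load-balances API-server traffic on port 8443.)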
	I0927 00:37:13.053866   34022 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 00:37:13.063832   34022 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0927 00:37:13.063894   34022 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0927 00:37:13.073341   34022 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0927 00:37:13.073367   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0927 00:37:13.073425   34022 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19711-14935/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0927 00:37:13.073468   34022 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19711-14935/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0927 00:37:13.073430   34022 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0927 00:37:13.077722   34022 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0927 00:37:13.077745   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0927 00:37:14.061924   34022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 00:37:14.080321   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0927 00:37:14.080396   34022 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0927 00:37:14.084997   34022 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0927 00:37:14.085031   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0927 00:37:14.368132   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0927 00:37:14.368235   34022 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0927 00:37:14.380382   34022 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0927 00:37:14.380424   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
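	(The binary transfers above rely on "?checksum=file:...sha256" URLs. Below is a standalone sketch of the same idea — download a binary and its .sha256 file and compare digests — assuming the .sha256 file carries just the hex digest; the kubectl URL is taken from the log.)

	// download_sketch.go: checksum-verified download of a release binary.
	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
		"strings"
	)

	func fetch(url string) ([]byte, error) {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
		}
		return io.ReadAll(resp.Body)
	}

	func main() {
		base := "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl"
		bin, err := fetch(base)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		want, err := fetch(base + ".sha256")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		sum := sha256.Sum256(bin)
		if hex.EncodeToString(sum[:]) != strings.TrimSpace(string(want)) {
			fmt.Fprintln(os.Stderr, "checksum mismatch")
			os.Exit(1)
		}
		fmt.Println("kubectl verified,", len(bin), "bytes")
	}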
	I0927 00:37:14.663959   34022 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0927 00:37:14.673981   34022 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0927 00:37:14.690872   34022 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 00:37:14.708362   34022 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0927 00:37:14.725181   34022 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0927 00:37:14.729204   34022 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 00:37:14.741822   34022 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:37:14.857927   34022 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 00:37:14.875145   34022 host.go:66] Checking if "ha-631834" exists ...
	I0927 00:37:14.875529   34022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:37:14.875570   34022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:37:14.890402   34022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46081
	I0927 00:37:14.890838   34022 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:37:14.891373   34022 main.go:141] libmachine: Using API Version  1
	I0927 00:37:14.891394   34022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:37:14.891729   34022 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:37:14.891911   34022 main.go:141] libmachine: (ha-631834) Calling .DriverName
	I0927 00:37:14.892044   34022 start.go:317] joinCluster: &{Name:ha-631834 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-631834 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.184 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 00:37:14.892172   34022 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0927 00:37:14.892194   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:37:14.894983   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:37:14.895381   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:37:14.895416   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:37:14.895524   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:37:14.895647   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:37:14.895747   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:37:14.895865   34022 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834/id_rsa Username:docker}
	I0927 00:37:15.056944   34022 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.184 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 00:37:15.056990   34022 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mlxu9z.6ua5c3whncxwr8h0 --discovery-token-ca-cert-hash sha256:e8f2b64245b2533f2f6907f851e4fd61c8df67374e05cb5be25be73a61920f8e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-631834-m02 --control-plane --apiserver-advertise-address=192.168.39.184 --apiserver-bind-port=8443"
	I0927 00:37:37.826684   34022 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mlxu9z.6ua5c3whncxwr8h0 --discovery-token-ca-cert-hash sha256:e8f2b64245b2533f2f6907f851e4fd61c8df67374e05cb5be25be73a61920f8e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-631834-m02 --control-plane --apiserver-advertise-address=192.168.39.184 --apiserver-bind-port=8443": (22.769665782s)
	I0927 00:37:37.826721   34022 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0927 00:37:38.375369   34022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-631834-m02 minikube.k8s.io/updated_at=2024_09_27T00_37_38_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625 minikube.k8s.io/name=ha-631834 minikube.k8s.io/primary=false
	I0927 00:37:38.497089   34022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-631834-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0927 00:37:38.638589   34022 start.go:319] duration metric: took 23.746539088s to joinCluster
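The lines above record the second control-plane join: minikube mints a non-expiring token and prints the join command on the primary node, runs it on m02 with --control-plane and an advertise address, then labels the new node and removes the control-plane NoSchedule taint. A rough sketch of that sequence as local command invocations (illustrative only; minikube actually drives these over SSH via its ssh_runner, and the label values below are copied from the log):

```go
// Illustrative sketch only (not minikube's implementation): the join sequence
// recorded above, expressed as local command invocations.
package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and prints its combined output.
func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s(err: %v)\n", name, args, out, err)
}

func main() {
	// 1. On the existing control plane: mint a non-expiring token and print
	//    the full `kubeadm join ...` command (the 00:37:14 step above).
	run("kubeadm", "token", "create", "--print-join-command", "--ttl=0")

	// 2. On the joining machine, that printed command is re-run with
	//    --control-plane and --apiserver-advertise-address, so m02 becomes an
	//    additional API server/etcd member rather than a worker (00:37:15-00:37:37).

	// 3. Label the new node and drop the control-plane NoSchedule taint
	//    (the trailing "-" removes it), mirroring the two kubectl calls in the log.
	run("kubectl", "label", "--overwrite", "nodes", "ha-631834-m02",
		"minikube.k8s.io/name=ha-631834", "minikube.k8s.io/primary=false")
	run("kubectl", "taint", "nodes", "ha-631834-m02",
		"node-role.kubernetes.io/control-plane:NoSchedule-")
}
```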
	I0927 00:37:38.638713   34022 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.184 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 00:37:38.638954   34022 config.go:182] Loaded profile config "ha-631834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 00:37:38.640009   34022 out.go:177] * Verifying Kubernetes components...
	I0927 00:37:38.641589   34022 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:37:38.888956   34022 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 00:37:38.910605   34022 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 00:37:38.910930   34022 kapi.go:59] client config for ha-631834: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/client.crt", KeyFile:"/home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/client.key", CAFile:"/home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f68560), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0927 00:37:38.911023   34022 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.4:8443
	I0927 00:37:38.911358   34022 node_ready.go:35] waiting up to 6m0s for node "ha-631834-m02" to be "Ready" ...
	I0927 00:37:38.911504   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:38.911518   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:38.911531   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:38.911540   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:38.925042   34022 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0927 00:37:39.412340   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:39.412364   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:39.412376   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:39.412382   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:39.415703   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:39.912301   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:39.912323   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:39.912335   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:39.912340   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:39.917016   34022 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 00:37:40.411994   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:40.412018   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:40.412030   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:40.412034   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:40.415279   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:40.912076   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:40.912093   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:40.912101   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:40.912106   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:40.915241   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:40.915920   34022 node_ready.go:53] node "ha-631834-m02" has status "Ready":"False"
	I0927 00:37:41.412300   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:41.412322   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:41.412334   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:41.412339   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:41.416161   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:41.912228   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:41.912252   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:41.912262   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:41.912271   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:41.915784   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:42.411624   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:42.411645   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:42.411652   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:42.411658   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:42.415042   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:42.911632   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:42.911657   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:42.911669   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:42.911673   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:42.915043   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:43.412494   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:43.412511   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:43.412518   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:43.412521   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:43.416206   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:43.417057   34022 node_ready.go:53] node "ha-631834-m02" has status "Ready":"False"
	I0927 00:37:43.912499   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:43.912518   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:43.912526   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:43.912531   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:43.916624   34022 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 00:37:44.412544   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:44.412562   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:44.412569   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:44.412573   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:44.416020   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:44.912402   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:44.912423   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:44.912433   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:44.912437   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:45.001404   34022 round_trippers.go:574] Response Status: 200 OK in 88 milliseconds
	I0927 00:37:45.412218   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:45.412235   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:45.412242   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:45.412246   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:45.415114   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:37:45.911872   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:45.911892   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:45.911899   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:45.911903   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:45.915117   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:45.915711   34022 node_ready.go:53] node "ha-631834-m02" has status "Ready":"False"
	I0927 00:37:46.412115   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:46.412135   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:46.412142   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:46.412147   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:46.415578   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:46.911759   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:46.911782   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:46.911789   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:46.911795   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:46.914976   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:47.411947   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:47.411969   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:47.411976   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:47.411981   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:47.415038   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:47.911959   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:47.911982   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:47.911994   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:47.911999   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:47.915156   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:47.915877   34022 node_ready.go:53] node "ha-631834-m02" has status "Ready":"False"
	I0927 00:37:48.411937   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:48.411963   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:48.411972   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:48.411983   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:48.414801   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:37:48.911631   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:48.911652   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:48.911660   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:48.911665   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:48.914737   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:49.411675   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:49.411696   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:49.411704   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:49.411709   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:49.414697   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:37:49.911696   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:49.911715   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:49.911725   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:49.911731   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:49.914887   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:50.411769   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:50.411790   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:50.411797   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:50.411800   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:50.415046   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:50.415915   34022 node_ready.go:53] node "ha-631834-m02" has status "Ready":"False"
	I0927 00:37:50.912247   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:50.912268   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:50.912275   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:50.912279   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:50.915493   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:51.412530   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:51.412551   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:51.412559   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:51.412562   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:51.415870   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:51.911834   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:51.911856   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:51.911863   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:51.911868   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:51.914920   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:52.411866   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:52.411886   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:52.411894   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:52.411897   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:52.415280   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:52.912337   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:52.912367   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:52.912379   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:52.912391   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:52.915440   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:52.916052   34022 node_ready.go:53] node "ha-631834-m02" has status "Ready":"False"
	I0927 00:37:53.411693   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:53.411714   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:53.411722   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:53.411726   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:53.415015   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:53.912191   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:53.912210   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:53.912218   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:53.912222   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:53.914959   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:37:54.412320   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:54.412340   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:54.412348   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:54.412351   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:54.415317   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:37:54.911810   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:54.911833   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:54.911841   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:54.911844   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:54.914791   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:37:55.411928   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:55.411949   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:55.411957   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:55.411960   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:55.414926   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:37:55.415763   34022 node_ready.go:53] node "ha-631834-m02" has status "Ready":"False"
	I0927 00:37:55.911749   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:55.911770   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:55.911777   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:55.911781   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:55.915450   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:56.412537   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:56.412558   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:56.412566   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:56.412569   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:56.416170   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:56.911854   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:56.911874   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:56.911883   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:56.911887   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:56.914948   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:56.915561   34022 node_ready.go:49] node "ha-631834-m02" has status "Ready":"True"
	I0927 00:37:56.915579   34022 node_ready.go:38] duration metric: took 18.004197532s for node "ha-631834-m02" to be "Ready" ...
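The repeated GETs against /api/v1/nodes/ha-631834-m02 above are the node-readiness wait: the Node object is re-fetched roughly every 500ms until its Ready condition turns True (about 18s here). A minimal client-go sketch of an equivalent loop, using the kubeconfig path shown in the log; the helper name, interval and timeout are assumptions, not minikube's:

```go
// Sketch of the node_ready wait seen above, using client-go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the Node object until its Ready condition reports True.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19711-14935/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitNodeReady(context.Background(), cs, "ha-631834-m02"))
}
```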
	I0927 00:37:56.915587   34022 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 00:37:56.915672   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods
	I0927 00:37:56.915682   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:56.915688   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:56.915691   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:56.928535   34022 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0927 00:37:56.934559   34022 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-479dv" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:56.934630   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-479dv
	I0927 00:37:56.934641   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:56.934652   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:56.934657   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:56.938001   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:56.940808   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:37:56.940821   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:56.940828   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:56.940832   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:56.943740   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:37:56.944239   34022 pod_ready.go:93] pod "coredns-7c65d6cfc9-479dv" in "kube-system" namespace has status "Ready":"True"
	I0927 00:37:56.944253   34022 pod_ready.go:82] duration metric: took 9.674838ms for pod "coredns-7c65d6cfc9-479dv" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:56.944261   34022 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kg8kf" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:56.944310   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-kg8kf
	I0927 00:37:56.944318   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:56.944324   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:56.944332   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:56.946515   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:37:56.947127   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:37:56.947143   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:56.947150   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:56.947157   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:56.949055   34022 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0927 00:37:56.949993   34022 pod_ready.go:93] pod "coredns-7c65d6cfc9-kg8kf" in "kube-system" namespace has status "Ready":"True"
	I0927 00:37:56.950013   34022 pod_ready.go:82] duration metric: took 5.744559ms for pod "coredns-7c65d6cfc9-kg8kf" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:56.950024   34022 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-631834" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:56.950083   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/etcd-ha-631834
	I0927 00:37:56.950095   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:56.950105   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:56.950113   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:56.952861   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:37:56.953382   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:37:56.953398   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:56.953408   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:56.953415   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:56.955580   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:37:56.955956   34022 pod_ready.go:93] pod "etcd-ha-631834" in "kube-system" namespace has status "Ready":"True"
	I0927 00:37:56.955972   34022 pod_ready.go:82] duration metric: took 5.938111ms for pod "etcd-ha-631834" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:56.955979   34022 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-631834-m02" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:56.956028   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/etcd-ha-631834-m02
	I0927 00:37:56.956037   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:56.956044   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:56.956048   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:56.958144   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:37:56.958682   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:56.958694   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:56.958702   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:56.958707   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:56.960779   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:37:56.961169   34022 pod_ready.go:93] pod "etcd-ha-631834-m02" in "kube-system" namespace has status "Ready":"True"
	I0927 00:37:56.961183   34022 pod_ready.go:82] duration metric: took 5.19893ms for pod "etcd-ha-631834-m02" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:56.961195   34022 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-631834" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:57.112502   34022 request.go:632] Waited for 151.252386ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-631834
	I0927 00:37:57.112559   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-631834
	I0927 00:37:57.112565   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:57.112572   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:57.112576   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:57.115770   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:57.312171   34022 request.go:632] Waited for 195.713659ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:37:57.312216   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:37:57.312221   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:57.312229   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:57.312232   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:57.315816   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:57.316859   34022 pod_ready.go:93] pod "kube-apiserver-ha-631834" in "kube-system" namespace has status "Ready":"True"
	I0927 00:37:57.316874   34022 pod_ready.go:82] duration metric: took 355.673456ms for pod "kube-apiserver-ha-631834" in "kube-system" namespace to be "Ready" ...
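The `request.go:632] Waited for ... due to client-side throttling` lines come from client-go's default client-side rate limiter: the rest.Config logged earlier has QPS:0, Burst:0, which means the defaults (5 requests/s, burst 10), so back-to-back pod and node GETs get spaced out. If that delay mattered, the limits could be raised on the config; a small sketch with purely illustrative values:

```go
// Relaxing client-go's client-side rate limiter; the QPS/Burst values here are
// illustrative, not what minikube configures.
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19711-14935/kubeconfig")
	if err != nil {
		panic(err)
	}
	// Zero values mean "use the defaults" (5 QPS, burst 10); raising them
	// shortens the "Waited for ... due to client-side throttling" pauses.
	cfg.QPS = 50
	cfg.Burst = 100
	_ = kubernetes.NewForConfigOrDie(cfg)
}
```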
	I0927 00:37:57.316882   34022 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-631834-m02" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:57.511936   34022 request.go:632] Waited for 194.980446ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-631834-m02
	I0927 00:37:57.512026   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-631834-m02
	I0927 00:37:57.512043   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:57.512054   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:57.512063   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:57.515153   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:57.712254   34022 request.go:632] Waited for 196.382367ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:57.712356   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:57.712368   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:57.712378   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:57.712386   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:57.716196   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:57.716807   34022 pod_ready.go:93] pod "kube-apiserver-ha-631834-m02" in "kube-system" namespace has status "Ready":"True"
	I0927 00:37:57.716829   34022 pod_ready.go:82] duration metric: took 399.939153ms for pod "kube-apiserver-ha-631834-m02" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:57.716844   34022 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-631834" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:57.912822   34022 request.go:632] Waited for 195.90758ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-631834
	I0927 00:37:57.912904   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-631834
	I0927 00:37:57.912912   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:57.912922   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:57.912933   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:57.916051   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:58.112039   34022 request.go:632] Waited for 195.329642ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:37:58.112122   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:37:58.112127   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:58.112136   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:58.112143   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:58.115508   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:58.115975   34022 pod_ready.go:93] pod "kube-controller-manager-ha-631834" in "kube-system" namespace has status "Ready":"True"
	I0927 00:37:58.115994   34022 pod_ready.go:82] duration metric: took 399.142534ms for pod "kube-controller-manager-ha-631834" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:58.116003   34022 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-631834-m02" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:58.312103   34022 request.go:632] Waited for 196.038569ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-631834-m02
	I0927 00:37:58.312152   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-631834-m02
	I0927 00:37:58.312162   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:58.312170   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:58.312174   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:58.314795   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:37:58.511939   34022 request.go:632] Waited for 196.327635ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:58.511988   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:58.511994   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:58.512003   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:58.512010   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:58.515560   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:58.516257   34022 pod_ready.go:93] pod "kube-controller-manager-ha-631834-m02" in "kube-system" namespace has status "Ready":"True"
	I0927 00:37:58.516284   34022 pod_ready.go:82] duration metric: took 400.272757ms for pod "kube-controller-manager-ha-631834-m02" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:58.516296   34022 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7n244" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:58.712241   34022 request.go:632] Waited for 195.877878ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7n244
	I0927 00:37:58.712303   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7n244
	I0927 00:37:58.712310   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:58.712331   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:58.712385   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:58.715681   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:58.911944   34022 request.go:632] Waited for 195.32001ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:37:58.912017   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:37:58.912022   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:58.912029   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:58.912033   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:58.914780   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:37:58.915682   34022 pod_ready.go:93] pod "kube-proxy-7n244" in "kube-system" namespace has status "Ready":"True"
	I0927 00:37:58.915708   34022 pod_ready.go:82] duration metric: took 399.399725ms for pod "kube-proxy-7n244" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:58.915722   34022 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-x2hvh" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:59.112621   34022 request.go:632] Waited for 196.830611ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x2hvh
	I0927 00:37:59.112695   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x2hvh
	I0927 00:37:59.112702   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:59.112711   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:59.112717   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:59.116056   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:59.312264   34022 request.go:632] Waited for 195.403458ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:59.312315   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:59.312320   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:59.312371   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:59.312391   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:59.315926   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:59.316477   34022 pod_ready.go:93] pod "kube-proxy-x2hvh" in "kube-system" namespace has status "Ready":"True"
	I0927 00:37:59.316499   34022 pod_ready.go:82] duration metric: took 400.770291ms for pod "kube-proxy-x2hvh" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:59.316508   34022 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-631834" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:59.511836   34022 request.go:632] Waited for 195.271471ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-631834
	I0927 00:37:59.511920   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-631834
	I0927 00:37:59.511931   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:59.511939   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:59.511948   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:59.515136   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:59.712221   34022 request.go:632] Waited for 196.384821ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:37:59.712289   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:37:59.712294   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:59.712302   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:59.712309   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:59.715391   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:59.716333   34022 pod_ready.go:93] pod "kube-scheduler-ha-631834" in "kube-system" namespace has status "Ready":"True"
	I0927 00:37:59.716356   34022 pod_ready.go:82] duration metric: took 399.841544ms for pod "kube-scheduler-ha-631834" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:59.716375   34022 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-631834-m02" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:59.912751   34022 request.go:632] Waited for 196.300793ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-631834-m02
	I0927 00:37:59.912870   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-631834-m02
	I0927 00:37:59.912884   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:59.912894   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:59.912902   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:59.916551   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:38:00.112471   34022 request.go:632] Waited for 195.315992ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:38:00.112520   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:38:00.112525   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:00.112532   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:00.112535   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:00.115509   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:38:00.116194   34022 pod_ready.go:93] pod "kube-scheduler-ha-631834-m02" in "kube-system" namespace has status "Ready":"True"
	I0927 00:38:00.116211   34022 pod_ready.go:82] duration metric: took 399.824793ms for pod "kube-scheduler-ha-631834-m02" in "kube-system" namespace to be "Ready" ...
	I0927 00:38:00.116221   34022 pod_ready.go:39] duration metric: took 3.200608197s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
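Each per-pod wait above checks the PodReady condition for the system-critical components (CoreDNS, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) on both control-plane nodes. A condensed client-go sketch of the same check, using the label selectors from the log; the function name and kubeconfig path are assumptions for the example:

```go
// Condensed version of the pod_ready waits above: every pod in kube-system
// matching the listed selectors must report the PodReady condition as True.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func allCriticalPodsReady(ctx context.Context, cs *kubernetes.Clientset) (bool, error) {
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	}
	for _, sel := range selectors {
		pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			return false, err
		}
		for _, p := range pods.Items {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
					break
				}
			}
			if !ready {
				fmt.Printf("pod %q is not Ready yet\n", p.Name)
				return false, nil
			}
		}
	}
	return true, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19711-14935/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(allCriticalPodsReady(context.Background(), cs))
}
```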
	I0927 00:38:00.116243   34022 api_server.go:52] waiting for apiserver process to appear ...
	I0927 00:38:00.116294   34022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 00:38:00.135868   34022 api_server.go:72] duration metric: took 21.497115723s to wait for apiserver process to appear ...
	I0927 00:38:00.135895   34022 api_server.go:88] waiting for apiserver healthz status ...
	I0927 00:38:00.135917   34022 api_server.go:253] Checking apiserver healthz at https://192.168.39.4:8443/healthz ...
	I0927 00:38:00.140183   34022 api_server.go:279] https://192.168.39.4:8443/healthz returned 200:
	ok
	I0927 00:38:00.140253   34022 round_trippers.go:463] GET https://192.168.39.4:8443/version
	I0927 00:38:00.140266   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:00.140276   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:00.140279   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:00.141056   34022 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0927 00:38:00.141139   34022 api_server.go:141] control plane version: v1.31.1
	I0927 00:38:00.141154   34022 api_server.go:131] duration metric: took 5.252594ms to wait for apiserver health ...
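After the pod waits, the apiserver is probed directly: GET /healthz must return the literal body "ok", and GET /version reports the control-plane version (v1.31.1 here). A minimal sketch of both probes through the clientset's discovery REST client, again with the kubeconfig path taken from the log:

```go
// The two probes shown above: GET /healthz (body "ok") and GET /version.
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19711-14935/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Raw GET against the unversioned /healthz endpoint.
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
	if err != nil || string(body) != "ok" {
		panic(fmt.Sprintf("healthz failed: body=%q err=%v", body, err))
	}
	// /version gives the control-plane (kube-apiserver) build version.
	ver, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz ok, control plane version %s\n", ver.GitVersion)
}
```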
	I0927 00:38:00.141160   34022 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 00:38:00.312479   34022 request.go:632] Waited for 171.239847ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods
	I0927 00:38:00.312534   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods
	I0927 00:38:00.312539   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:00.312546   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:00.312551   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:00.317803   34022 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0927 00:38:00.322748   34022 system_pods.go:59] 17 kube-system pods found
	I0927 00:38:00.322780   34022 system_pods.go:61] "coredns-7c65d6cfc9-479dv" [ee318b64-2274-4106-93ed-9f62151107f1] Running
	I0927 00:38:00.322785   34022 system_pods.go:61] "coredns-7c65d6cfc9-kg8kf" [ee98faac-e03c-427f-9a78-2cf06d2f85cf] Running
	I0927 00:38:00.322788   34022 system_pods.go:61] "etcd-ha-631834" [b8f1f451-d21c-4424-876e-7bd03381c7be] Running
	I0927 00:38:00.322791   34022 system_pods.go:61] "etcd-ha-631834-m02" [940292d8-f09a-4baa-9689-2099794ed736] Running
	I0927 00:38:00.322794   34022 system_pods.go:61] "kindnet-l6ncl" [3861149b-7c67-4d48-9d24-8fa08aefda61] Running
	I0927 00:38:00.322797   34022 system_pods.go:61] "kindnet-x7kr9" [a4f57dcf-a410-46e7-a539-0ad5f9fb2baf] Running
	I0927 00:38:00.322800   34022 system_pods.go:61] "kube-apiserver-ha-631834" [365182f9-e6fd-40f4-8f9f-a46de26a61d8] Running
	I0927 00:38:00.322804   34022 system_pods.go:61] "kube-apiserver-ha-631834-m02" [bc22191d-9799-4639-8ff2-3fdb3ae97be3] Running
	I0927 00:38:00.322807   34022 system_pods.go:61] "kube-controller-manager-ha-631834" [4b0a02b1-60a5-45bc-b9a0-dd5a0346da3d] Running
	I0927 00:38:00.322811   34022 system_pods.go:61] "kube-controller-manager-ha-631834-m02" [22f26e4f-f220-4682-ba5c-e3131880aab4] Running
	I0927 00:38:00.322814   34022 system_pods.go:61] "kube-proxy-7n244" [d9fac118-1b31-4cf3-bc21-a4536e45a511] Running
	I0927 00:38:00.322817   34022 system_pods.go:61] "kube-proxy-x2hvh" [81ada94c-89b8-4815-92e9-58edd00ef64f] Running
	I0927 00:38:00.322819   34022 system_pods.go:61] "kube-scheduler-ha-631834" [9e0b9052-8574-406b-987f-2ef799f40533] Running
	I0927 00:38:00.322822   34022 system_pods.go:61] "kube-scheduler-ha-631834-m02" [7952ee5f-18be-4863-a13a-39c4ee7acf29] Running
	I0927 00:38:00.322826   34022 system_pods.go:61] "kube-vip-ha-631834" [58aa0bcf-1f78-4ee9-8a7b-18afaf6a634c] Running
	I0927 00:38:00.322829   34022 system_pods.go:61] "kube-vip-ha-631834-m02" [75b23ac9-b5e5-4a90-b5ef-951dd52c1752] Running
	I0927 00:38:00.322832   34022 system_pods.go:61] "storage-provisioner" [dbafe551-2645-4016-83f6-1133824d926d] Running
	I0927 00:38:00.322837   34022 system_pods.go:74] duration metric: took 181.672494ms to wait for pod list to return data ...
	I0927 00:38:00.322843   34022 default_sa.go:34] waiting for default service account to be created ...
	I0927 00:38:00.512235   34022 request.go:632] Waited for 189.330159ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/default/serviceaccounts
	I0927 00:38:00.512297   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/default/serviceaccounts
	I0927 00:38:00.512302   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:00.512309   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:00.512313   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:00.517819   34022 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0927 00:38:00.518071   34022 default_sa.go:45] found service account: "default"
	I0927 00:38:00.518095   34022 default_sa.go:55] duration metric: took 195.245876ms for default service account to be created ...
	I0927 00:38:00.518107   34022 system_pods.go:116] waiting for k8s-apps to be running ...
	I0927 00:38:00.712113   34022 request.go:632] Waited for 193.916786ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods
	I0927 00:38:00.712176   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods
	I0927 00:38:00.712183   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:00.712193   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:00.712199   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:00.716946   34022 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 00:38:00.721442   34022 system_pods.go:86] 17 kube-system pods found
	I0927 00:38:00.721467   34022 system_pods.go:89] "coredns-7c65d6cfc9-479dv" [ee318b64-2274-4106-93ed-9f62151107f1] Running
	I0927 00:38:00.721472   34022 system_pods.go:89] "coredns-7c65d6cfc9-kg8kf" [ee98faac-e03c-427f-9a78-2cf06d2f85cf] Running
	I0927 00:38:00.721476   34022 system_pods.go:89] "etcd-ha-631834" [b8f1f451-d21c-4424-876e-7bd03381c7be] Running
	I0927 00:38:00.721479   34022 system_pods.go:89] "etcd-ha-631834-m02" [940292d8-f09a-4baa-9689-2099794ed736] Running
	I0927 00:38:00.721482   34022 system_pods.go:89] "kindnet-l6ncl" [3861149b-7c67-4d48-9d24-8fa08aefda61] Running
	I0927 00:38:00.721486   34022 system_pods.go:89] "kindnet-x7kr9" [a4f57dcf-a410-46e7-a539-0ad5f9fb2baf] Running
	I0927 00:38:00.721489   34022 system_pods.go:89] "kube-apiserver-ha-631834" [365182f9-e6fd-40f4-8f9f-a46de26a61d8] Running
	I0927 00:38:00.721493   34022 system_pods.go:89] "kube-apiserver-ha-631834-m02" [bc22191d-9799-4639-8ff2-3fdb3ae97be3] Running
	I0927 00:38:00.721496   34022 system_pods.go:89] "kube-controller-manager-ha-631834" [4b0a02b1-60a5-45bc-b9a0-dd5a0346da3d] Running
	I0927 00:38:00.721500   34022 system_pods.go:89] "kube-controller-manager-ha-631834-m02" [22f26e4f-f220-4682-ba5c-e3131880aab4] Running
	I0927 00:38:00.721503   34022 system_pods.go:89] "kube-proxy-7n244" [d9fac118-1b31-4cf3-bc21-a4536e45a511] Running
	I0927 00:38:00.721506   34022 system_pods.go:89] "kube-proxy-x2hvh" [81ada94c-89b8-4815-92e9-58edd00ef64f] Running
	I0927 00:38:00.721510   34022 system_pods.go:89] "kube-scheduler-ha-631834" [9e0b9052-8574-406b-987f-2ef799f40533] Running
	I0927 00:38:00.721512   34022 system_pods.go:89] "kube-scheduler-ha-631834-m02" [7952ee5f-18be-4863-a13a-39c4ee7acf29] Running
	I0927 00:38:00.721515   34022 system_pods.go:89] "kube-vip-ha-631834" [58aa0bcf-1f78-4ee9-8a7b-18afaf6a634c] Running
	I0927 00:38:00.721518   34022 system_pods.go:89] "kube-vip-ha-631834-m02" [75b23ac9-b5e5-4a90-b5ef-951dd52c1752] Running
	I0927 00:38:00.721520   34022 system_pods.go:89] "storage-provisioner" [dbafe551-2645-4016-83f6-1133824d926d] Running
	I0927 00:38:00.721525   34022 system_pods.go:126] duration metric: took 203.413353ms to wait for k8s-apps to be running ...
	I0927 00:38:00.721531   34022 system_svc.go:44] waiting for kubelet service to be running ....
	I0927 00:38:00.721569   34022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 00:38:00.736846   34022 system_svc.go:56] duration metric: took 15.307058ms WaitForService to wait for kubelet
	I0927 00:38:00.736868   34022 kubeadm.go:582] duration metric: took 22.09812477s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 00:38:00.736883   34022 node_conditions.go:102] verifying NodePressure condition ...
	I0927 00:38:00.912548   34022 request.go:632] Waited for 175.604909ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes
	I0927 00:38:00.912614   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes
	I0927 00:38:00.912620   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:00.912629   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:00.912637   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:00.916934   34022 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 00:38:00.918457   34022 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 00:38:00.918481   34022 node_conditions.go:123] node cpu capacity is 2
	I0927 00:38:00.918495   34022 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 00:38:00.918500   34022 node_conditions.go:123] node cpu capacity is 2
	I0927 00:38:00.918505   34022 node_conditions.go:105] duration metric: took 181.617208ms to run NodePressure ...
	I0927 00:38:00.918514   34022 start.go:241] waiting for startup goroutines ...
	I0927 00:38:00.918536   34022 start.go:255] writing updated cluster config ...
	I0927 00:38:00.920669   34022 out.go:201] 
	I0927 00:38:00.922354   34022 config.go:182] Loaded profile config "ha-631834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 00:38:00.922437   34022 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/config.json ...
	I0927 00:38:00.924101   34022 out.go:177] * Starting "ha-631834-m03" control-plane node in "ha-631834" cluster
	I0927 00:38:00.925280   34022 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 00:38:00.925296   34022 cache.go:56] Caching tarball of preloaded images
	I0927 00:38:00.925400   34022 preload.go:172] Found /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0927 00:38:00.925413   34022 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0927 00:38:00.925494   34022 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/config.json ...
	I0927 00:38:00.925653   34022 start.go:360] acquireMachinesLock for ha-631834-m03: {Name:mkef866a3f2eb57063758ef06140cca3a2e40b18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 00:38:00.925710   34022 start.go:364] duration metric: took 40.934µs to acquireMachinesLock for "ha-631834-m03"
	I0927 00:38:00.925731   34022 start.go:93] Provisioning new machine with config: &{Name:ha-631834 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-631834 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.184 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 00:38:00.925834   34022 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0927 00:38:00.927492   34022 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0927 00:38:00.927590   34022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:38:00.927628   34022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:38:00.942435   34022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46221
	I0927 00:38:00.942900   34022 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:38:00.943351   34022 main.go:141] libmachine: Using API Version  1
	I0927 00:38:00.943370   34022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:38:00.943711   34022 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:38:00.943853   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetMachineName
	I0927 00:38:00.943978   34022 main.go:141] libmachine: (ha-631834-m03) Calling .DriverName
	I0927 00:38:00.944142   34022 start.go:159] libmachine.API.Create for "ha-631834" (driver="kvm2")
	I0927 00:38:00.944167   34022 client.go:168] LocalClient.Create starting
	I0927 00:38:00.944197   34022 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem
	I0927 00:38:00.944234   34022 main.go:141] libmachine: Decoding PEM data...
	I0927 00:38:00.944249   34022 main.go:141] libmachine: Parsing certificate...
	I0927 00:38:00.944293   34022 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem
	I0927 00:38:00.944314   34022 main.go:141] libmachine: Decoding PEM data...
	I0927 00:38:00.944324   34022 main.go:141] libmachine: Parsing certificate...
	I0927 00:38:00.944337   34022 main.go:141] libmachine: Running pre-create checks...
	I0927 00:38:00.944345   34022 main.go:141] libmachine: (ha-631834-m03) Calling .PreCreateCheck
	I0927 00:38:00.944509   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetConfigRaw
	I0927 00:38:00.944854   34022 main.go:141] libmachine: Creating machine...
	I0927 00:38:00.944866   34022 main.go:141] libmachine: (ha-631834-m03) Calling .Create
	I0927 00:38:00.945006   34022 main.go:141] libmachine: (ha-631834-m03) Creating KVM machine...
	I0927 00:38:00.946130   34022 main.go:141] libmachine: (ha-631834-m03) DBG | found existing default KVM network
	I0927 00:38:00.946246   34022 main.go:141] libmachine: (ha-631834-m03) DBG | found existing private KVM network mk-ha-631834
	I0927 00:38:00.946370   34022 main.go:141] libmachine: (ha-631834-m03) Setting up store path in /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m03 ...
	I0927 00:38:00.946396   34022 main.go:141] libmachine: (ha-631834-m03) Building disk image from file:///home/jenkins/minikube-integration/19711-14935/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0927 00:38:00.946450   34022 main.go:141] libmachine: (ha-631834-m03) DBG | I0927 00:38:00.946342   34779 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19711-14935/.minikube
	I0927 00:38:00.946538   34022 main.go:141] libmachine: (ha-631834-m03) Downloading /home/jenkins/minikube-integration/19711-14935/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19711-14935/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0927 00:38:01.172256   34022 main.go:141] libmachine: (ha-631834-m03) DBG | I0927 00:38:01.172126   34779 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m03/id_rsa...
	I0927 00:38:01.300878   34022 main.go:141] libmachine: (ha-631834-m03) DBG | I0927 00:38:01.300754   34779 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m03/ha-631834-m03.rawdisk...
	I0927 00:38:01.300913   34022 main.go:141] libmachine: (ha-631834-m03) DBG | Writing magic tar header
	I0927 00:38:01.300930   34022 main.go:141] libmachine: (ha-631834-m03) DBG | Writing SSH key tar header
	I0927 00:38:01.300947   34022 main.go:141] libmachine: (ha-631834-m03) DBG | I0927 00:38:01.300907   34779 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m03 ...
	I0927 00:38:01.301077   34022 main.go:141] libmachine: (ha-631834-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m03
	I0927 00:38:01.301177   34022 main.go:141] libmachine: (ha-631834-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935/.minikube/machines
	I0927 00:38:01.301201   34022 main.go:141] libmachine: (ha-631834-m03) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m03 (perms=drwx------)
	I0927 00:38:01.301210   34022 main.go:141] libmachine: (ha-631834-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935/.minikube
	I0927 00:38:01.301221   34022 main.go:141] libmachine: (ha-631834-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935
	I0927 00:38:01.301229   34022 main.go:141] libmachine: (ha-631834-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0927 00:38:01.301238   34022 main.go:141] libmachine: (ha-631834-m03) DBG | Checking permissions on dir: /home/jenkins
	I0927 00:38:01.301243   34022 main.go:141] libmachine: (ha-631834-m03) DBG | Checking permissions on dir: /home
	I0927 00:38:01.301252   34022 main.go:141] libmachine: (ha-631834-m03) DBG | Skipping /home - not owner
	I0927 00:38:01.301261   34022 main.go:141] libmachine: (ha-631834-m03) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935/.minikube/machines (perms=drwxr-xr-x)
	I0927 00:38:01.301272   34022 main.go:141] libmachine: (ha-631834-m03) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935/.minikube (perms=drwxr-xr-x)
	I0927 00:38:01.301340   34022 main.go:141] libmachine: (ha-631834-m03) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935 (perms=drwxrwxr-x)
	I0927 00:38:01.301369   34022 main.go:141] libmachine: (ha-631834-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0927 00:38:01.301385   34022 main.go:141] libmachine: (ha-631834-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0927 00:38:01.301397   34022 main.go:141] libmachine: (ha-631834-m03) Creating domain...
	I0927 00:38:01.302347   34022 main.go:141] libmachine: (ha-631834-m03) define libvirt domain using xml: 
	I0927 00:38:01.302369   34022 main.go:141] libmachine: (ha-631834-m03) <domain type='kvm'>
	I0927 00:38:01.302379   34022 main.go:141] libmachine: (ha-631834-m03)   <name>ha-631834-m03</name>
	I0927 00:38:01.302387   34022 main.go:141] libmachine: (ha-631834-m03)   <memory unit='MiB'>2200</memory>
	I0927 00:38:01.302396   34022 main.go:141] libmachine: (ha-631834-m03)   <vcpu>2</vcpu>
	I0927 00:38:01.302403   34022 main.go:141] libmachine: (ha-631834-m03)   <features>
	I0927 00:38:01.302416   34022 main.go:141] libmachine: (ha-631834-m03)     <acpi/>
	I0927 00:38:01.302423   34022 main.go:141] libmachine: (ha-631834-m03)     <apic/>
	I0927 00:38:01.302428   34022 main.go:141] libmachine: (ha-631834-m03)     <pae/>
	I0927 00:38:01.302434   34022 main.go:141] libmachine: (ha-631834-m03)     
	I0927 00:38:01.302439   34022 main.go:141] libmachine: (ha-631834-m03)   </features>
	I0927 00:38:01.302446   34022 main.go:141] libmachine: (ha-631834-m03)   <cpu mode='host-passthrough'>
	I0927 00:38:01.302451   34022 main.go:141] libmachine: (ha-631834-m03)   
	I0927 00:38:01.302457   34022 main.go:141] libmachine: (ha-631834-m03)   </cpu>
	I0927 00:38:01.302482   34022 main.go:141] libmachine: (ha-631834-m03)   <os>
	I0927 00:38:01.302504   34022 main.go:141] libmachine: (ha-631834-m03)     <type>hvm</type>
	I0927 00:38:01.302517   34022 main.go:141] libmachine: (ha-631834-m03)     <boot dev='cdrom'/>
	I0927 00:38:01.302528   34022 main.go:141] libmachine: (ha-631834-m03)     <boot dev='hd'/>
	I0927 00:38:01.302541   34022 main.go:141] libmachine: (ha-631834-m03)     <bootmenu enable='no'/>
	I0927 00:38:01.302550   34022 main.go:141] libmachine: (ha-631834-m03)   </os>
	I0927 00:38:01.302558   34022 main.go:141] libmachine: (ha-631834-m03)   <devices>
	I0927 00:38:01.302567   34022 main.go:141] libmachine: (ha-631834-m03)     <disk type='file' device='cdrom'>
	I0927 00:38:01.302594   34022 main.go:141] libmachine: (ha-631834-m03)       <source file='/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m03/boot2docker.iso'/>
	I0927 00:38:01.302616   34022 main.go:141] libmachine: (ha-631834-m03)       <target dev='hdc' bus='scsi'/>
	I0927 00:38:01.302629   34022 main.go:141] libmachine: (ha-631834-m03)       <readonly/>
	I0927 00:38:01.302639   34022 main.go:141] libmachine: (ha-631834-m03)     </disk>
	I0927 00:38:01.302651   34022 main.go:141] libmachine: (ha-631834-m03)     <disk type='file' device='disk'>
	I0927 00:38:01.302663   34022 main.go:141] libmachine: (ha-631834-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0927 00:38:01.302681   34022 main.go:141] libmachine: (ha-631834-m03)       <source file='/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m03/ha-631834-m03.rawdisk'/>
	I0927 00:38:01.302695   34022 main.go:141] libmachine: (ha-631834-m03)       <target dev='hda' bus='virtio'/>
	I0927 00:38:01.302706   34022 main.go:141] libmachine: (ha-631834-m03)     </disk>
	I0927 00:38:01.302713   34022 main.go:141] libmachine: (ha-631834-m03)     <interface type='network'>
	I0927 00:38:01.302718   34022 main.go:141] libmachine: (ha-631834-m03)       <source network='mk-ha-631834'/>
	I0927 00:38:01.302725   34022 main.go:141] libmachine: (ha-631834-m03)       <model type='virtio'/>
	I0927 00:38:01.302733   34022 main.go:141] libmachine: (ha-631834-m03)     </interface>
	I0927 00:38:01.302743   34022 main.go:141] libmachine: (ha-631834-m03)     <interface type='network'>
	I0927 00:38:01.302756   34022 main.go:141] libmachine: (ha-631834-m03)       <source network='default'/>
	I0927 00:38:01.302769   34022 main.go:141] libmachine: (ha-631834-m03)       <model type='virtio'/>
	I0927 00:38:01.302780   34022 main.go:141] libmachine: (ha-631834-m03)     </interface>
	I0927 00:38:01.302786   34022 main.go:141] libmachine: (ha-631834-m03)     <serial type='pty'>
	I0927 00:38:01.302798   34022 main.go:141] libmachine: (ha-631834-m03)       <target port='0'/>
	I0927 00:38:01.302806   34022 main.go:141] libmachine: (ha-631834-m03)     </serial>
	I0927 00:38:01.302811   34022 main.go:141] libmachine: (ha-631834-m03)     <console type='pty'>
	I0927 00:38:01.302824   34022 main.go:141] libmachine: (ha-631834-m03)       <target type='serial' port='0'/>
	I0927 00:38:01.302835   34022 main.go:141] libmachine: (ha-631834-m03)     </console>
	I0927 00:38:01.302846   34022 main.go:141] libmachine: (ha-631834-m03)     <rng model='virtio'>
	I0927 00:38:01.302853   34022 main.go:141] libmachine: (ha-631834-m03)       <backend model='random'>/dev/random</backend>
	I0927 00:38:01.302860   34022 main.go:141] libmachine: (ha-631834-m03)     </rng>
	I0927 00:38:01.302867   34022 main.go:141] libmachine: (ha-631834-m03)     
	I0927 00:38:01.302871   34022 main.go:141] libmachine: (ha-631834-m03)     
	I0927 00:38:01.302876   34022 main.go:141] libmachine: (ha-631834-m03)   </devices>
	I0927 00:38:01.302885   34022 main.go:141] libmachine: (ha-631834-m03) </domain>
	I0927 00:38:01.302891   34022 main.go:141] libmachine: (ha-631834-m03) 
	I0927 00:38:01.309656   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4f:aa:cd in network default
	I0927 00:38:01.310171   34022 main.go:141] libmachine: (ha-631834-m03) Ensuring networks are active...
	I0927 00:38:01.310187   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:01.310859   34022 main.go:141] libmachine: (ha-631834-m03) Ensuring network default is active
	I0927 00:38:01.311183   34022 main.go:141] libmachine: (ha-631834-m03) Ensuring network mk-ha-631834 is active
	I0927 00:38:01.311550   34022 main.go:141] libmachine: (ha-631834-m03) Getting domain xml...
	I0927 00:38:01.312351   34022 main.go:141] libmachine: (ha-631834-m03) Creating domain...
	I0927 00:38:02.542322   34022 main.go:141] libmachine: (ha-631834-m03) Waiting to get IP...
	I0927 00:38:02.542980   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:02.543377   34022 main.go:141] libmachine: (ha-631834-m03) DBG | unable to find current IP address of domain ha-631834-m03 in network mk-ha-631834
	I0927 00:38:02.543426   34022 main.go:141] libmachine: (ha-631834-m03) DBG | I0927 00:38:02.543365   34779 retry.go:31] will retry after 295.787312ms: waiting for machine to come up
	I0927 00:38:02.840874   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:02.841334   34022 main.go:141] libmachine: (ha-631834-m03) DBG | unable to find current IP address of domain ha-631834-m03 in network mk-ha-631834
	I0927 00:38:02.841363   34022 main.go:141] libmachine: (ha-631834-m03) DBG | I0927 00:38:02.841297   34779 retry.go:31] will retry after 248.489193ms: waiting for machine to come up
	I0927 00:38:03.091718   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:03.092118   34022 main.go:141] libmachine: (ha-631834-m03) DBG | unable to find current IP address of domain ha-631834-m03 in network mk-ha-631834
	I0927 00:38:03.092144   34022 main.go:141] libmachine: (ha-631834-m03) DBG | I0927 00:38:03.092091   34779 retry.go:31] will retry after 441.574448ms: waiting for machine to come up
	I0927 00:38:03.535897   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:03.536373   34022 main.go:141] libmachine: (ha-631834-m03) DBG | unable to find current IP address of domain ha-631834-m03 in network mk-ha-631834
	I0927 00:38:03.536426   34022 main.go:141] libmachine: (ha-631834-m03) DBG | I0927 00:38:03.536344   34779 retry.go:31] will retry after 516.671192ms: waiting for machine to come up
	I0927 00:38:04.054938   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:04.055415   34022 main.go:141] libmachine: (ha-631834-m03) DBG | unable to find current IP address of domain ha-631834-m03 in network mk-ha-631834
	I0927 00:38:04.055448   34022 main.go:141] libmachine: (ha-631834-m03) DBG | I0927 00:38:04.055376   34779 retry.go:31] will retry after 716.952406ms: waiting for machine to come up
	I0927 00:38:04.774184   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:04.774597   34022 main.go:141] libmachine: (ha-631834-m03) DBG | unable to find current IP address of domain ha-631834-m03 in network mk-ha-631834
	I0927 00:38:04.774626   34022 main.go:141] libmachine: (ha-631834-m03) DBG | I0927 00:38:04.774544   34779 retry.go:31] will retry after 932.879879ms: waiting for machine to come up
	I0927 00:38:05.710264   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:05.710744   34022 main.go:141] libmachine: (ha-631834-m03) DBG | unable to find current IP address of domain ha-631834-m03 in network mk-ha-631834
	I0927 00:38:05.710771   34022 main.go:141] libmachine: (ha-631834-m03) DBG | I0927 00:38:05.710689   34779 retry.go:31] will retry after 865.055707ms: waiting for machine to come up
	I0927 00:38:06.577372   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:06.577736   34022 main.go:141] libmachine: (ha-631834-m03) DBG | unable to find current IP address of domain ha-631834-m03 in network mk-ha-631834
	I0927 00:38:06.577763   34022 main.go:141] libmachine: (ha-631834-m03) DBG | I0927 00:38:06.577713   34779 retry.go:31] will retry after 1.070388843s: waiting for machine to come up
	I0927 00:38:07.649656   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:07.650114   34022 main.go:141] libmachine: (ha-631834-m03) DBG | unable to find current IP address of domain ha-631834-m03 in network mk-ha-631834
	I0927 00:38:07.650136   34022 main.go:141] libmachine: (ha-631834-m03) DBG | I0927 00:38:07.650079   34779 retry.go:31] will retry after 1.328681925s: waiting for machine to come up
	I0927 00:38:08.980362   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:08.980901   34022 main.go:141] libmachine: (ha-631834-m03) DBG | unable to find current IP address of domain ha-631834-m03 in network mk-ha-631834
	I0927 00:38:08.980930   34022 main.go:141] libmachine: (ha-631834-m03) DBG | I0927 00:38:08.980854   34779 retry.go:31] will retry after 1.891343357s: waiting for machine to come up
	I0927 00:38:10.874136   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:10.874597   34022 main.go:141] libmachine: (ha-631834-m03) DBG | unable to find current IP address of domain ha-631834-m03 in network mk-ha-631834
	I0927 00:38:10.874626   34022 main.go:141] libmachine: (ha-631834-m03) DBG | I0927 00:38:10.874547   34779 retry.go:31] will retry after 1.77968387s: waiting for machine to come up
	I0927 00:38:12.656346   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:12.656707   34022 main.go:141] libmachine: (ha-631834-m03) DBG | unable to find current IP address of domain ha-631834-m03 in network mk-ha-631834
	I0927 00:38:12.656734   34022 main.go:141] libmachine: (ha-631834-m03) DBG | I0927 00:38:12.656661   34779 retry.go:31] will retry after 2.690596335s: waiting for machine to come up
	I0927 00:38:15.349488   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:15.349902   34022 main.go:141] libmachine: (ha-631834-m03) DBG | unable to find current IP address of domain ha-631834-m03 in network mk-ha-631834
	I0927 00:38:15.349938   34022 main.go:141] libmachine: (ha-631834-m03) DBG | I0927 00:38:15.349838   34779 retry.go:31] will retry after 3.212522074s: waiting for machine to come up
	I0927 00:38:18.564307   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:18.564733   34022 main.go:141] libmachine: (ha-631834-m03) DBG | unable to find current IP address of domain ha-631834-m03 in network mk-ha-631834
	I0927 00:38:18.564759   34022 main.go:141] libmachine: (ha-631834-m03) DBG | I0927 00:38:18.564688   34779 retry.go:31] will retry after 5.536998184s: waiting for machine to come up
	I0927 00:38:24.105735   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:24.106267   34022 main.go:141] libmachine: (ha-631834-m03) Found IP for machine: 192.168.39.92
	I0927 00:38:24.106298   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has current primary IP address 192.168.39.92 and MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:24.106307   34022 main.go:141] libmachine: (ha-631834-m03) Reserving static IP address...
	I0927 00:38:24.106789   34022 main.go:141] libmachine: (ha-631834-m03) DBG | unable to find host DHCP lease matching {name: "ha-631834-m03", mac: "52:54:00:4c:25:39", ip: "192.168.39.92"} in network mk-ha-631834
	I0927 00:38:24.178177   34022 main.go:141] libmachine: (ha-631834-m03) Reserved static IP address: 192.168.39.92
	I0927 00:38:24.178214   34022 main.go:141] libmachine: (ha-631834-m03) DBG | Getting to WaitForSSH function...
	I0927 00:38:24.178222   34022 main.go:141] libmachine: (ha-631834-m03) Waiting for SSH to be available...
	I0927 00:38:24.180785   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:24.181172   34022 main.go:141] libmachine: (ha-631834-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:25:39", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:38:15 +0000 UTC Type:0 Mac:52:54:00:4c:25:39 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:minikube Clientid:01:52:54:00:4c:25:39}
	I0927 00:38:24.181205   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined IP address 192.168.39.92 and MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:24.181352   34022 main.go:141] libmachine: (ha-631834-m03) DBG | Using SSH client type: external
	I0927 00:38:24.181375   34022 main.go:141] libmachine: (ha-631834-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m03/id_rsa (-rw-------)
	I0927 00:38:24.181402   34022 main.go:141] libmachine: (ha-631834-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.92 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 00:38:24.181416   34022 main.go:141] libmachine: (ha-631834-m03) DBG | About to run SSH command:
	I0927 00:38:24.181425   34022 main.go:141] libmachine: (ha-631834-m03) DBG | exit 0
	I0927 00:38:24.307152   34022 main.go:141] libmachine: (ha-631834-m03) DBG | SSH cmd err, output: <nil>: 
	I0927 00:38:24.307447   34022 main.go:141] libmachine: (ha-631834-m03) KVM machine creation complete!
	I0927 00:38:24.307763   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetConfigRaw
	I0927 00:38:24.308355   34022 main.go:141] libmachine: (ha-631834-m03) Calling .DriverName
	I0927 00:38:24.308580   34022 main.go:141] libmachine: (ha-631834-m03) Calling .DriverName
	I0927 00:38:24.308729   34022 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0927 00:38:24.308741   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetState
	I0927 00:38:24.310053   34022 main.go:141] libmachine: Detecting operating system of created instance...
	I0927 00:38:24.310069   34022 main.go:141] libmachine: Waiting for SSH to be available...
	I0927 00:38:24.310082   34022 main.go:141] libmachine: Getting to WaitForSSH function...
	I0927 00:38:24.310091   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHHostname
	I0927 00:38:24.312140   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:24.312456   34022 main.go:141] libmachine: (ha-631834-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:25:39", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:38:15 +0000 UTC Type:0 Mac:52:54:00:4c:25:39 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-631834-m03 Clientid:01:52:54:00:4c:25:39}
	I0927 00:38:24.312481   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined IP address 192.168.39.92 and MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:24.312582   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHPort
	I0927 00:38:24.312762   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHKeyPath
	I0927 00:38:24.312951   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHKeyPath
	I0927 00:38:24.313095   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHUsername
	I0927 00:38:24.313255   34022 main.go:141] libmachine: Using SSH client type: native
	I0927 00:38:24.313466   34022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0927 00:38:24.313480   34022 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0927 00:38:24.422933   34022 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 00:38:24.422970   34022 main.go:141] libmachine: Detecting the provisioner...
	I0927 00:38:24.422980   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHHostname
	I0927 00:38:24.426661   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:24.427100   34022 main.go:141] libmachine: (ha-631834-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:25:39", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:38:15 +0000 UTC Type:0 Mac:52:54:00:4c:25:39 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-631834-m03 Clientid:01:52:54:00:4c:25:39}
	I0927 00:38:24.427125   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined IP address 192.168.39.92 and MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:24.427318   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHPort
	I0927 00:38:24.427511   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHKeyPath
	I0927 00:38:24.427638   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHKeyPath
	I0927 00:38:24.427791   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHUsername
	I0927 00:38:24.427987   34022 main.go:141] libmachine: Using SSH client type: native
	I0927 00:38:24.428244   34022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0927 00:38:24.428263   34022 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0927 00:38:24.540183   34022 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0927 00:38:24.540244   34022 main.go:141] libmachine: found compatible host: buildroot
	I0927 00:38:24.540253   34022 main.go:141] libmachine: Provisioning with buildroot...
	I0927 00:38:24.540261   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetMachineName
	I0927 00:38:24.540508   34022 buildroot.go:166] provisioning hostname "ha-631834-m03"
	I0927 00:38:24.540530   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetMachineName
	I0927 00:38:24.540689   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHHostname
	I0927 00:38:24.543040   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:24.543414   34022 main.go:141] libmachine: (ha-631834-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:25:39", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:38:15 +0000 UTC Type:0 Mac:52:54:00:4c:25:39 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-631834-m03 Clientid:01:52:54:00:4c:25:39}
	I0927 00:38:24.543443   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined IP address 192.168.39.92 and MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:24.543611   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHPort
	I0927 00:38:24.543765   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHKeyPath
	I0927 00:38:24.543907   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHKeyPath
	I0927 00:38:24.544102   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHUsername
	I0927 00:38:24.544311   34022 main.go:141] libmachine: Using SSH client type: native
	I0927 00:38:24.544483   34022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0927 00:38:24.544499   34022 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-631834-m03 && echo "ha-631834-m03" | sudo tee /etc/hostname
	I0927 00:38:24.670921   34022 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-631834-m03
	
	I0927 00:38:24.670950   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHHostname
	I0927 00:38:24.673565   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:24.673864   34022 main.go:141] libmachine: (ha-631834-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:25:39", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:38:15 +0000 UTC Type:0 Mac:52:54:00:4c:25:39 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-631834-m03 Clientid:01:52:54:00:4c:25:39}
	I0927 00:38:24.673890   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined IP address 192.168.39.92 and MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:24.674020   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHPort
	I0927 00:38:24.674183   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHKeyPath
	I0927 00:38:24.674310   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHKeyPath
	I0927 00:38:24.674419   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHUsername
	I0927 00:38:24.674647   34022 main.go:141] libmachine: Using SSH client type: native
	I0927 00:38:24.674798   34022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0927 00:38:24.674812   34022 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-631834-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-631834-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-631834-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 00:38:24.791979   34022 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 00:38:24.792005   34022 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19711-14935/.minikube CaCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19711-14935/.minikube}
	I0927 00:38:24.792027   34022 buildroot.go:174] setting up certificates
	I0927 00:38:24.792036   34022 provision.go:84] configureAuth start
	I0927 00:38:24.792044   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetMachineName
	I0927 00:38:24.792291   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetIP
	I0927 00:38:24.794829   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:24.795183   34022 main.go:141] libmachine: (ha-631834-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:25:39", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:38:15 +0000 UTC Type:0 Mac:52:54:00:4c:25:39 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-631834-m03 Clientid:01:52:54:00:4c:25:39}
	I0927 00:38:24.795216   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined IP address 192.168.39.92 and MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:24.795380   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHHostname
	I0927 00:38:24.797351   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:24.797611   34022 main.go:141] libmachine: (ha-631834-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:25:39", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:38:15 +0000 UTC Type:0 Mac:52:54:00:4c:25:39 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-631834-m03 Clientid:01:52:54:00:4c:25:39}
	I0927 00:38:24.797635   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined IP address 192.168.39.92 and MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:24.797733   34022 provision.go:143] copyHostCerts
	I0927 00:38:24.797765   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem
	I0927 00:38:24.797804   34022 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem, removing ...
	I0927 00:38:24.797814   34022 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem
	I0927 00:38:24.797876   34022 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem (1078 bytes)
	I0927 00:38:24.797945   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem
	I0927 00:38:24.797964   34022 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem, removing ...
	I0927 00:38:24.797980   34022 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem
	I0927 00:38:24.798015   34022 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem (1123 bytes)
	I0927 00:38:24.798060   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem
	I0927 00:38:24.798079   34022 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem, removing ...
	I0927 00:38:24.798086   34022 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem
	I0927 00:38:24.798115   34022 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem (1675 bytes)
	I0927 00:38:24.798186   34022 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem org=jenkins.ha-631834-m03 san=[127.0.0.1 192.168.39.92 ha-631834-m03 localhost minikube]
	I0927 00:38:24.887325   34022 provision.go:177] copyRemoteCerts
	I0927 00:38:24.887388   34022 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 00:38:24.887417   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHHostname
	I0927 00:38:24.889796   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:24.890201   34022 main.go:141] libmachine: (ha-631834-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:25:39", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:38:15 +0000 UTC Type:0 Mac:52:54:00:4c:25:39 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-631834-m03 Clientid:01:52:54:00:4c:25:39}
	I0927 00:38:24.890231   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined IP address 192.168.39.92 and MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:24.890378   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHPort
	I0927 00:38:24.890525   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHKeyPath
	I0927 00:38:24.890673   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHUsername
	I0927 00:38:24.890757   34022 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m03/id_rsa Username:docker}
	I0927 00:38:24.974577   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0927 00:38:24.974640   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0927 00:38:24.998800   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0927 00:38:24.998882   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0927 00:38:25.023015   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0927 00:38:25.023097   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0927 00:38:25.047091   34022 provision.go:87] duration metric: took 255.040854ms to configureAuth
	I0927 00:38:25.047129   34022 buildroot.go:189] setting minikube options for container-runtime
	I0927 00:38:25.047386   34022 config.go:182] Loaded profile config "ha-631834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 00:38:25.047470   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHHostname
	I0927 00:38:25.050122   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:25.050450   34022 main.go:141] libmachine: (ha-631834-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:25:39", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:38:15 +0000 UTC Type:0 Mac:52:54:00:4c:25:39 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-631834-m03 Clientid:01:52:54:00:4c:25:39}
	I0927 00:38:25.050478   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined IP address 192.168.39.92 and MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:25.050639   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHPort
	I0927 00:38:25.050791   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHKeyPath
	I0927 00:38:25.050936   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHKeyPath
	I0927 00:38:25.051044   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHUsername
	I0927 00:38:25.051180   34022 main.go:141] libmachine: Using SSH client type: native
	I0927 00:38:25.051392   34022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0927 00:38:25.051410   34022 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 00:38:25.271341   34022 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 00:38:25.271367   34022 main.go:141] libmachine: Checking connection to Docker...
	I0927 00:38:25.271379   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetURL
	I0927 00:38:25.272505   34022 main.go:141] libmachine: (ha-631834-m03) DBG | Using libvirt version 6000000
	I0927 00:38:25.274516   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:25.274843   34022 main.go:141] libmachine: (ha-631834-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:25:39", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:38:15 +0000 UTC Type:0 Mac:52:54:00:4c:25:39 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-631834-m03 Clientid:01:52:54:00:4c:25:39}
	I0927 00:38:25.274868   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined IP address 192.168.39.92 and MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:25.275000   34022 main.go:141] libmachine: Docker is up and running!
	I0927 00:38:25.275010   34022 main.go:141] libmachine: Reticulating splines...
	I0927 00:38:25.275018   34022 client.go:171] duration metric: took 24.330841027s to LocalClient.Create
	I0927 00:38:25.275044   34022 start.go:167] duration metric: took 24.330903271s to libmachine.API.Create "ha-631834"
	I0927 00:38:25.275059   34022 start.go:293] postStartSetup for "ha-631834-m03" (driver="kvm2")
	I0927 00:38:25.275078   34022 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 00:38:25.275102   34022 main.go:141] libmachine: (ha-631834-m03) Calling .DriverName
	I0927 00:38:25.275329   34022 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 00:38:25.275358   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHHostname
	I0927 00:38:25.277447   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:25.277789   34022 main.go:141] libmachine: (ha-631834-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:25:39", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:38:15 +0000 UTC Type:0 Mac:52:54:00:4c:25:39 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-631834-m03 Clientid:01:52:54:00:4c:25:39}
	I0927 00:38:25.277809   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined IP address 192.168.39.92 and MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:25.277981   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHPort
	I0927 00:38:25.278138   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHKeyPath
	I0927 00:38:25.278294   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHUsername
	I0927 00:38:25.278392   34022 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m03/id_rsa Username:docker}
	I0927 00:38:25.363118   34022 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 00:38:25.367416   34022 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 00:38:25.367440   34022 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/addons for local assets ...
	I0927 00:38:25.367494   34022 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/files for local assets ...
	I0927 00:38:25.367565   34022 filesync.go:149] local asset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> 221382.pem in /etc/ssl/certs
	I0927 00:38:25.367574   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> /etc/ssl/certs/221382.pem
	I0927 00:38:25.367651   34022 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 00:38:25.377433   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /etc/ssl/certs/221382.pem (1708 bytes)
	I0927 00:38:25.402022   34022 start.go:296] duration metric: took 126.949525ms for postStartSetup
	I0927 00:38:25.402069   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetConfigRaw
	I0927 00:38:25.402606   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetIP
	I0927 00:38:25.405298   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:25.405691   34022 main.go:141] libmachine: (ha-631834-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:25:39", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:38:15 +0000 UTC Type:0 Mac:52:54:00:4c:25:39 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-631834-m03 Clientid:01:52:54:00:4c:25:39}
	I0927 00:38:25.405718   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined IP address 192.168.39.92 and MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:25.406069   34022 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/config.json ...
	I0927 00:38:25.406300   34022 start.go:128] duration metric: took 24.480456335s to createHost
	I0927 00:38:25.406329   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHHostname
	I0927 00:38:25.408691   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:25.409060   34022 main.go:141] libmachine: (ha-631834-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:25:39", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:38:15 +0000 UTC Type:0 Mac:52:54:00:4c:25:39 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-631834-m03 Clientid:01:52:54:00:4c:25:39}
	I0927 00:38:25.409076   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined IP address 192.168.39.92 and MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:25.409274   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHPort
	I0927 00:38:25.409443   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHKeyPath
	I0927 00:38:25.409610   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHKeyPath
	I0927 00:38:25.409745   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHUsername
	I0927 00:38:25.409905   34022 main.go:141] libmachine: Using SSH client type: native
	I0927 00:38:25.410111   34022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0927 00:38:25.410124   34022 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 00:38:25.520084   34022 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727397505.498121645
	
	I0927 00:38:25.520105   34022 fix.go:216] guest clock: 1727397505.498121645
	I0927 00:38:25.520112   34022 fix.go:229] Guest: 2024-09-27 00:38:25.498121645 +0000 UTC Remote: 2024-09-27 00:38:25.406314622 +0000 UTC m=+144.706814205 (delta=91.807023ms)
	I0927 00:38:25.520126   34022 fix.go:200] guest clock delta is within tolerance: 91.807023ms
	I0927 00:38:25.520131   34022 start.go:83] releasing machines lock for "ha-631834-m03", held for 24.594409944s
	I0927 00:38:25.520153   34022 main.go:141] libmachine: (ha-631834-m03) Calling .DriverName
	I0927 00:38:25.520388   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetIP
	I0927 00:38:25.523018   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:25.523441   34022 main.go:141] libmachine: (ha-631834-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:25:39", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:38:15 +0000 UTC Type:0 Mac:52:54:00:4c:25:39 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-631834-m03 Clientid:01:52:54:00:4c:25:39}
	I0927 00:38:25.523469   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined IP address 192.168.39.92 and MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:25.525631   34022 out.go:177] * Found network options:
	I0927 00:38:25.527157   34022 out.go:177]   - NO_PROXY=192.168.39.4,192.168.39.184
	W0927 00:38:25.528442   34022 proxy.go:119] fail to check proxy env: Error ip not in block
	W0927 00:38:25.528464   34022 proxy.go:119] fail to check proxy env: Error ip not in block
	I0927 00:38:25.528477   34022 main.go:141] libmachine: (ha-631834-m03) Calling .DriverName
	I0927 00:38:25.528981   34022 main.go:141] libmachine: (ha-631834-m03) Calling .DriverName
	I0927 00:38:25.529153   34022 main.go:141] libmachine: (ha-631834-m03) Calling .DriverName
	I0927 00:38:25.529222   34022 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 00:38:25.529262   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHHostname
	W0927 00:38:25.529362   34022 proxy.go:119] fail to check proxy env: Error ip not in block
	W0927 00:38:25.529390   34022 proxy.go:119] fail to check proxy env: Error ip not in block
	I0927 00:38:25.529477   34022 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 00:38:25.529503   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHHostname
	I0927 00:38:25.532028   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:25.532225   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:25.532427   34022 main.go:141] libmachine: (ha-631834-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:25:39", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:38:15 +0000 UTC Type:0 Mac:52:54:00:4c:25:39 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-631834-m03 Clientid:01:52:54:00:4c:25:39}
	I0927 00:38:25.532453   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined IP address 192.168.39.92 and MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:25.532602   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHPort
	I0927 00:38:25.532629   34022 main.go:141] libmachine: (ha-631834-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:25:39", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:38:15 +0000 UTC Type:0 Mac:52:54:00:4c:25:39 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-631834-m03 Clientid:01:52:54:00:4c:25:39}
	I0927 00:38:25.532655   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined IP address 192.168.39.92 and MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:25.532783   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHKeyPath
	I0927 00:38:25.532794   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHPort
	I0927 00:38:25.532975   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHKeyPath
	I0927 00:38:25.532976   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHUsername
	I0927 00:38:25.533132   34022 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m03/id_rsa Username:docker}
	I0927 00:38:25.533194   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHUsername
	I0927 00:38:25.533378   34022 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m03/id_rsa Username:docker}
	I0927 00:38:25.772033   34022 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 00:38:25.777746   34022 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 00:38:25.777803   34022 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 00:38:25.795383   34022 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
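The find/mv pair above renames any pre-existing bridge or podman CNI configs so they cannot conflict with the cluster CNI; the log prints the command unquoted, so here is the same operation with explicit quoting (paths and the `.mk_disabled` suffix taken from the log):

    # disable stray bridge/podman CNI configs by renaming them
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf "%p, " -exec sh -c 'sudo mv "$1" "$1".mk_disabled' _ {} \;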
	I0927 00:38:25.795403   34022 start.go:495] detecting cgroup driver to use...
	I0927 00:38:25.795486   34022 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 00:38:25.812841   34022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 00:38:25.827240   34022 docker.go:217] disabling cri-docker service (if available) ...
	I0927 00:38:25.827295   34022 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 00:38:25.841149   34022 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 00:38:25.855688   34022 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 00:38:25.975549   34022 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 00:38:26.132600   34022 docker.go:233] disabling docker service ...
	I0927 00:38:26.132671   34022 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 00:38:26.147138   34022 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 00:38:26.160283   34022 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 00:38:26.280885   34022 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 00:38:26.397744   34022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 00:38:26.412063   34022 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 00:38:26.431067   34022 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0927 00:38:26.431183   34022 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:38:26.443586   34022 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 00:38:26.443649   34022 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:38:26.455922   34022 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:38:26.466779   34022 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:38:26.478101   34022 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 00:38:26.489198   34022 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:38:26.499613   34022 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:38:26.517900   34022 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:38:26.528412   34022 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 00:38:26.537702   34022 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0927 00:38:26.537761   34022 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0927 00:38:26.550744   34022 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 00:38:26.561809   34022 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:38:26.685216   34022 ssh_runner.go:195] Run: sudo systemctl restart crio
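All of the cri-o tweaks above (pause image, cgroupfs cgroup driver, conmon cgroup, unprivileged-port sysctl) land in /etc/crio/crio.conf.d/02-crio.conf before the daemon is restarted. A condensed sketch of the same edits, using only values that appear in the log:

    CONF=/etc/crio/crio.conf.d/02-crio.conf
    # pin the pause image and switch cri-o to the cgroupfs driver
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    # keep conmon in the pod cgroup
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    sudo systemctl daemon-reload && sudo systemctl restart crio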
	I0927 00:38:26.784033   34022 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 00:38:26.784095   34022 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 00:38:26.788971   34022 start.go:563] Will wait 60s for crictl version
	I0927 00:38:26.789022   34022 ssh_runner.go:195] Run: which crictl
	I0927 00:38:26.792579   34022 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 00:38:26.834879   34022 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0927 00:38:26.834941   34022 ssh_runner.go:195] Run: crio --version
	I0927 00:38:26.863131   34022 ssh_runner.go:195] Run: crio --version
	I0927 00:38:26.894968   34022 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0927 00:38:26.896312   34022 out.go:177]   - env NO_PROXY=192.168.39.4
	I0927 00:38:26.897668   34022 out.go:177]   - env NO_PROXY=192.168.39.4,192.168.39.184
	I0927 00:38:26.898968   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetIP
	I0927 00:38:26.901618   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:26.901952   34022 main.go:141] libmachine: (ha-631834-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:25:39", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:38:15 +0000 UTC Type:0 Mac:52:54:00:4c:25:39 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-631834-m03 Clientid:01:52:54:00:4c:25:39}
	I0927 00:38:26.901974   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined IP address 192.168.39.92 and MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:26.902162   34022 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0927 00:38:26.906490   34022 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
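The /etc/hosts update above is idempotent: any stale `host.minikube.internal` line is stripped before the current gateway mapping is appended. The same pattern, spelled out from the logged command:

    # drop any existing entry, then append the current gateway mapping
    { grep -v $'\thost.minikube.internal$' /etc/hosts; \
      echo $'192.168.39.1\thost.minikube.internal'; } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts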
	I0927 00:38:26.920023   34022 mustload.go:65] Loading cluster: ha-631834
	I0927 00:38:26.920246   34022 config.go:182] Loaded profile config "ha-631834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 00:38:26.920507   34022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:38:26.920541   34022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:38:26.934985   34022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44565
	I0927 00:38:26.935403   34022 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:38:26.935900   34022 main.go:141] libmachine: Using API Version  1
	I0927 00:38:26.935918   34022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:38:26.936235   34022 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:38:26.936414   34022 main.go:141] libmachine: (ha-631834) Calling .GetState
	I0927 00:38:26.937691   34022 host.go:66] Checking if "ha-631834" exists ...
	I0927 00:38:26.938068   34022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:38:26.938115   34022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:38:26.952338   34022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38061
	I0927 00:38:26.952802   34022 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:38:26.953261   34022 main.go:141] libmachine: Using API Version  1
	I0927 00:38:26.953279   34022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:38:26.953560   34022 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:38:26.953830   34022 main.go:141] libmachine: (ha-631834) Calling .DriverName
	I0927 00:38:26.953987   34022 certs.go:68] Setting up /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834 for IP: 192.168.39.92
	I0927 00:38:26.954001   34022 certs.go:194] generating shared ca certs ...
	I0927 00:38:26.954018   34022 certs.go:226] acquiring lock for ca certs: {Name:mkdfc5b7e93f77f5ae72cc653545624244421aa5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:38:26.954172   34022 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key
	I0927 00:38:26.954225   34022 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key
	I0927 00:38:26.954237   34022 certs.go:256] generating profile certs ...
	I0927 00:38:26.954335   34022 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/client.key
	I0927 00:38:26.954364   34022 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key.a958d4ea
	I0927 00:38:26.954384   34022 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt.a958d4ea with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.4 192.168.39.184 192.168.39.92 192.168.39.254]
	I0927 00:38:27.144960   34022 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt.a958d4ea ...
	I0927 00:38:27.144988   34022 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt.a958d4ea: {Name:mk59d4f754d56457d5c6119e00c5a757fdf5824a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:38:27.145181   34022 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key.a958d4ea ...
	I0927 00:38:27.145196   34022 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key.a958d4ea: {Name:mkf2be3579ffd641dd346a6606b22a9fb2324402 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:38:27.145291   34022 certs.go:381] copying /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt.a958d4ea -> /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt
	I0927 00:38:27.145420   34022 certs.go:385] copying /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key.a958d4ea -> /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key
	I0927 00:38:27.145538   34022 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.key
	I0927 00:38:27.145552   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0927 00:38:27.145565   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0927 00:38:27.145577   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0927 00:38:27.145592   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0927 00:38:27.145605   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0927 00:38:27.145617   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0927 00:38:27.145628   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0927 00:38:27.163436   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0927 00:38:27.163551   34022 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem (1338 bytes)
	W0927 00:38:27.163586   34022 certs.go:480] ignoring /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138_empty.pem, impossibly tiny 0 bytes
	I0927 00:38:27.163596   34022 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem (1671 bytes)
	I0927 00:38:27.163623   34022 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem (1078 bytes)
	I0927 00:38:27.163645   34022 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem (1123 bytes)
	I0927 00:38:27.163668   34022 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem (1675 bytes)
	I0927 00:38:27.163704   34022 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem (1708 bytes)
	I0927 00:38:27.163738   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem -> /usr/share/ca-certificates/22138.pem
	I0927 00:38:27.163752   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> /usr/share/ca-certificates/221382.pem
	I0927 00:38:27.163764   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:38:27.163800   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:38:27.166902   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:38:27.167258   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:38:27.167285   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:38:27.167436   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:38:27.167603   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:38:27.167715   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:38:27.167869   34022 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834/id_rsa Username:docker}
	I0927 00:38:27.247589   34022 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0927 00:38:27.254078   34022 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0927 00:38:27.266588   34022 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0927 00:38:27.270741   34022 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0927 00:38:27.281840   34022 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0927 00:38:27.286146   34022 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0927 00:38:27.296457   34022 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0927 00:38:27.300347   34022 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0927 00:38:27.311070   34022 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0927 00:38:27.316218   34022 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0927 00:38:27.329482   34022 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0927 00:38:27.338454   34022 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0927 00:38:27.355258   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 00:38:27.382658   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0927 00:38:27.405893   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 00:38:27.428247   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0927 00:38:27.451705   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0927 00:38:27.476691   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0927 00:38:27.501660   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 00:38:27.524660   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0927 00:38:27.551018   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem --> /usr/share/ca-certificates/22138.pem (1338 bytes)
	I0927 00:38:27.574913   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /usr/share/ca-certificates/221382.pem (1708 bytes)
	I0927 00:38:27.597697   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 00:38:27.619996   34022 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0927 00:38:27.636789   34022 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0927 00:38:27.653361   34022 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0927 00:38:27.669541   34022 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0927 00:38:27.686266   34022 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0927 00:38:27.702940   34022 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0927 00:38:27.720590   34022 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
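After the certificate fan-out above, the new control-plane node holds an apiserver cert whose SANs should cover its own IP (192.168.39.92) and the HA VIP (192.168.39.254), matching the IP list logged at the generation step. A quick, hedged verification using plain openssl and the path from the log:

    # confirm the distributed apiserver cert carries the expected SANs
    sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
      | grep -A1 'Subject Alternative Name'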
	I0927 00:38:27.736937   34022 ssh_runner.go:195] Run: openssl version
	I0927 00:38:27.742470   34022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221382.pem && ln -fs /usr/share/ca-certificates/221382.pem /etc/ssl/certs/221382.pem"
	I0927 00:38:27.754273   34022 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221382.pem
	I0927 00:38:27.758795   34022 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 00:32 /usr/share/ca-certificates/221382.pem
	I0927 00:38:27.758847   34022 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221382.pem
	I0927 00:38:27.764495   34022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221382.pem /etc/ssl/certs/3ec20f2e.0"
	I0927 00:38:27.776262   34022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 00:38:27.787442   34022 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:38:27.791854   34022 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:16 /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:38:27.791891   34022 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:38:27.797397   34022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 00:38:27.808793   34022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22138.pem && ln -fs /usr/share/ca-certificates/22138.pem /etc/ssl/certs/22138.pem"
	I0927 00:38:27.819765   34022 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22138.pem
	I0927 00:38:27.823906   34022 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 00:32 /usr/share/ca-certificates/22138.pem
	I0927 00:38:27.823953   34022 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22138.pem
	I0927 00:38:27.829381   34022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22138.pem /etc/ssl/certs/51391683.0"
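The openssl steps above install each CA under /usr/share/ca-certificates and then link it into /etc/ssl/certs under its subject hash (b5213941.0 for minikubeCA, 3ec20f2e.0 and 51391683.0 for the test certs). The hash in the link name is simply:

    # the subject hash determines the /etc/ssl/certs symlink name
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # -> b5213941, hence the symlink /etc/ssl/certs/b5213941.0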
	I0927 00:38:27.840376   34022 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 00:38:27.844373   34022 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0927 00:38:27.844420   34022 kubeadm.go:934] updating node {m03 192.168.39.92 8443 v1.31.1 crio true true} ...
	I0927 00:38:27.844516   34022 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-631834-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.92
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-631834 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 00:38:27.844551   34022 kube-vip.go:115] generating kube-vip config ...
	I0927 00:38:27.844579   34022 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0927 00:38:27.862311   34022 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0927 00:38:27.862375   34022 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
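The manifest above is written as a static pod (the log later copies it to /etc/kubernetes/manifests/kube-vip.yaml), so kube-vip starts with the kubelet and advertises the VIP 192.168.39.254 on eth0 of the leader. Two hedged checks once the node has joined, using plain `ip` and `crictl` rather than anything minikube-specific:

    # the VIP should appear as a secondary address on eth0 of the elected leader
    ip addr show eth0 | grep 192.168.39.254
    # and the kube-vip static pod should be running under cri-o
    sudo crictl pods --name kube-vip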
	I0927 00:38:27.862434   34022 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 00:38:27.872781   34022 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0927 00:38:27.872832   34022 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0927 00:38:27.882613   34022 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0927 00:38:27.882653   34022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 00:38:27.882614   34022 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0927 00:38:27.882718   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0927 00:38:27.882614   34022 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0927 00:38:27.882757   34022 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0927 00:38:27.882780   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0927 00:38:27.882851   34022 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0927 00:38:27.898547   34022 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0927 00:38:27.898582   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0927 00:38:27.898586   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0927 00:38:27.898611   34022 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0927 00:38:27.898635   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0927 00:38:27.898671   34022 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0927 00:38:27.928975   34022 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0927 00:38:27.929019   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
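The kubeadm, kubectl, and kubelet binaries above are copied from the local cache rather than re-downloaded; the log notes the upstream URLs carry companion .sha256 files. A hedged way to verify a transferred binary against the published checksum, with the URL pattern taken from the log:

    cd /var/lib/minikube/binaries/v1.31.1
    # fetch the published checksum and verify the copied kubelet against it
    curl -sSLO https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check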
	I0927 00:38:28.755845   34022 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0927 00:38:28.766166   34022 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0927 00:38:28.784929   34022 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 00:38:28.802956   34022 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0927 00:38:28.819722   34022 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0927 00:38:28.823558   34022 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 00:38:28.836368   34022 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:38:28.952315   34022 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 00:38:28.969758   34022 host.go:66] Checking if "ha-631834" exists ...
	I0927 00:38:28.970098   34022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:38:28.970147   34022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:38:28.986122   34022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36333
	I0927 00:38:28.986560   34022 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:38:28.987020   34022 main.go:141] libmachine: Using API Version  1
	I0927 00:38:28.987038   34022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:38:28.987386   34022 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:38:28.987567   34022 main.go:141] libmachine: (ha-631834) Calling .DriverName
	I0927 00:38:28.987723   34022 start.go:317] joinCluster: &{Name:ha-631834 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cluster
Name:ha-631834 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.184 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.92 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:fals
e istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 00:38:28.987854   34022 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0927 00:38:28.987874   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:38:28.991221   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:38:28.991756   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:38:28.991779   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:38:28.991933   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:38:28.992065   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:38:28.992196   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:38:28.992330   34022 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834/id_rsa Username:docker}
	I0927 00:38:29.166799   34022 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.92 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 00:38:29.166840   34022 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token nyp4wh.a7l7uv1svmghw4iw --discovery-token-ca-cert-hash sha256:e8f2b64245b2533f2f6907f851e4fd61c8df67374e05cb5be25be73a61920f8e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-631834-m03 --control-plane --apiserver-advertise-address=192.168.39.92 --apiserver-bind-port=8443"
	I0927 00:38:50.894049   34022 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token nyp4wh.a7l7uv1svmghw4iw --discovery-token-ca-cert-hash sha256:e8f2b64245b2533f2f6907f851e4fd61c8df67374e05cb5be25be73a61920f8e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-631834-m03 --control-plane --apiserver-advertise-address=192.168.39.92 --apiserver-bind-port=8443": (21.727186901s)
	I0927 00:38:50.894086   34022 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0927 00:38:51.430363   34022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-631834-m03 minikube.k8s.io/updated_at=2024_09_27T00_38_51_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625 minikube.k8s.io/name=ha-631834 minikube.k8s.io/primary=false
	I0927 00:38:51.580467   34022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-631834-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0927 00:38:51.702639   34022 start.go:319] duration metric: took 22.714914062s to joinCluster
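After the join completes, the two kubectl runs above label the new node (minikube.k8s.io/primary=false plus version metadata) and remove the control-plane NoSchedule taint so it can also run workloads. A hedged check that both took effect, using standard kubectl against the cluster kubeconfig:

    # the label should be present and the control-plane taint gone
    kubectl get node ha-631834-m03 --show-labels | grep minikube.k8s.io/primary=false
    kubectl describe node ha-631834-m03 | grep -A2 Taints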
	I0927 00:38:51.702703   34022 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.92 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 00:38:51.703011   34022 config.go:182] Loaded profile config "ha-631834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 00:38:51.703981   34022 out.go:177] * Verifying Kubernetes components...
	I0927 00:38:51.706308   34022 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:38:51.993118   34022 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 00:38:52.039442   34022 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 00:38:52.039732   34022 kapi.go:59] client config for ha-631834: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/client.crt", KeyFile:"/home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/client.key", CAFile:"/home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f68560), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0927 00:38:52.039793   34022 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.4:8443
	I0927 00:38:52.040085   34022 node_ready.go:35] waiting up to 6m0s for node "ha-631834-m03" to be "Ready" ...
	I0927 00:38:52.040186   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:38:52.040198   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:52.040211   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:52.040218   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:52.044122   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
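The repeated GETs of the node object above and throughout the rest of this wait are minikube's own ~500ms readiness polling loop. An equivalent one-shot wait, shown as a sketch rather than what the test actually runs, would be:

    # block until the freshly joined node reports Ready, up to the same 6m budget
    kubectl wait --for=condition=Ready node/ha-631834-m03 --timeout=6m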
	I0927 00:38:52.540842   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:38:52.540865   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:52.540875   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:52.540880   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:52.544531   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:38:53.040343   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:38:53.040364   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:53.040376   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:53.040380   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:53.043889   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:38:53.540829   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:38:53.540853   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:53.540865   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:53.540871   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:53.544102   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:38:54.040457   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:38:54.040486   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:54.040498   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:54.040508   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:54.044080   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:38:54.044692   34022 node_ready.go:53] node "ha-631834-m03" has status "Ready":"False"
	I0927 00:38:54.540544   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:38:54.540565   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:54.540577   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:54.540583   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:54.544108   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:38:55.040995   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:38:55.041014   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:55.041022   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:55.041026   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:55.044186   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:38:55.541131   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:38:55.541149   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:55.541155   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:55.541159   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:55.544421   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:38:56.040678   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:38:56.040699   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:56.040717   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:56.040724   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:56.044252   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:38:56.044964   34022 node_ready.go:53] node "ha-631834-m03" has status "Ready":"False"
	I0927 00:38:56.540268   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:38:56.540298   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:56.540320   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:56.540326   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:56.544327   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:38:57.041238   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:38:57.041258   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:57.041266   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:57.041270   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:57.044588   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:38:57.541127   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:38:57.541150   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:57.541158   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:57.541162   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:57.545682   34022 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 00:38:58.040341   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:38:58.040358   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:58.040365   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:58.040370   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:58.044102   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:38:58.541229   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:38:58.541250   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:58.541260   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:58.541266   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:58.545253   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:38:58.545941   34022 node_ready.go:53] node "ha-631834-m03" has status "Ready":"False"
	I0927 00:38:59.040786   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:38:59.040810   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:59.040821   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:59.040826   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:59.044532   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:38:59.540476   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:38:59.540500   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:59.540512   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:59.540518   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:59.546237   34022 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0927 00:39:00.040296   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:00.040324   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:00.040333   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:00.040340   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:00.043125   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:39:00.541170   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:00.541190   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:00.541199   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:00.541204   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:00.544199   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:39:01.041077   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:01.041108   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:01.041120   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:01.041128   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:01.044323   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:01.044952   34022 node_ready.go:53] node "ha-631834-m03" has status "Ready":"False"
	I0927 00:39:01.540257   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:01.540278   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:01.540286   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:01.540290   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:01.543567   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:02.040508   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:02.040527   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:02.040534   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:02.040538   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:02.043399   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:39:02.540909   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:02.540930   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:02.540940   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:02.540944   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:02.544479   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:03.040484   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:03.040506   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:03.040516   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:03.040524   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:03.043891   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:03.540961   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:03.540985   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:03.540998   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:03.541004   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:03.544529   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:03.545350   34022 node_ready.go:53] node "ha-631834-m03" has status "Ready":"False"
	I0927 00:39:04.041102   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:04.041123   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:04.041131   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:04.041135   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:04.046364   34022 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0927 00:39:04.541106   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:04.541126   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:04.541134   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:04.541143   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:04.546084   34022 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 00:39:05.040284   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:05.040305   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:05.040316   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:05.040321   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:05.044656   34022 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 00:39:05.540520   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:05.540541   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:05.540549   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:05.540553   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:05.543933   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:06.040933   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:06.040960   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:06.040968   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:06.040972   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:06.044262   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:06.045234   34022 node_ready.go:53] node "ha-631834-m03" has status "Ready":"False"
	I0927 00:39:06.540620   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:06.540642   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:06.540650   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:06.540655   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:06.543993   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:07.040742   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:07.040762   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:07.040769   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:07.040773   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:07.044207   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:07.541217   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:07.541238   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:07.541246   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:07.541250   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:07.544549   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:08.040522   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:08.040543   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:08.040551   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:08.040555   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:08.044379   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:08.540580   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:08.540599   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:08.540610   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:08.540614   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:08.543564   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:39:08.544141   34022 node_ready.go:53] node "ha-631834-m03" has status "Ready":"False"
	I0927 00:39:09.041048   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:09.041080   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:09.041090   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:09.041096   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:09.044654   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:09.540899   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:09.540923   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:09.540933   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:09.540937   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:09.544281   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:10.040837   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:10.040856   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:10.040864   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:10.040868   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:10.044767   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:10.540532   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:10.540551   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:10.540558   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:10.540560   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:10.543816   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:10.544420   34022 node_ready.go:53] node "ha-631834-m03" has status "Ready":"False"
	I0927 00:39:11.041033   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:11.041053   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:11.041062   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:11.041066   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:11.044226   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:11.044735   34022 node_ready.go:49] node "ha-631834-m03" has status "Ready":"True"
	I0927 00:39:11.044751   34022 node_ready.go:38] duration metric: took 19.004641333s for node "ha-631834-m03" to be "Ready" ...
	I0927 00:39:11.044759   34022 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 00:39:11.044826   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods
	I0927 00:39:11.044836   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:11.044843   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:11.044847   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:11.050350   34022 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0927 00:39:11.057101   34022 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-479dv" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:11.057173   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-479dv
	I0927 00:39:11.057179   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:11.057186   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:11.057192   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:11.059921   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:39:11.060545   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:39:11.060562   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:11.060568   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:11.060571   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:11.063003   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:39:11.063383   34022 pod_ready.go:93] pod "coredns-7c65d6cfc9-479dv" in "kube-system" namespace has status "Ready":"True"
	I0927 00:39:11.063397   34022 pod_ready.go:82] duration metric: took 6.275685ms for pod "coredns-7c65d6cfc9-479dv" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:11.063405   34022 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kg8kf" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:11.063458   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-kg8kf
	I0927 00:39:11.063466   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:11.063472   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:11.063477   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:11.065828   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:39:11.066447   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:39:11.066464   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:11.066475   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:11.066480   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:11.068743   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:39:11.069387   34022 pod_ready.go:93] pod "coredns-7c65d6cfc9-kg8kf" in "kube-system" namespace has status "Ready":"True"
	I0927 00:39:11.069408   34022 pod_ready.go:82] duration metric: took 5.996652ms for pod "coredns-7c65d6cfc9-kg8kf" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:11.069420   34022 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-631834" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:11.069482   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/etcd-ha-631834
	I0927 00:39:11.069493   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:11.069502   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:11.069510   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:11.071542   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:39:11.072035   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:39:11.072047   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:11.072054   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:11.072059   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:11.074524   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:39:11.075087   34022 pod_ready.go:93] pod "etcd-ha-631834" in "kube-system" namespace has status "Ready":"True"
	I0927 00:39:11.075106   34022 pod_ready.go:82] duration metric: took 5.678675ms for pod "etcd-ha-631834" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:11.075115   34022 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-631834-m02" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:11.075158   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/etcd-ha-631834-m02
	I0927 00:39:11.075166   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:11.075172   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:11.075177   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:11.077457   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:39:11.078140   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:39:11.078155   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:11.078162   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:11.078166   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:11.080308   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:39:11.080796   34022 pod_ready.go:93] pod "etcd-ha-631834-m02" in "kube-system" namespace has status "Ready":"True"
	I0927 00:39:11.080816   34022 pod_ready.go:82] duration metric: took 5.694556ms for pod "etcd-ha-631834-m02" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:11.080827   34022 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-631834-m03" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:11.241112   34022 request.go:632] Waited for 160.229406ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/etcd-ha-631834-m03
	I0927 00:39:11.241190   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/etcd-ha-631834-m03
	I0927 00:39:11.241202   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:11.241213   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:11.241221   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:11.244515   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:11.441468   34022 request.go:632] Waited for 196.217118ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:11.441557   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:11.441564   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:11.441575   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:11.441580   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:11.445651   34022 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 00:39:11.446311   34022 pod_ready.go:93] pod "etcd-ha-631834-m03" in "kube-system" namespace has status "Ready":"True"
	I0927 00:39:11.446338   34022 pod_ready.go:82] duration metric: took 365.498163ms for pod "etcd-ha-631834-m03" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:11.446361   34022 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-631834" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:11.641363   34022 request.go:632] Waited for 194.923565ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-631834
	I0927 00:39:11.641498   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-631834
	I0927 00:39:11.641520   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:11.641531   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:11.641539   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:11.646049   34022 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 00:39:11.841994   34022 request.go:632] Waited for 195.392366ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:39:11.842046   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:39:11.842053   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:11.842060   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:11.842064   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:11.845122   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:11.845566   34022 pod_ready.go:93] pod "kube-apiserver-ha-631834" in "kube-system" namespace has status "Ready":"True"
	I0927 00:39:11.845583   34022 pod_ready.go:82] duration metric: took 399.214359ms for pod "kube-apiserver-ha-631834" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:11.845596   34022 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-631834-m02" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:12.041393   34022 request.go:632] Waited for 195.729881ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-631834-m02
	I0927 00:39:12.041458   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-631834-m02
	I0927 00:39:12.041466   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:12.041478   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:12.041488   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:12.044854   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:12.241780   34022 request.go:632] Waited for 196.198597ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:39:12.241855   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:39:12.241862   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:12.241870   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:12.241880   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:12.245475   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:12.246124   34022 pod_ready.go:93] pod "kube-apiserver-ha-631834-m02" in "kube-system" namespace has status "Ready":"True"
	I0927 00:39:12.246146   34022 pod_ready.go:82] duration metric: took 400.543035ms for pod "kube-apiserver-ha-631834-m02" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:12.246162   34022 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-631834-m03" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:12.441106   34022 request.go:632] Waited for 194.872848ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-631834-m03
	I0927 00:39:12.441163   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-631834-m03
	I0927 00:39:12.441169   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:12.441177   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:12.441181   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:12.444679   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:12.641949   34022 request.go:632] Waited for 196.340732ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:12.642006   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:12.642011   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:12.642019   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:12.642026   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:12.645583   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:12.646336   34022 pod_ready.go:93] pod "kube-apiserver-ha-631834-m03" in "kube-system" namespace has status "Ready":"True"
	I0927 00:39:12.646359   34022 pod_ready.go:82] duration metric: took 400.189129ms for pod "kube-apiserver-ha-631834-m03" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:12.646371   34022 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-631834" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:12.841500   34022 request.go:632] Waited for 195.047763ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-631834
	I0927 00:39:12.841554   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-631834
	I0927 00:39:12.841559   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:12.841565   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:12.841570   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:12.844885   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:13.042011   34022 request.go:632] Waited for 196.365336ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:39:13.042068   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:39:13.042075   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:13.042086   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:13.042094   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:13.045463   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:13.046083   34022 pod_ready.go:93] pod "kube-controller-manager-ha-631834" in "kube-system" namespace has status "Ready":"True"
	I0927 00:39:13.046099   34022 pod_ready.go:82] duration metric: took 399.717332ms for pod "kube-controller-manager-ha-631834" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:13.046117   34022 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-631834-m02" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:13.241273   34022 request.go:632] Waited for 195.079725ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-631834-m02
	I0927 00:39:13.241342   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-631834-m02
	I0927 00:39:13.241350   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:13.241360   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:13.241371   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:13.244557   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:13.441283   34022 request.go:632] Waited for 196.073724ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:39:13.441336   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:39:13.441342   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:13.441348   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:13.441353   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:13.444943   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:13.445609   34022 pod_ready.go:93] pod "kube-controller-manager-ha-631834-m02" in "kube-system" namespace has status "Ready":"True"
	I0927 00:39:13.445625   34022 pod_ready.go:82] duration metric: took 399.502321ms for pod "kube-controller-manager-ha-631834-m02" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:13.445635   34022 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-631834-m03" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:13.641730   34022 request.go:632] Waited for 196.022446ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-631834-m03
	I0927 00:39:13.641795   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-631834-m03
	I0927 00:39:13.641804   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:13.641816   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:13.641825   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:13.645301   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:13.841195   34022 request.go:632] Waited for 195.27161ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:13.841276   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:13.841286   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:13.841298   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:13.841306   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:13.844228   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:39:13.844820   34022 pod_ready.go:93] pod "kube-controller-manager-ha-631834-m03" in "kube-system" namespace has status "Ready":"True"
	I0927 00:39:13.844837   34022 pod_ready.go:82] duration metric: took 399.196459ms for pod "kube-controller-manager-ha-631834-m03" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:13.844849   34022 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-22lcj" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:14.041259   34022 request.go:632] Waited for 196.353447ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-proxy-22lcj
	I0927 00:39:14.041346   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-proxy-22lcj
	I0927 00:39:14.041361   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:14.041372   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:14.041381   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:14.044594   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:14.241701   34022 request.go:632] Waited for 196.342418ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:14.241756   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:14.241771   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:14.241779   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:14.241786   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:14.244937   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:14.245574   34022 pod_ready.go:93] pod "kube-proxy-22lcj" in "kube-system" namespace has status "Ready":"True"
	I0927 00:39:14.245593   34022 pod_ready.go:82] duration metric: took 400.737693ms for pod "kube-proxy-22lcj" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:14.245602   34022 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7n244" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:14.441662   34022 request.go:632] Waited for 195.987258ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7n244
	I0927 00:39:14.441711   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7n244
	I0927 00:39:14.441717   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:14.441723   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:14.441727   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:14.444886   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:14.642030   34022 request.go:632] Waited for 196.372014ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:39:14.642111   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:39:14.642118   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:14.642125   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:14.642129   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:14.645645   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:14.646260   34022 pod_ready.go:93] pod "kube-proxy-7n244" in "kube-system" namespace has status "Ready":"True"
	I0927 00:39:14.646278   34022 pod_ready.go:82] duration metric: took 400.670776ms for pod "kube-proxy-7n244" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:14.646288   34022 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-x2hvh" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:14.841368   34022 request.go:632] Waited for 195.014242ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x2hvh
	I0927 00:39:14.841454   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x2hvh
	I0927 00:39:14.841463   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:14.841470   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:14.841478   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:14.844791   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:15.041743   34022 request.go:632] Waited for 196.305022ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:39:15.041798   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:39:15.041803   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:15.041810   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:15.041816   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:15.045475   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:15.045878   34022 pod_ready.go:93] pod "kube-proxy-x2hvh" in "kube-system" namespace has status "Ready":"True"
	I0927 00:39:15.045893   34022 pod_ready.go:82] duration metric: took 399.599097ms for pod "kube-proxy-x2hvh" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:15.045902   34022 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-631834" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:15.242003   34022 request.go:632] Waited for 196.041536ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-631834
	I0927 00:39:15.242079   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-631834
	I0927 00:39:15.242093   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:15.242103   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:15.242113   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:15.246380   34022 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 00:39:15.441144   34022 request.go:632] Waited for 194.281274ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:39:15.441219   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:39:15.441224   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:15.441235   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:15.441240   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:15.444769   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:15.445492   34022 pod_ready.go:93] pod "kube-scheduler-ha-631834" in "kube-system" namespace has status "Ready":"True"
	I0927 00:39:15.445508   34022 pod_ready.go:82] duration metric: took 399.601315ms for pod "kube-scheduler-ha-631834" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:15.445517   34022 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-631834-m02" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:15.641668   34022 request.go:632] Waited for 196.083523ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-631834-m02
	I0927 00:39:15.641741   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-631834-m02
	I0927 00:39:15.641746   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:15.641753   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:15.641757   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:15.645029   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:15.841624   34022 request.go:632] Waited for 196.133411ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:39:15.841705   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:39:15.841713   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:15.841721   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:15.841725   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:15.845075   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:15.845562   34022 pod_ready.go:93] pod "kube-scheduler-ha-631834-m02" in "kube-system" namespace has status "Ready":"True"
	I0927 00:39:15.845579   34022 pod_ready.go:82] duration metric: took 400.056155ms for pod "kube-scheduler-ha-631834-m02" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:15.845590   34022 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-631834-m03" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:16.041217   34022 request.go:632] Waited for 195.564347ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-631834-m03
	I0927 00:39:16.041293   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-631834-m03
	I0927 00:39:16.041302   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:16.041310   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:16.041316   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:16.044981   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:16.241893   34022 request.go:632] Waited for 196.354511ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:16.241965   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:16.241973   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:16.241981   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:16.241990   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:16.245440   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:16.245881   34022 pod_ready.go:93] pod "kube-scheduler-ha-631834-m03" in "kube-system" namespace has status "Ready":"True"
	I0927 00:39:16.245900   34022 pod_ready.go:82] duration metric: took 400.302015ms for pod "kube-scheduler-ha-631834-m03" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:16.245911   34022 pod_ready.go:39] duration metric: took 5.201141408s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 00:39:16.245931   34022 api_server.go:52] waiting for apiserver process to appear ...
	I0927 00:39:16.245980   34022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 00:39:16.264448   34022 api_server.go:72] duration metric: took 24.561705447s to wait for apiserver process to appear ...
	I0927 00:39:16.264471   34022 api_server.go:88] waiting for apiserver healthz status ...
	I0927 00:39:16.264489   34022 api_server.go:253] Checking apiserver healthz at https://192.168.39.4:8443/healthz ...
	I0927 00:39:16.270998   34022 api_server.go:279] https://192.168.39.4:8443/healthz returned 200:
	ok
	I0927 00:39:16.271071   34022 round_trippers.go:463] GET https://192.168.39.4:8443/version
	I0927 00:39:16.271077   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:16.271087   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:16.271098   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:16.272010   34022 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0927 00:39:16.272079   34022 api_server.go:141] control plane version: v1.31.1
	I0927 00:39:16.272094   34022 api_server.go:131] duration metric: took 7.617636ms to wait for apiserver health ...
	I0927 00:39:16.272101   34022 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 00:39:16.441376   34022 request.go:632] Waited for 169.205133ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods
	I0927 00:39:16.441450   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods
	I0927 00:39:16.441459   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:16.441467   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:16.441472   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:16.447163   34022 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0927 00:39:16.454723   34022 system_pods.go:59] 24 kube-system pods found
	I0927 00:39:16.454748   34022 system_pods.go:61] "coredns-7c65d6cfc9-479dv" [ee318b64-2274-4106-93ed-9f62151107f1] Running
	I0927 00:39:16.454753   34022 system_pods.go:61] "coredns-7c65d6cfc9-kg8kf" [ee98faac-e03c-427f-9a78-2cf06d2f85cf] Running
	I0927 00:39:16.454757   34022 system_pods.go:61] "etcd-ha-631834" [b8f1f451-d21c-4424-876e-7bd03381c7be] Running
	I0927 00:39:16.454760   34022 system_pods.go:61] "etcd-ha-631834-m02" [940292d8-f09a-4baa-9689-2099794ed736] Running
	I0927 00:39:16.454763   34022 system_pods.go:61] "etcd-ha-631834-m03" [f0a5e835-8705-4555-8b6b-0c7147d76543] Running
	I0927 00:39:16.454767   34022 system_pods.go:61] "kindnet-l6ncl" [3861149b-7c67-4d48-9d24-8fa08aefda61] Running
	I0927 00:39:16.454770   34022 system_pods.go:61] "kindnet-r2qxd" [68a590ef-4e98-409e-8ce3-4d4e3f14ccc1] Running
	I0927 00:39:16.454773   34022 system_pods.go:61] "kindnet-x7kr9" [a4f57dcf-a410-46e7-a539-0ad5f9fb2baf] Running
	I0927 00:39:16.454776   34022 system_pods.go:61] "kube-apiserver-ha-631834" [365182f9-e6fd-40f4-8f9f-a46de26a61d8] Running
	I0927 00:39:16.454779   34022 system_pods.go:61] "kube-apiserver-ha-631834-m02" [bc22191d-9799-4639-8ff2-3fdb3ae97be3] Running
	I0927 00:39:16.454782   34022 system_pods.go:61] "kube-apiserver-ha-631834-m03" [b5978123-4be5-4547-9f7a-17471dd88209] Running
	I0927 00:39:16.454786   34022 system_pods.go:61] "kube-controller-manager-ha-631834" [4b0a02b1-60a5-45bc-b9a0-dd5a0346da3d] Running
	I0927 00:39:16.454790   34022 system_pods.go:61] "kube-controller-manager-ha-631834-m02" [22f26e4f-f220-4682-ba5c-e3131880aab4] Running
	I0927 00:39:16.454793   34022 system_pods.go:61] "kube-controller-manager-ha-631834-m03" [ff5ac84f-5b97-45f7-8bc4-0def81f1a9de] Running
	I0927 00:39:16.454797   34022 system_pods.go:61] "kube-proxy-22lcj" [0bd00be4-643a-41b0-ba0b-3a13f95a3b45] Running
	I0927 00:39:16.454800   34022 system_pods.go:61] "kube-proxy-7n244" [d9fac118-1b31-4cf3-bc21-a4536e45a511] Running
	I0927 00:39:16.454804   34022 system_pods.go:61] "kube-proxy-x2hvh" [81ada94c-89b8-4815-92e9-58edd00ef64f] Running
	I0927 00:39:16.454807   34022 system_pods.go:61] "kube-scheduler-ha-631834" [9e0b9052-8574-406b-987f-2ef799f40533] Running
	I0927 00:39:16.454810   34022 system_pods.go:61] "kube-scheduler-ha-631834-m02" [7952ee5f-18be-4863-a13a-39c4ee7acf29] Running
	I0927 00:39:16.454813   34022 system_pods.go:61] "kube-scheduler-ha-631834-m03" [48ea6dc3-fa35-4c78-8f49-f6cc2797f433] Running
	I0927 00:39:16.454816   34022 system_pods.go:61] "kube-vip-ha-631834" [58aa0bcf-1f78-4ee9-8a7b-18afaf6a634c] Running
	I0927 00:39:16.454819   34022 system_pods.go:61] "kube-vip-ha-631834-m02" [75b23ac9-b5e5-4a90-b5ef-951dd52c1752] Running
	I0927 00:39:16.454822   34022 system_pods.go:61] "kube-vip-ha-631834-m03" [0ffe3c65-482c-49ce-a209-94414f2958b5] Running
	I0927 00:39:16.454828   34022 system_pods.go:61] "storage-provisioner" [dbafe551-2645-4016-83f6-1133824d926d] Running
	I0927 00:39:16.454833   34022 system_pods.go:74] duration metric: took 182.725605ms to wait for pod list to return data ...
	I0927 00:39:16.454840   34022 default_sa.go:34] waiting for default service account to be created ...
	I0927 00:39:16.641200   34022 request.go:632] Waited for 186.296503ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/default/serviceaccounts
	I0927 00:39:16.641254   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/default/serviceaccounts
	I0927 00:39:16.641261   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:16.641270   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:16.641279   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:16.644742   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:16.644853   34022 default_sa.go:45] found service account: "default"
	I0927 00:39:16.644867   34022 default_sa.go:55] duration metric: took 190.018813ms for default service account to be created ...
	I0927 00:39:16.644874   34022 system_pods.go:116] waiting for k8s-apps to be running ...
	I0927 00:39:16.841127   34022 request.go:632] Waited for 196.190225ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods
	I0927 00:39:16.841217   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods
	I0927 00:39:16.841226   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:16.841234   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:16.841242   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:16.846111   34022 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 00:39:16.853202   34022 system_pods.go:86] 24 kube-system pods found
	I0927 00:39:16.853229   34022 system_pods.go:89] "coredns-7c65d6cfc9-479dv" [ee318b64-2274-4106-93ed-9f62151107f1] Running
	I0927 00:39:16.853235   34022 system_pods.go:89] "coredns-7c65d6cfc9-kg8kf" [ee98faac-e03c-427f-9a78-2cf06d2f85cf] Running
	I0927 00:39:16.853239   34022 system_pods.go:89] "etcd-ha-631834" [b8f1f451-d21c-4424-876e-7bd03381c7be] Running
	I0927 00:39:16.853243   34022 system_pods.go:89] "etcd-ha-631834-m02" [940292d8-f09a-4baa-9689-2099794ed736] Running
	I0927 00:39:16.853246   34022 system_pods.go:89] "etcd-ha-631834-m03" [f0a5e835-8705-4555-8b6b-0c7147d76543] Running
	I0927 00:39:16.853249   34022 system_pods.go:89] "kindnet-l6ncl" [3861149b-7c67-4d48-9d24-8fa08aefda61] Running
	I0927 00:39:16.853253   34022 system_pods.go:89] "kindnet-r2qxd" [68a590ef-4e98-409e-8ce3-4d4e3f14ccc1] Running
	I0927 00:39:16.853256   34022 system_pods.go:89] "kindnet-x7kr9" [a4f57dcf-a410-46e7-a539-0ad5f9fb2baf] Running
	I0927 00:39:16.853260   34022 system_pods.go:89] "kube-apiserver-ha-631834" [365182f9-e6fd-40f4-8f9f-a46de26a61d8] Running
	I0927 00:39:16.853263   34022 system_pods.go:89] "kube-apiserver-ha-631834-m02" [bc22191d-9799-4639-8ff2-3fdb3ae97be3] Running
	I0927 00:39:16.853266   34022 system_pods.go:89] "kube-apiserver-ha-631834-m03" [b5978123-4be5-4547-9f7a-17471dd88209] Running
	I0927 00:39:16.853269   34022 system_pods.go:89] "kube-controller-manager-ha-631834" [4b0a02b1-60a5-45bc-b9a0-dd5a0346da3d] Running
	I0927 00:39:16.853273   34022 system_pods.go:89] "kube-controller-manager-ha-631834-m02" [22f26e4f-f220-4682-ba5c-e3131880aab4] Running
	I0927 00:39:16.853276   34022 system_pods.go:89] "kube-controller-manager-ha-631834-m03" [ff5ac84f-5b97-45f7-8bc4-0def81f1a9de] Running
	I0927 00:39:16.853280   34022 system_pods.go:89] "kube-proxy-22lcj" [0bd00be4-643a-41b0-ba0b-3a13f95a3b45] Running
	I0927 00:39:16.853285   34022 system_pods.go:89] "kube-proxy-7n244" [d9fac118-1b31-4cf3-bc21-a4536e45a511] Running
	I0927 00:39:16.853288   34022 system_pods.go:89] "kube-proxy-x2hvh" [81ada94c-89b8-4815-92e9-58edd00ef64f] Running
	I0927 00:39:16.853291   34022 system_pods.go:89] "kube-scheduler-ha-631834" [9e0b9052-8574-406b-987f-2ef799f40533] Running
	I0927 00:39:16.853297   34022 system_pods.go:89] "kube-scheduler-ha-631834-m02" [7952ee5f-18be-4863-a13a-39c4ee7acf29] Running
	I0927 00:39:16.853302   34022 system_pods.go:89] "kube-scheduler-ha-631834-m03" [48ea6dc3-fa35-4c78-8f49-f6cc2797f433] Running
	I0927 00:39:16.853305   34022 system_pods.go:89] "kube-vip-ha-631834" [58aa0bcf-1f78-4ee9-8a7b-18afaf6a634c] Running
	I0927 00:39:16.853308   34022 system_pods.go:89] "kube-vip-ha-631834-m02" [75b23ac9-b5e5-4a90-b5ef-951dd52c1752] Running
	I0927 00:39:16.853311   34022 system_pods.go:89] "kube-vip-ha-631834-m03" [0ffe3c65-482c-49ce-a209-94414f2958b5] Running
	I0927 00:39:16.853314   34022 system_pods.go:89] "storage-provisioner" [dbafe551-2645-4016-83f6-1133824d926d] Running
	I0927 00:39:16.853321   34022 system_pods.go:126] duration metric: took 208.44194ms to wait for k8s-apps to be running ...
	I0927 00:39:16.853329   34022 system_svc.go:44] waiting for kubelet service to be running ....
	I0927 00:39:16.853371   34022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 00:39:16.870246   34022 system_svc.go:56] duration metric: took 16.907091ms WaitForService to wait for kubelet
	I0927 00:39:16.870275   34022 kubeadm.go:582] duration metric: took 25.167539771s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 00:39:16.870292   34022 node_conditions.go:102] verifying NodePressure condition ...
	I0927 00:39:17.041388   34022 request.go:632] Waited for 171.008016ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes
	I0927 00:39:17.041444   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes
	I0927 00:39:17.041452   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:17.041462   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:17.041467   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:17.045727   34022 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 00:39:17.046668   34022 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 00:39:17.046684   34022 node_conditions.go:123] node cpu capacity is 2
	I0927 00:39:17.046709   34022 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 00:39:17.046713   34022 node_conditions.go:123] node cpu capacity is 2
	I0927 00:39:17.046717   34022 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 00:39:17.046720   34022 node_conditions.go:123] node cpu capacity is 2
	I0927 00:39:17.046725   34022 node_conditions.go:105] duration metric: took 176.429276ms to run NodePressure ...
	I0927 00:39:17.046735   34022 start.go:241] waiting for startup goroutines ...
	I0927 00:39:17.046755   34022 start.go:255] writing updated cluster config ...
	I0927 00:39:17.047027   34022 ssh_runner.go:195] Run: rm -f paused
	I0927 00:39:17.097240   34022 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0927 00:39:17.099385   34022 out.go:177] * Done! kubectl is now configured to use "ha-631834" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 27 00:43:01 ha-631834 crio[661]: time="2024-09-27 00:43:01.615395868Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397781615365184,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b1f9872b-a49a-4c2c-9c95-dd001ebe65bd name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:43:01 ha-631834 crio[661]: time="2024-09-27 00:43:01.615878078Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8c4d7ae3-4a48-4216-8324-82b0cc8b4974 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:43:01 ha-631834 crio[661]: time="2024-09-27 00:43:01.615961876Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8c4d7ae3-4a48-4216-8324-82b0cc8b4974 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:43:01 ha-631834 crio[661]: time="2024-09-27 00:43:01.616195977Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:74dc20e31bc6d7c20e5d68ee7fa69cfe0328a93ccef047ea1ef82155869ad406,PodSandboxId:ebc71356fe8860c5eadadc4bfc35fe223c81b382b7fa4f7400dfdd4e30cca8e9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727397561973673539,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hczmj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 55e4dd58-9193-49ba-a2e8-1c6835898fb1,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c06ebd9099a79e7ccf81acb3dcdfa061f142b4657de196fa50e568e5b299930,PodSandboxId:8f236d02ca028f9009a4efcc28e0562a8b0e8ec154921e53c93e5a527823c39a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727397416531750974,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kg8kf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee98faac-e03c-427f-9a78-2cf06d2f85cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0d4e929a59caa5d6cdfb939587ec81dce00105e7b9350778204b299cf597427,PodSandboxId:2cb3143c36c8e5612e26df2355c120393a34014b84051ee13e5f0f641240ed61,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727397416548806637,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-479dv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
ee318b64-2274-4106-93ed-9f62151107f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9f2637b4124e6d3087dd4a694ebb58286309afd46d561d6051eaaf6ba88126a,PodSandboxId:399bb953593cc2b3743577abae1f7410c1d14dc409256b74dd104c335e4a19a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1727397416493017043,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbafe551-2645-4016-83f6-1133824d926d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:805b55d391308302ebc0884d741fd7ca86ffe2f6feed8bf7ab229f3729f34327,PodSandboxId:7e2d35a1098a1e498cdf730b14a6d4f456431c09085148024bcec56931467462,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17273974
04353382193,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-l6ncl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3861149b-7c67-4d48-9d24-8fa08aefda61,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:182f24ac501b715adc06f080914c11407429e052bc7a726892761dd0a2d3a8e9,PodSandboxId:c0f5b32248925e239a327ed4b6dc2a3da7f10accded478a3ce22050a8fe332d8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727397404131622207,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7n244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9fac118-1b31-4cf3-bc21-a4536e45a511,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:555c7e8f6d5181676711d15bda6aa11fd8d84d9fff0f6e98280c72d5296aefad,PodSandboxId:710e2b00db1780a3cb652fad6898ecff25d5f37f052ba6e0438aa39b3ff2ada9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727397395791349240,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3f83edb960a7290e67f3d1729807ccd,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c88792788fc238aaae860e14a6c44c40020da3356d29223917fe2fb2e8901ac,PodSandboxId:74609d9fcf5f5f8d3b57d4290bf525ef816e716d1438ea25df07d7a697e2bb1a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727397392427437868,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10057dece9752ed428ddf4bfd465bb3d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:536c1c26f6d72525b81ce4c35ed530528a8cd001f4c530cea2e1d722325e76b3,PodSandboxId:de8c10edafaa7ba5a57a5150b492fa19b6a95a38b8f3da7e2385b723a1d4f907,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727397392442661616,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a32cc8b63ea212ed38709daf6762cc1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa717868fa66e6c86747ecfb1ac580a98666975a9c6974d3a1037451ff37576e,PodSandboxId:4a215208b0ed2928db08b226477bc8cf664180903da62b51aaf986d8c212336c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727397392387673966,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71a28d11a5db44bbf2777b262efa1514,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dcaba50a39a2f812258d986d3444002c5a887ee474104a98a69129c21ec40db,PodSandboxId:8e73f2182b892b451dcd1c013adf2711f2f406765703f34eb3d44a64d29e882b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727397392278746359,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-631834,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afee14d1206143c4d719c111467c379b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8c4d7ae3-4a48-4216-8324-82b0cc8b4974 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:43:01 ha-631834 crio[661]: time="2024-09-27 00:43:01.655136735Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a118eb0d-fe58-4c97-a006-051da6a27269 name=/runtime.v1.RuntimeService/Version
	Sep 27 00:43:01 ha-631834 crio[661]: time="2024-09-27 00:43:01.655284760Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a118eb0d-fe58-4c97-a006-051da6a27269 name=/runtime.v1.RuntimeService/Version
	Sep 27 00:43:01 ha-631834 crio[661]: time="2024-09-27 00:43:01.656610543Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=be15afa9-9d24-4854-8b5a-a4f620261b58 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:43:01 ha-631834 crio[661]: time="2024-09-27 00:43:01.657046373Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397781657024285,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=be15afa9-9d24-4854-8b5a-a4f620261b58 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:43:01 ha-631834 crio[661]: time="2024-09-27 00:43:01.657575326Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a6e58d96-eec0-4ec0-805e-97958483f243 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:43:01 ha-631834 crio[661]: time="2024-09-27 00:43:01.657644776Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a6e58d96-eec0-4ec0-805e-97958483f243 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:43:01 ha-631834 crio[661]: time="2024-09-27 00:43:01.657913459Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:74dc20e31bc6d7c20e5d68ee7fa69cfe0328a93ccef047ea1ef82155869ad406,PodSandboxId:ebc71356fe8860c5eadadc4bfc35fe223c81b382b7fa4f7400dfdd4e30cca8e9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727397561973673539,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hczmj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 55e4dd58-9193-49ba-a2e8-1c6835898fb1,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c06ebd9099a79e7ccf81acb3dcdfa061f142b4657de196fa50e568e5b299930,PodSandboxId:8f236d02ca028f9009a4efcc28e0562a8b0e8ec154921e53c93e5a527823c39a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727397416531750974,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kg8kf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee98faac-e03c-427f-9a78-2cf06d2f85cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0d4e929a59caa5d6cdfb939587ec81dce00105e7b9350778204b299cf597427,PodSandboxId:2cb3143c36c8e5612e26df2355c120393a34014b84051ee13e5f0f641240ed61,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727397416548806637,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-479dv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
ee318b64-2274-4106-93ed-9f62151107f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9f2637b4124e6d3087dd4a694ebb58286309afd46d561d6051eaaf6ba88126a,PodSandboxId:399bb953593cc2b3743577abae1f7410c1d14dc409256b74dd104c335e4a19a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1727397416493017043,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbafe551-2645-4016-83f6-1133824d926d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:805b55d391308302ebc0884d741fd7ca86ffe2f6feed8bf7ab229f3729f34327,PodSandboxId:7e2d35a1098a1e498cdf730b14a6d4f456431c09085148024bcec56931467462,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17273974
04353382193,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-l6ncl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3861149b-7c67-4d48-9d24-8fa08aefda61,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:182f24ac501b715adc06f080914c11407429e052bc7a726892761dd0a2d3a8e9,PodSandboxId:c0f5b32248925e239a327ed4b6dc2a3da7f10accded478a3ce22050a8fe332d8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727397404131622207,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7n244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9fac118-1b31-4cf3-bc21-a4536e45a511,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:555c7e8f6d5181676711d15bda6aa11fd8d84d9fff0f6e98280c72d5296aefad,PodSandboxId:710e2b00db1780a3cb652fad6898ecff25d5f37f052ba6e0438aa39b3ff2ada9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727397395791349240,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3f83edb960a7290e67f3d1729807ccd,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c88792788fc238aaae860e14a6c44c40020da3356d29223917fe2fb2e8901ac,PodSandboxId:74609d9fcf5f5f8d3b57d4290bf525ef816e716d1438ea25df07d7a697e2bb1a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727397392427437868,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10057dece9752ed428ddf4bfd465bb3d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:536c1c26f6d72525b81ce4c35ed530528a8cd001f4c530cea2e1d722325e76b3,PodSandboxId:de8c10edafaa7ba5a57a5150b492fa19b6a95a38b8f3da7e2385b723a1d4f907,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727397392442661616,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a32cc8b63ea212ed38709daf6762cc1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa717868fa66e6c86747ecfb1ac580a98666975a9c6974d3a1037451ff37576e,PodSandboxId:4a215208b0ed2928db08b226477bc8cf664180903da62b51aaf986d8c212336c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727397392387673966,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71a28d11a5db44bbf2777b262efa1514,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dcaba50a39a2f812258d986d3444002c5a887ee474104a98a69129c21ec40db,PodSandboxId:8e73f2182b892b451dcd1c013adf2711f2f406765703f34eb3d44a64d29e882b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727397392278746359,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-631834,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afee14d1206143c4d719c111467c379b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a6e58d96-eec0-4ec0-805e-97958483f243 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:43:01 ha-631834 crio[661]: time="2024-09-27 00:43:01.695003042Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=27503c90-c8a8-4e39-b2b1-3d20c38c2021 name=/runtime.v1.RuntimeService/Version
	Sep 27 00:43:01 ha-631834 crio[661]: time="2024-09-27 00:43:01.695093544Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=27503c90-c8a8-4e39-b2b1-3d20c38c2021 name=/runtime.v1.RuntimeService/Version
	Sep 27 00:43:01 ha-631834 crio[661]: time="2024-09-27 00:43:01.696780821Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e5cf6caa-85d6-4328-b1da-b91214818233 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:43:01 ha-631834 crio[661]: time="2024-09-27 00:43:01.697277897Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397781697255383,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e5cf6caa-85d6-4328-b1da-b91214818233 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:43:01 ha-631834 crio[661]: time="2024-09-27 00:43:01.697900717Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e1d45714-5119-4b31-a1d5-2c5e31706f98 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:43:01 ha-631834 crio[661]: time="2024-09-27 00:43:01.697954621Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e1d45714-5119-4b31-a1d5-2c5e31706f98 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:43:01 ha-631834 crio[661]: time="2024-09-27 00:43:01.698753967Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:74dc20e31bc6d7c20e5d68ee7fa69cfe0328a93ccef047ea1ef82155869ad406,PodSandboxId:ebc71356fe8860c5eadadc4bfc35fe223c81b382b7fa4f7400dfdd4e30cca8e9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727397561973673539,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hczmj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 55e4dd58-9193-49ba-a2e8-1c6835898fb1,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c06ebd9099a79e7ccf81acb3dcdfa061f142b4657de196fa50e568e5b299930,PodSandboxId:8f236d02ca028f9009a4efcc28e0562a8b0e8ec154921e53c93e5a527823c39a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727397416531750974,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kg8kf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee98faac-e03c-427f-9a78-2cf06d2f85cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0d4e929a59caa5d6cdfb939587ec81dce00105e7b9350778204b299cf597427,PodSandboxId:2cb3143c36c8e5612e26df2355c120393a34014b84051ee13e5f0f641240ed61,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727397416548806637,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-479dv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
ee318b64-2274-4106-93ed-9f62151107f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9f2637b4124e6d3087dd4a694ebb58286309afd46d561d6051eaaf6ba88126a,PodSandboxId:399bb953593cc2b3743577abae1f7410c1d14dc409256b74dd104c335e4a19a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1727397416493017043,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbafe551-2645-4016-83f6-1133824d926d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:805b55d391308302ebc0884d741fd7ca86ffe2f6feed8bf7ab229f3729f34327,PodSandboxId:7e2d35a1098a1e498cdf730b14a6d4f456431c09085148024bcec56931467462,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17273974
04353382193,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-l6ncl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3861149b-7c67-4d48-9d24-8fa08aefda61,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:182f24ac501b715adc06f080914c11407429e052bc7a726892761dd0a2d3a8e9,PodSandboxId:c0f5b32248925e239a327ed4b6dc2a3da7f10accded478a3ce22050a8fe332d8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727397404131622207,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7n244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9fac118-1b31-4cf3-bc21-a4536e45a511,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:555c7e8f6d5181676711d15bda6aa11fd8d84d9fff0f6e98280c72d5296aefad,PodSandboxId:710e2b00db1780a3cb652fad6898ecff25d5f37f052ba6e0438aa39b3ff2ada9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727397395791349240,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3f83edb960a7290e67f3d1729807ccd,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c88792788fc238aaae860e14a6c44c40020da3356d29223917fe2fb2e8901ac,PodSandboxId:74609d9fcf5f5f8d3b57d4290bf525ef816e716d1438ea25df07d7a697e2bb1a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727397392427437868,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10057dece9752ed428ddf4bfd465bb3d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:536c1c26f6d72525b81ce4c35ed530528a8cd001f4c530cea2e1d722325e76b3,PodSandboxId:de8c10edafaa7ba5a57a5150b492fa19b6a95a38b8f3da7e2385b723a1d4f907,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727397392442661616,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a32cc8b63ea212ed38709daf6762cc1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa717868fa66e6c86747ecfb1ac580a98666975a9c6974d3a1037451ff37576e,PodSandboxId:4a215208b0ed2928db08b226477bc8cf664180903da62b51aaf986d8c212336c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727397392387673966,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71a28d11a5db44bbf2777b262efa1514,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dcaba50a39a2f812258d986d3444002c5a887ee474104a98a69129c21ec40db,PodSandboxId:8e73f2182b892b451dcd1c013adf2711f2f406765703f34eb3d44a64d29e882b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727397392278746359,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-631834,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afee14d1206143c4d719c111467c379b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e1d45714-5119-4b31-a1d5-2c5e31706f98 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:43:01 ha-631834 crio[661]: time="2024-09-27 00:43:01.749668303Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=84d4beb4-6893-4a62-bab7-e676ed3309c2 name=/runtime.v1.RuntimeService/Version
	Sep 27 00:43:01 ha-631834 crio[661]: time="2024-09-27 00:43:01.749742932Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=84d4beb4-6893-4a62-bab7-e676ed3309c2 name=/runtime.v1.RuntimeService/Version
	Sep 27 00:43:01 ha-631834 crio[661]: time="2024-09-27 00:43:01.751337416Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=eff697ac-d603-445a-992a-eb32fff1b6a0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:43:01 ha-631834 crio[661]: time="2024-09-27 00:43:01.751746540Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397781751724317,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eff697ac-d603-445a-992a-eb32fff1b6a0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:43:01 ha-631834 crio[661]: time="2024-09-27 00:43:01.752313527Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f1a8743d-9502-4852-a04d-0d8b40d2e8bc name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:43:01 ha-631834 crio[661]: time="2024-09-27 00:43:01.752386458Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f1a8743d-9502-4852-a04d-0d8b40d2e8bc name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:43:01 ha-631834 crio[661]: time="2024-09-27 00:43:01.752605738Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:74dc20e31bc6d7c20e5d68ee7fa69cfe0328a93ccef047ea1ef82155869ad406,PodSandboxId:ebc71356fe8860c5eadadc4bfc35fe223c81b382b7fa4f7400dfdd4e30cca8e9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727397561973673539,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hczmj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 55e4dd58-9193-49ba-a2e8-1c6835898fb1,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c06ebd9099a79e7ccf81acb3dcdfa061f142b4657de196fa50e568e5b299930,PodSandboxId:8f236d02ca028f9009a4efcc28e0562a8b0e8ec154921e53c93e5a527823c39a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727397416531750974,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kg8kf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee98faac-e03c-427f-9a78-2cf06d2f85cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0d4e929a59caa5d6cdfb939587ec81dce00105e7b9350778204b299cf597427,PodSandboxId:2cb3143c36c8e5612e26df2355c120393a34014b84051ee13e5f0f641240ed61,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727397416548806637,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-479dv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
ee318b64-2274-4106-93ed-9f62151107f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9f2637b4124e6d3087dd4a694ebb58286309afd46d561d6051eaaf6ba88126a,PodSandboxId:399bb953593cc2b3743577abae1f7410c1d14dc409256b74dd104c335e4a19a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1727397416493017043,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbafe551-2645-4016-83f6-1133824d926d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:805b55d391308302ebc0884d741fd7ca86ffe2f6feed8bf7ab229f3729f34327,PodSandboxId:7e2d35a1098a1e498cdf730b14a6d4f456431c09085148024bcec56931467462,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17273974
04353382193,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-l6ncl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3861149b-7c67-4d48-9d24-8fa08aefda61,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:182f24ac501b715adc06f080914c11407429e052bc7a726892761dd0a2d3a8e9,PodSandboxId:c0f5b32248925e239a327ed4b6dc2a3da7f10accded478a3ce22050a8fe332d8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727397404131622207,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7n244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9fac118-1b31-4cf3-bc21-a4536e45a511,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:555c7e8f6d5181676711d15bda6aa11fd8d84d9fff0f6e98280c72d5296aefad,PodSandboxId:710e2b00db1780a3cb652fad6898ecff25d5f37f052ba6e0438aa39b3ff2ada9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727397395791349240,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3f83edb960a7290e67f3d1729807ccd,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c88792788fc238aaae860e14a6c44c40020da3356d29223917fe2fb2e8901ac,PodSandboxId:74609d9fcf5f5f8d3b57d4290bf525ef816e716d1438ea25df07d7a697e2bb1a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727397392427437868,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10057dece9752ed428ddf4bfd465bb3d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:536c1c26f6d72525b81ce4c35ed530528a8cd001f4c530cea2e1d722325e76b3,PodSandboxId:de8c10edafaa7ba5a57a5150b492fa19b6a95a38b8f3da7e2385b723a1d4f907,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727397392442661616,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a32cc8b63ea212ed38709daf6762cc1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa717868fa66e6c86747ecfb1ac580a98666975a9c6974d3a1037451ff37576e,PodSandboxId:4a215208b0ed2928db08b226477bc8cf664180903da62b51aaf986d8c212336c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727397392387673966,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71a28d11a5db44bbf2777b262efa1514,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dcaba50a39a2f812258d986d3444002c5a887ee474104a98a69129c21ec40db,PodSandboxId:8e73f2182b892b451dcd1c013adf2711f2f406765703f34eb3d44a64d29e882b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727397392278746359,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-631834,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afee14d1206143c4d719c111467c379b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f1a8743d-9502-4852-a04d-0d8b40d2e8bc name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	74dc20e31bc6d       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   ebc71356fe886       busybox-7dff88458-hczmj
	f0d4e929a59ca       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   2cb3143c36c8e       coredns-7c65d6cfc9-479dv
	3c06ebd9099a7       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   8f236d02ca028       coredns-7c65d6cfc9-kg8kf
	a9f2637b4124e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   399bb953593cc       storage-provisioner
	805b55d391308       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   7e2d35a1098a1       kindnet-l6ncl
	182f24ac501b7       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   c0f5b32248925       kube-proxy-7n244
	555c7e8f6d518       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   710e2b00db178       kube-vip-ha-631834
	536c1c26f6d72       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   de8c10edafaa7       etcd-ha-631834
	5c88792788fc2       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   74609d9fcf5f5       kube-scheduler-ha-631834
	aa717868fa66e       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   4a215208b0ed2       kube-controller-manager-ha-631834
	5dcaba50a39a2       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   8e73f2182b892       kube-apiserver-ha-631834
	
	
	==> coredns [3c06ebd9099a79e7ccf81acb3dcdfa061f142b4657de196fa50e568e5b299930] <==
	[INFO] 10.244.1.2:33318 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000158302s
	[INFO] 10.244.1.2:38992 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000210731s
	[INFO] 10.244.1.2:33288 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000154244s
	[INFO] 10.244.2.2:52842 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000181224s
	[INFO] 10.244.2.2:39802 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001542919s
	[INFO] 10.244.2.2:47825 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000115718s
	[INFO] 10.244.2.2:38071 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000153076s
	[INFO] 10.244.0.4:46433 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001871874s
	[INFO] 10.244.0.4:34697 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000054557s
	[INFO] 10.244.1.2:54898 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00014886s
	[INFO] 10.244.2.2:34064 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000136896s
	[INFO] 10.244.0.4:38416 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000149012s
	[INFO] 10.244.0.4:40833 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00014405s
	[INFO] 10.244.0.4:44560 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000077158s
	[INFO] 10.244.0.4:46143 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000171018s
	[INFO] 10.244.1.2:56595 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000249758s
	[INFO] 10.244.1.2:34731 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000198874s
	[INFO] 10.244.1.2:47614 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000132758s
	[INFO] 10.244.1.2:36248 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00015406s
	[INFO] 10.244.2.2:34744 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136863s
	[INFO] 10.244.2.2:34972 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000094616s
	[INFO] 10.244.2.2:52746 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000078955s
	[INFO] 10.244.0.4:39419 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113274s
	[INFO] 10.244.0.4:59554 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000106105s
	[INFO] 10.244.0.4:39476 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000054775s
	
	
	==> coredns [f0d4e929a59caa5d6cdfb939587ec81dce00105e7b9350778204b299cf597427] <==
	[INFO] 10.244.0.4:52853 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001421962s
	[INFO] 10.244.0.4:51515 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000078302s
	[INFO] 10.244.1.2:35739 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003265682s
	[INFO] 10.244.1.2:48683 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000243904s
	[INFO] 10.244.1.2:60448 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000155544s
	[INFO] 10.244.1.2:49238 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002742907s
	[INFO] 10.244.1.2:42211 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125195s
	[INFO] 10.244.2.2:33655 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000213093s
	[INFO] 10.244.2.2:58995 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00171984s
	[INFO] 10.244.2.2:39964 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000149879s
	[INFO] 10.244.2.2:60456 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000227691s
	[INFO] 10.244.0.4:44954 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000086981s
	[INFO] 10.244.0.4:47547 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000166142s
	[INFO] 10.244.0.4:51196 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000214916s
	[INFO] 10.244.0.4:52871 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001284904s
	[INFO] 10.244.0.4:55577 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000216348s
	[INFO] 10.244.0.4:39280 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00003939s
	[INFO] 10.244.1.2:55855 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133643s
	[INFO] 10.244.1.2:60581 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000156682s
	[INFO] 10.244.1.2:47815 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000931s
	[INFO] 10.244.2.2:51419 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000149958s
	[INFO] 10.244.2.2:54004 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000114296s
	[INFO] 10.244.2.2:50685 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000087762s
	[INFO] 10.244.2.2:42257 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000189679s
	[INFO] 10.244.0.4:51433 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00015471s
	
	
	==> describe nodes <==
	Name:               ha-631834
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-631834
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=ha-631834
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_27T00_36_39_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 00:36:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-631834
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 00:42:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 00:39:43 +0000   Fri, 27 Sep 2024 00:36:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 00:39:43 +0000   Fri, 27 Sep 2024 00:36:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 00:39:43 +0000   Fri, 27 Sep 2024 00:36:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 00:39:43 +0000   Fri, 27 Sep 2024 00:36:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.4
	  Hostname:    ha-631834
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c835097a3f3f47119274822a90643a61
	  System UUID:                c835097a-3f3f-4711-9274-822a90643a61
	  Boot ID:                    773a1f71-cccf-4b35-8274-d80167988c3a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-hczmj              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	  kube-system                 coredns-7c65d6cfc9-479dv             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m19s
	  kube-system                 coredns-7c65d6cfc9-kg8kf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m19s
	  kube-system                 etcd-ha-631834                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m24s
	  kube-system                 kindnet-l6ncl                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m19s
	  kube-system                 kube-apiserver-ha-631834             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m24s
	  kube-system                 kube-controller-manager-ha-631834    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m25s
	  kube-system                 kube-proxy-7n244                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 kube-scheduler-ha-631834             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m24s
	  kube-system                 kube-vip-ha-631834                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m17s  kube-proxy       
	  Normal  Starting                 6m24s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m24s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m24s  kubelet          Node ha-631834 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m24s  kubelet          Node ha-631834 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m24s  kubelet          Node ha-631834 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m20s  node-controller  Node ha-631834 event: Registered Node ha-631834 in Controller
	  Normal  NodeReady                6m7s   kubelet          Node ha-631834 status is now: NodeReady
	  Normal  RegisteredNode           5m19s  node-controller  Node ha-631834 event: Registered Node ha-631834 in Controller
	  Normal  RegisteredNode           4m6s   node-controller  Node ha-631834 event: Registered Node ha-631834 in Controller
	
	
	Name:               ha-631834-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-631834-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=ha-631834
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_27T00_37_38_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 00:37:35 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-631834-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 00:40:28 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 27 Sep 2024 00:39:37 +0000   Fri, 27 Sep 2024 00:41:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 27 Sep 2024 00:39:37 +0000   Fri, 27 Sep 2024 00:41:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 27 Sep 2024 00:39:37 +0000   Fri, 27 Sep 2024 00:41:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 27 Sep 2024 00:39:37 +0000   Fri, 27 Sep 2024 00:41:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.184
	  Hostname:    ha-631834-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 949992430050476bb475912d3f8b70cc
	  System UUID:                94999243-0050-476b-b475-912d3f8b70cc
	  Boot ID:                    53eb24e2-e661-44e8-b798-be320838fb5c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-bkws6                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	  kube-system                 etcd-ha-631834-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m25s
	  kube-system                 kindnet-x7kr9                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m27s
	  kube-system                 kube-apiserver-ha-631834-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 kube-controller-manager-ha-631834-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m22s
	  kube-system                 kube-proxy-x2hvh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m27s
	  kube-system                 kube-scheduler-ha-631834-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 kube-vip-ha-631834-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m22s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m27s (x8 over 5m27s)  kubelet          Node ha-631834-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m27s (x8 over 5m27s)  kubelet          Node ha-631834-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m27s (x7 over 5m27s)  kubelet          Node ha-631834-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m25s                  node-controller  Node ha-631834-m02 event: Registered Node ha-631834-m02 in Controller
	  Normal  RegisteredNode           5m19s                  node-controller  Node ha-631834-m02 event: Registered Node ha-631834-m02 in Controller
	  Normal  RegisteredNode           4m6s                   node-controller  Node ha-631834-m02 event: Registered Node ha-631834-m02 in Controller
	  Normal  NodeNotReady             111s                   node-controller  Node ha-631834-m02 status is now: NodeNotReady
	
	
	Name:               ha-631834-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-631834-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=ha-631834
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_27T00_38_51_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 00:38:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-631834-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 00:42:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 00:39:49 +0000   Fri, 27 Sep 2024 00:38:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 00:39:49 +0000   Fri, 27 Sep 2024 00:38:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 00:39:49 +0000   Fri, 27 Sep 2024 00:38:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 00:39:49 +0000   Fri, 27 Sep 2024 00:39:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.92
	  Hostname:    ha-631834-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a890346e739943359cb952ef92382de4
	  System UUID:                a890346e-7399-4335-9cb9-52ef92382de4
	  Boot ID:                    8ca25526-4cfd-4aaa-ab8a-4e67ba42c0bc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-dhthf                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	  kube-system                 etcd-ha-631834-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m13s
	  kube-system                 kindnet-r2qxd                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m15s
	  kube-system                 kube-apiserver-ha-631834-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 kube-controller-manager-ha-631834-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 kube-proxy-22lcj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m15s
	  kube-system                 kube-scheduler-ha-631834-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 kube-vip-ha-631834-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m9s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m15s (x8 over 4m15s)  kubelet          Node ha-631834-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m15s (x8 over 4m15s)  kubelet          Node ha-631834-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m15s (x7 over 4m15s)  kubelet          Node ha-631834-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m14s                  node-controller  Node ha-631834-m03 event: Registered Node ha-631834-m03 in Controller
	  Normal  RegisteredNode           4m10s                  node-controller  Node ha-631834-m03 event: Registered Node ha-631834-m03 in Controller
	  Normal  RegisteredNode           4m6s                   node-controller  Node ha-631834-m03 event: Registered Node ha-631834-m03 in Controller
	
	
	Name:               ha-631834-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-631834-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=ha-631834
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_27T00_39_55_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 00:39:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-631834-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 00:42:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 00:40:25 +0000   Fri, 27 Sep 2024 00:39:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 00:40:25 +0000   Fri, 27 Sep 2024 00:39:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 00:40:25 +0000   Fri, 27 Sep 2024 00:39:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 00:40:25 +0000   Fri, 27 Sep 2024 00:40:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.79
	  Hostname:    ha-631834-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7d5a4987d2674227bf93c72f5a77697a
	  System UUID:                7d5a4987-d267-4227-bf93-c72f5a77697a
	  Boot ID:                    8a8b1cc4-fbfe-41cb-b018-a0e1cc80311a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-667b4       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m7s
	  kube-system                 kube-proxy-klfbb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m1s                 kube-proxy       
	  Normal  NodeAllocatableEnforced  3m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m7s (x2 over 3m8s)  kubelet          Node ha-631834-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m7s (x2 over 3m8s)  kubelet          Node ha-631834-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m7s (x2 over 3m8s)  kubelet          Node ha-631834-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m6s                 node-controller  Node ha-631834-m04 event: Registered Node ha-631834-m04 in Controller
	  Normal  RegisteredNode           3m5s                 node-controller  Node ha-631834-m04 event: Registered Node ha-631834-m04 in Controller
	  Normal  RegisteredNode           3m4s                 node-controller  Node ha-631834-m04 event: Registered Node ha-631834-m04 in Controller
	  Normal  NodeReady                2m47s                kubelet          Node ha-631834-m04 status is now: NodeReady
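
The node states captured above (ha-631834-m02 tainted unreachable with all conditions Unknown, the other three nodes Ready) can be re-queried against a live cluster; a short sketch, again assuming the kubeconfig context is named after the profile:

	kubectl --context ha-631834 get nodes -o wide
	kubectl --context ha-631834 describe node ha-631834-m02 | grep -A 6 "Conditions:"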
	
	
	==> dmesg <==
	[Sep27 00:36] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050412] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039986] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.794291] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.536823] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.593813] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.987708] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.063056] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056033] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.197880] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.118226] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.294623] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +3.981056] systemd-fstab-generator[745]: Ignoring "noauto" option for root device
	[  +4.053805] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +0.059938] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.871905] systemd-fstab-generator[1301]: Ignoring "noauto" option for root device
	[  +0.091402] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.727187] kauditd_printk_skb: 18 callbacks suppressed
	[ +12.324064] kauditd_printk_skb: 41 callbacks suppressed
	[Sep27 00:37] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [536c1c26f6d72525b81ce4c35ed530528a8cd001f4c530cea2e1d722325e76b3] <==
	{"level":"warn","ts":"2024-09-27T00:43:01.835681Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:43:01.929275Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:43:02.005151Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:43:02.015110Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:43:02.018831Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:43:02.027490Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:43:02.028265Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:43:02.036813Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:43:02.046159Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:43:02.049796Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:43:02.052445Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:43:02.059988Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:43:02.067406Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:43:02.073388Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:43:02.077374Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:43:02.080655Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:43:02.086028Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:43:02.092993Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:43:02.099675Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:43:02.103738Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:43:02.107354Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:43:02.111545Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:43:02.117869Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:43:02.123650Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:43:02.128740Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 00:43:02 up 6 min,  0 users,  load average: 0.09, 0.23, 0.14
	Linux ha-631834 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [805b55d391308302ebc0884d741fd7ca86ffe2f6feed8bf7ab229f3729f34327] <==
	I0927 00:42:25.603090       1 main.go:322] Node ha-631834-m04 has CIDR [10.244.3.0/24] 
	I0927 00:42:35.601340       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0927 00:42:35.601467       1 main.go:299] handling current node
	I0927 00:42:35.601518       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0927 00:42:35.601536       1 main.go:322] Node ha-631834-m02 has CIDR [10.244.1.0/24] 
	I0927 00:42:35.601669       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0927 00:42:35.601702       1 main.go:322] Node ha-631834-m03 has CIDR [10.244.2.0/24] 
	I0927 00:42:35.601776       1 main.go:295] Handling node with IPs: map[192.168.39.79:{}]
	I0927 00:42:35.601795       1 main.go:322] Node ha-631834-m04 has CIDR [10.244.3.0/24] 
	I0927 00:42:45.594144       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0927 00:42:45.594344       1 main.go:299] handling current node
	I0927 00:42:45.594373       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0927 00:42:45.594393       1 main.go:322] Node ha-631834-m02 has CIDR [10.244.1.0/24] 
	I0927 00:42:45.594565       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0927 00:42:45.594590       1 main.go:322] Node ha-631834-m03 has CIDR [10.244.2.0/24] 
	I0927 00:42:45.594654       1 main.go:295] Handling node with IPs: map[192.168.39.79:{}]
	I0927 00:42:45.594673       1 main.go:322] Node ha-631834-m04 has CIDR [10.244.3.0/24] 
	I0927 00:42:55.603184       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0927 00:42:55.603559       1 main.go:322] Node ha-631834-m02 has CIDR [10.244.1.0/24] 
	I0927 00:42:55.603878       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0927 00:42:55.604117       1 main.go:322] Node ha-631834-m03 has CIDR [10.244.2.0/24] 
	I0927 00:42:55.604402       1 main.go:295] Handling node with IPs: map[192.168.39.79:{}]
	I0927 00:42:55.605203       1 main.go:322] Node ha-631834-m04 has CIDR [10.244.3.0/24] 
	I0927 00:42:55.605426       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0927 00:42:55.605486       1 main.go:299] handling current node
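
The per-node CIDRs that kindnet reports above come from each node's PodCIDR assignment; they can be cross-checked directly, for example:

	kubectl --context ha-631834 get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR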
	
	
	==> kube-apiserver [5dcaba50a39a2f812258d986d3444002c5a887ee474104a98a69129c21ec40db] <==
	W0927 00:36:37.440538       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.4]
	I0927 00:36:37.441493       1 controller.go:615] quota admission added evaluator for: endpoints
	I0927 00:36:37.445496       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0927 00:36:37.662456       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0927 00:36:38.560626       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0927 00:36:38.578403       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0927 00:36:38.587470       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0927 00:36:43.266579       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0927 00:36:43.419243       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0927 00:39:23.576104       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42282: use of closed network connection
	E0927 00:39:23.771378       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42288: use of closed network connection
	E0927 00:39:23.958682       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42312: use of closed network connection
	E0927 00:39:24.143404       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42328: use of closed network connection
	E0927 00:39:24.321615       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42334: use of closed network connection
	E0927 00:39:24.507069       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42338: use of closed network connection
	E0927 00:39:24.675789       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42344: use of closed network connection
	E0927 00:39:24.862695       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42368: use of closed network connection
	E0927 00:39:25.041111       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42388: use of closed network connection
	E0927 00:39:25.329470       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42408: use of closed network connection
	E0927 00:39:25.500386       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42428: use of closed network connection
	E0927 00:39:25.675043       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42456: use of closed network connection
	E0927 00:39:25.857940       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42472: use of closed network connection
	E0927 00:39:26.048116       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42494: use of closed network connection
	E0927 00:39:26.224537       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42512: use of closed network connection
	W0927 00:40:47.323187       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.4 192.168.39.92]
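
The endpoint reset logged above (master service "kubernetes" narrowed to [192.168.39.4 192.168.39.92]) can be compared with the current Endpoints object and the apiserver pods; a sketch under the same context-name assumption:

	kubectl --context ha-631834 get endpoints kubernetes -o yaml
	kubectl --context ha-631834 -n kube-system get pods -l component=kube-apiserver -o wide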
	
	
	==> kube-controller-manager [aa717868fa66e6c86747ecfb1ac580a98666975a9c6974d3a1037451ff37576e] <==
	I0927 00:39:55.139474       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-631834-m04" podCIDRs=["10.244.3.0/24"]
	I0927 00:39:55.139580       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m04"
	I0927 00:39:55.139638       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m04"
	I0927 00:39:55.151590       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m04"
	I0927 00:39:55.487083       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m04"
	I0927 00:39:55.877769       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m04"
	I0927 00:39:56.804153       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m04"
	I0927 00:39:57.666169       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-631834-m04"
	I0927 00:39:57.666534       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m04"
	I0927 00:39:57.746088       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m04"
	I0927 00:39:58.632655       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m04"
	I0927 00:39:58.726762       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m04"
	I0927 00:40:05.284426       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m04"
	I0927 00:40:15.865636       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-631834-m04"
	I0927 00:40:15.865833       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m04"
	I0927 00:40:15.879964       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m04"
	I0927 00:40:16.781479       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m04"
	I0927 00:40:25.730749       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m04"
	I0927 00:41:11.808076       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-631834-m04"
	I0927 00:41:11.809299       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m02"
	I0927 00:41:11.832517       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m02"
	I0927 00:41:11.890510       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="14.873766ms"
	I0927 00:41:11.890734       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="65.505µs"
	I0927 00:41:12.743419       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m02"
	I0927 00:41:17.028342       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m02"
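
The controller-manager entries around 00:41:11 coincide with ha-631834-m02 being marked NotReady; the resulting taints and the node's event history can be inspected with standard queries, for example:

	kubectl --context ha-631834 get node ha-631834-m02 -o jsonpath='{.spec.taints}{"\n"}'
	kubectl --context ha-631834 get events -A --field-selector involvedObject.name=ha-631834-m02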
	
	
	==> kube-proxy [182f24ac501b715adc06f080914c11407429e052bc7a726892761dd0a2d3a8e9] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0927 00:36:44.513192       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0927 00:36:44.529245       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.4"]
	E0927 00:36:44.529395       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0927 00:36:44.637324       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0927 00:36:44.637425       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0927 00:36:44.637464       1 server_linux.go:169] "Using iptables Proxier"
	I0927 00:36:44.640935       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0927 00:36:44.641713       1 server.go:483] "Version info" version="v1.31.1"
	I0927 00:36:44.641798       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 00:36:44.643999       1 config.go:199] "Starting service config controller"
	I0927 00:36:44.644892       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0927 00:36:44.645302       1 config.go:105] "Starting endpoint slice config controller"
	I0927 00:36:44.645338       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0927 00:36:44.648337       1 config.go:328] "Starting node config controller"
	I0927 00:36:44.650849       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0927 00:36:44.748412       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0927 00:36:44.748475       1 shared_informer.go:320] Caches are synced for service config
	I0927 00:36:44.752495       1 shared_informer.go:320] Caches are synced for node config
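
The nftables cleanup errors above are followed by kube-proxy selecting the iptables Proxier. Whether that matches the configured mode, and whether the iptables rules actually exist on the node, can be checked roughly as follows (profile name assumed to be ha-631834):

	kubectl --context ha-631834 -n kube-system get configmap kube-proxy -o yaml | grep -m1 "mode:"
	minikube -p ha-631834 ssh -- sudo iptables -t nat -L KUBE-SERVICES | head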
	
	
	==> kube-scheduler [5c88792788fc238aaae860e14a6c44c40020da3356d29223917fe2fb2e8901ac] <==
	W0927 00:36:35.715895       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0927 00:36:35.716591       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:36:35.715936       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0927 00:36:35.718435       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:36:35.715973       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0927 00:36:35.718562       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:36:35.719580       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0927 00:36:35.719853       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 00:36:36.589565       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0927 00:36:36.589679       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 00:36:36.648438       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0927 00:36:36.648499       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0927 00:36:36.655529       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0927 00:36:36.655821       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:36:36.677521       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0927 00:36:36.677870       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0927 00:36:36.687963       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0927 00:36:36.688163       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 00:36:36.985650       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0927 00:36:36.985711       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0927 00:36:38.790470       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0927 00:39:55.242771       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-7gjcd\": pod kindnet-7gjcd is already assigned to node \"ha-631834-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-7gjcd" node="ha-631834-m04"
	E0927 00:39:55.242960       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 583b6ea7-5b96-43a8-9f06-70c031554c0e(kube-system/kindnet-7gjcd) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-7gjcd"
	E0927 00:39:55.243000       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-7gjcd\": pod kindnet-7gjcd is already assigned to node \"ha-631834-m04\"" pod="kube-system/kindnet-7gjcd"
	I0927 00:39:55.243040       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-7gjcd" node="ha-631834-m04"
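
The scheduler errors above describe a bind race for kindnet-7gjcd that was aborted because the pod was already assigned to ha-631834-m04. If that pod object still exists, its placement can be confirmed by listing the kindnet pods with their nodes:

	kubectl --context ha-631834 -n kube-system get pods -o wide | grep kindnet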
	
	
	==> kubelet <==
	Sep 27 00:41:38 ha-631834 kubelet[1309]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 27 00:41:38 ha-631834 kubelet[1309]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 27 00:41:38 ha-631834 kubelet[1309]: E0927 00:41:38.620020    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397698619762113,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:41:38 ha-631834 kubelet[1309]: E0927 00:41:38.620049    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397698619762113,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:41:48 ha-631834 kubelet[1309]: E0927 00:41:48.622830    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397708621937313,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:41:48 ha-631834 kubelet[1309]: E0927 00:41:48.622875    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397708621937313,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:41:58 ha-631834 kubelet[1309]: E0927 00:41:58.624102    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397718623839780,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:41:58 ha-631834 kubelet[1309]: E0927 00:41:58.624145    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397718623839780,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:42:08 ha-631834 kubelet[1309]: E0927 00:42:08.626464    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397728626075698,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:42:08 ha-631834 kubelet[1309]: E0927 00:42:08.626520    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397728626075698,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:42:18 ha-631834 kubelet[1309]: E0927 00:42:18.630268    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397738629150202,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:42:18 ha-631834 kubelet[1309]: E0927 00:42:18.630612    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397738629150202,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:42:28 ha-631834 kubelet[1309]: E0927 00:42:28.632510    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397748632150911,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:42:28 ha-631834 kubelet[1309]: E0927 00:42:28.632817    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397748632150911,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:42:38 ha-631834 kubelet[1309]: E0927 00:42:38.503597    1309 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 27 00:42:38 ha-631834 kubelet[1309]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 27 00:42:38 ha-631834 kubelet[1309]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 27 00:42:38 ha-631834 kubelet[1309]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 27 00:42:38 ha-631834 kubelet[1309]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 27 00:42:38 ha-631834 kubelet[1309]: E0927 00:42:38.634672    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397758634392335,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:42:38 ha-631834 kubelet[1309]: E0927 00:42:38.634711    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397758634392335,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:42:48 ha-631834 kubelet[1309]: E0927 00:42:48.636173    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397768635813162,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:42:48 ha-631834 kubelet[1309]: E0927 00:42:48.636541    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397768635813162,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:42:58 ha-631834 kubelet[1309]: E0927 00:42:58.638644    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397778638333338,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:42:58 ha-631834 kubelet[1309]: E0927 00:42:58.638684    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397778638333338,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-631834 -n ha-631834
helpers_test.go:261: (dbg) Run:  kubectl --context ha-631834 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.56s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (6.43s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-amd64 -p ha-631834 status -v=7 --alsologtostderr: (4.070310461s)
ha_test.go:435: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-631834 status -v=7 --alsologtostderr": 
ha_test.go:438: status says not all four hosts are running: args "out/minikube-linux-amd64 -p ha-631834 status -v=7 --alsologtostderr": 
ha_test.go:441: status says not all four kubelets are running: args "out/minikube-linux-amd64 -p ha-631834 status -v=7 --alsologtostderr": 
ha_test.go:444: status says not all three apiservers are running: args "out/minikube-linux-amd64 -p ha-631834 status -v=7 --alsologtostderr": 
ha_test.go:448: (dbg) Run:  kubectl get nodes
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-631834 -n ha-631834
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-631834 logs -n 25: (1.393390513s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-631834 ssh -n                                                                | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-631834 cp ha-631834-m03:/home/docker/cp-test.txt                             | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834:/home/docker/cp-test_ha-631834-m03_ha-631834.txt                      |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n                                                                | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n ha-631834 sudo cat                                             | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | /home/docker/cp-test_ha-631834-m03_ha-631834.txt                                |           |         |         |                     |                     |
	| cp      | ha-631834 cp ha-631834-m03:/home/docker/cp-test.txt                             | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m02:/home/docker/cp-test_ha-631834-m03_ha-631834-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n                                                                | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n ha-631834-m02 sudo cat                                         | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | /home/docker/cp-test_ha-631834-m03_ha-631834-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-631834 cp ha-631834-m03:/home/docker/cp-test.txt                             | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m04:/home/docker/cp-test_ha-631834-m03_ha-631834-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n                                                                | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n ha-631834-m04 sudo cat                                         | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | /home/docker/cp-test_ha-631834-m03_ha-631834-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-631834 cp testdata/cp-test.txt                                               | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n                                                                | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-631834 cp ha-631834-m04:/home/docker/cp-test.txt                             | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile381097914/001/cp-test_ha-631834-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n                                                                | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-631834 cp ha-631834-m04:/home/docker/cp-test.txt                             | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834:/home/docker/cp-test_ha-631834-m04_ha-631834.txt                      |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n                                                                | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n ha-631834 sudo cat                                             | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | /home/docker/cp-test_ha-631834-m04_ha-631834.txt                                |           |         |         |                     |                     |
	| cp      | ha-631834 cp ha-631834-m04:/home/docker/cp-test.txt                             | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m02:/home/docker/cp-test_ha-631834-m04_ha-631834-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n                                                                | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n ha-631834-m02 sudo cat                                         | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | /home/docker/cp-test_ha-631834-m04_ha-631834-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-631834 cp ha-631834-m04:/home/docker/cp-test.txt                             | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m03:/home/docker/cp-test_ha-631834-m04_ha-631834-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n                                                                | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n ha-631834-m03 sudo cat                                         | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | /home/docker/cp-test_ha-631834-m04_ha-631834-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-631834 node stop m02 -v=7                                                    | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-631834 node start m02 -v=7                                                   | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:43 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 00:36:00
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 00:36:00.733270   34022 out.go:345] Setting OutFile to fd 1 ...
	I0927 00:36:00.733561   34022 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:36:00.733572   34022 out.go:358] Setting ErrFile to fd 2...
	I0927 00:36:00.733578   34022 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:36:00.733765   34022 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-14935/.minikube/bin
	I0927 00:36:00.734369   34022 out.go:352] Setting JSON to false
	I0927 00:36:00.735232   34022 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4706,"bootTime":1727392655,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 00:36:00.735334   34022 start.go:139] virtualization: kvm guest
	I0927 00:36:00.737562   34022 out.go:177] * [ha-631834] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0927 00:36:00.738940   34022 notify.go:220] Checking for updates...
	I0927 00:36:00.738971   34022 out.go:177]   - MINIKUBE_LOCATION=19711
	I0927 00:36:00.740322   34022 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 00:36:00.741556   34022 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 00:36:00.742777   34022 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-14935/.minikube
	I0927 00:36:00.744101   34022 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0927 00:36:00.745418   34022 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 00:36:00.746900   34022 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 00:36:00.781665   34022 out.go:177] * Using the kvm2 driver based on user configuration
	I0927 00:36:00.782952   34022 start.go:297] selected driver: kvm2
	I0927 00:36:00.782969   34022 start.go:901] validating driver "kvm2" against <nil>
	I0927 00:36:00.782989   34022 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 00:36:00.784037   34022 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 00:36:00.784159   34022 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19711-14935/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0927 00:36:00.799229   34022 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0927 00:36:00.799294   34022 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 00:36:00.799639   34022 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 00:36:00.799677   34022 cni.go:84] Creating CNI manager for ""
	I0927 00:36:00.799725   34022 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0927 00:36:00.799740   34022 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0927 00:36:00.799811   34022 start.go:340] cluster config:
	{Name:ha-631834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-631834 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 00:36:00.799933   34022 iso.go:125] acquiring lock: {Name:mkc202a14fbe20838e31e7efc444c4f65351f9ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 00:36:00.801666   34022 out.go:177] * Starting "ha-631834" primary control-plane node in "ha-631834" cluster
	I0927 00:36:00.802817   34022 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 00:36:00.802860   34022 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0927 00:36:00.802872   34022 cache.go:56] Caching tarball of preloaded images
	I0927 00:36:00.802951   34022 preload.go:172] Found /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0927 00:36:00.802964   34022 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0927 00:36:00.803416   34022 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/config.json ...
	I0927 00:36:00.803442   34022 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/config.json: {Name:mk6367ac20858a15eb53ac7fa5c4186f9176d965 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:36:00.803588   34022 start.go:360] acquireMachinesLock for ha-631834: {Name:mkef866a3f2eb57063758ef06140cca3a2e40b18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 00:36:00.803621   34022 start.go:364] duration metric: took 19.585µs to acquireMachinesLock for "ha-631834"
	I0927 00:36:00.803641   34022 start.go:93] Provisioning new machine with config: &{Name:ha-631834 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-631834 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 00:36:00.803696   34022 start.go:125] createHost starting for "" (driver="kvm2")
	I0927 00:36:00.805235   34022 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0927 00:36:00.805379   34022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:36:00.805413   34022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:36:00.819286   34022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35625
	I0927 00:36:00.819786   34022 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:36:00.820338   34022 main.go:141] libmachine: Using API Version  1
	I0927 00:36:00.820363   34022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:36:00.820724   34022 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:36:00.820928   34022 main.go:141] libmachine: (ha-631834) Calling .GetMachineName
	I0927 00:36:00.821048   34022 main.go:141] libmachine: (ha-631834) Calling .DriverName
	I0927 00:36:00.821188   34022 start.go:159] libmachine.API.Create for "ha-631834" (driver="kvm2")
	I0927 00:36:00.821209   34022 client.go:168] LocalClient.Create starting
	I0927 00:36:00.821241   34022 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem
	I0927 00:36:00.821269   34022 main.go:141] libmachine: Decoding PEM data...
	I0927 00:36:00.821289   34022 main.go:141] libmachine: Parsing certificate...
	I0927 00:36:00.821354   34022 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem
	I0927 00:36:00.821378   34022 main.go:141] libmachine: Decoding PEM data...
	I0927 00:36:00.821391   34022 main.go:141] libmachine: Parsing certificate...
	I0927 00:36:00.821430   34022 main.go:141] libmachine: Running pre-create checks...
	I0927 00:36:00.821441   34022 main.go:141] libmachine: (ha-631834) Calling .PreCreateCheck
	I0927 00:36:00.821748   34022 main.go:141] libmachine: (ha-631834) Calling .GetConfigRaw
	I0927 00:36:00.822055   34022 main.go:141] libmachine: Creating machine...
	I0927 00:36:00.822066   34022 main.go:141] libmachine: (ha-631834) Calling .Create
	I0927 00:36:00.822200   34022 main.go:141] libmachine: (ha-631834) Creating KVM machine...
	I0927 00:36:00.823422   34022 main.go:141] libmachine: (ha-631834) DBG | found existing default KVM network
	I0927 00:36:00.824110   34022 main.go:141] libmachine: (ha-631834) DBG | I0927 00:36:00.823958   34045 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000122e20}
	I0927 00:36:00.824171   34022 main.go:141] libmachine: (ha-631834) DBG | created network xml: 
	I0927 00:36:00.824189   34022 main.go:141] libmachine: (ha-631834) DBG | <network>
	I0927 00:36:00.824198   34022 main.go:141] libmachine: (ha-631834) DBG |   <name>mk-ha-631834</name>
	I0927 00:36:00.824206   34022 main.go:141] libmachine: (ha-631834) DBG |   <dns enable='no'/>
	I0927 00:36:00.824216   34022 main.go:141] libmachine: (ha-631834) DBG |   
	I0927 00:36:00.824223   34022 main.go:141] libmachine: (ha-631834) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0927 00:36:00.824229   34022 main.go:141] libmachine: (ha-631834) DBG |     <dhcp>
	I0927 00:36:00.824234   34022 main.go:141] libmachine: (ha-631834) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0927 00:36:00.824245   34022 main.go:141] libmachine: (ha-631834) DBG |     </dhcp>
	I0927 00:36:00.824249   34022 main.go:141] libmachine: (ha-631834) DBG |   </ip>
	I0927 00:36:00.824253   34022 main.go:141] libmachine: (ha-631834) DBG |   
	I0927 00:36:00.824262   34022 main.go:141] libmachine: (ha-631834) DBG | </network>
	I0927 00:36:00.824270   34022 main.go:141] libmachine: (ha-631834) DBG | 
	I0927 00:36:00.829058   34022 main.go:141] libmachine: (ha-631834) DBG | trying to create private KVM network mk-ha-631834 192.168.39.0/24...
	I0927 00:36:00.893473   34022 main.go:141] libmachine: (ha-631834) Setting up store path in /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834 ...
	I0927 00:36:00.893502   34022 main.go:141] libmachine: (ha-631834) DBG | private KVM network mk-ha-631834 192.168.39.0/24 created
	I0927 00:36:00.893514   34022 main.go:141] libmachine: (ha-631834) Building disk image from file:///home/jenkins/minikube-integration/19711-14935/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0927 00:36:00.893569   34022 main.go:141] libmachine: (ha-631834) DBG | I0927 00:36:00.893424   34045 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19711-14935/.minikube
	I0927 00:36:00.893608   34022 main.go:141] libmachine: (ha-631834) Downloading /home/jenkins/minikube-integration/19711-14935/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19711-14935/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0927 00:36:01.131795   34022 main.go:141] libmachine: (ha-631834) DBG | I0927 00:36:01.131690   34045 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834/id_rsa...
	I0927 00:36:01.270727   34022 main.go:141] libmachine: (ha-631834) DBG | I0927 00:36:01.270595   34045 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834/ha-631834.rawdisk...
	I0927 00:36:01.270761   34022 main.go:141] libmachine: (ha-631834) DBG | Writing magic tar header
	I0927 00:36:01.270787   34022 main.go:141] libmachine: (ha-631834) DBG | Writing SSH key tar header
	I0927 00:36:01.270801   34022 main.go:141] libmachine: (ha-631834) DBG | I0927 00:36:01.270770   34045 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834 ...
	I0927 00:36:01.270904   34022 main.go:141] libmachine: (ha-631834) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834
	I0927 00:36:01.270938   34022 main.go:141] libmachine: (ha-631834) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834 (perms=drwx------)
	I0927 00:36:01.270949   34022 main.go:141] libmachine: (ha-631834) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935/.minikube/machines
	I0927 00:36:01.270966   34022 main.go:141] libmachine: (ha-631834) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935/.minikube
	I0927 00:36:01.270976   34022 main.go:141] libmachine: (ha-631834) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935
	I0927 00:36:01.270986   34022 main.go:141] libmachine: (ha-631834) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0927 00:36:01.270995   34022 main.go:141] libmachine: (ha-631834) DBG | Checking permissions on dir: /home/jenkins
	I0927 00:36:01.271007   34022 main.go:141] libmachine: (ha-631834) DBG | Checking permissions on dir: /home
	I0927 00:36:01.271032   34022 main.go:141] libmachine: (ha-631834) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935/.minikube/machines (perms=drwxr-xr-x)
	I0927 00:36:01.271042   34022 main.go:141] libmachine: (ha-631834) DBG | Skipping /home - not owner
	I0927 00:36:01.271059   34022 main.go:141] libmachine: (ha-631834) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935/.minikube (perms=drwxr-xr-x)
	I0927 00:36:01.271072   34022 main.go:141] libmachine: (ha-631834) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935 (perms=drwxrwxr-x)
	I0927 00:36:01.271090   34022 main.go:141] libmachine: (ha-631834) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0927 00:36:01.271101   34022 main.go:141] libmachine: (ha-631834) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0927 00:36:01.271119   34022 main.go:141] libmachine: (ha-631834) Creating domain...
	I0927 00:36:01.272173   34022 main.go:141] libmachine: (ha-631834) define libvirt domain using xml: 
	I0927 00:36:01.272191   34022 main.go:141] libmachine: (ha-631834) <domain type='kvm'>
	I0927 00:36:01.272198   34022 main.go:141] libmachine: (ha-631834)   <name>ha-631834</name>
	I0927 00:36:01.272206   34022 main.go:141] libmachine: (ha-631834)   <memory unit='MiB'>2200</memory>
	I0927 00:36:01.272211   34022 main.go:141] libmachine: (ha-631834)   <vcpu>2</vcpu>
	I0927 00:36:01.272217   34022 main.go:141] libmachine: (ha-631834)   <features>
	I0927 00:36:01.272224   34022 main.go:141] libmachine: (ha-631834)     <acpi/>
	I0927 00:36:01.272235   34022 main.go:141] libmachine: (ha-631834)     <apic/>
	I0927 00:36:01.272246   34022 main.go:141] libmachine: (ha-631834)     <pae/>
	I0927 00:36:01.272256   34022 main.go:141] libmachine: (ha-631834)     
	I0927 00:36:01.272263   34022 main.go:141] libmachine: (ha-631834)   </features>
	I0927 00:36:01.272282   34022 main.go:141] libmachine: (ha-631834)   <cpu mode='host-passthrough'>
	I0927 00:36:01.272289   34022 main.go:141] libmachine: (ha-631834)   
	I0927 00:36:01.272293   34022 main.go:141] libmachine: (ha-631834)   </cpu>
	I0927 00:36:01.272297   34022 main.go:141] libmachine: (ha-631834)   <os>
	I0927 00:36:01.272301   34022 main.go:141] libmachine: (ha-631834)     <type>hvm</type>
	I0927 00:36:01.272307   34022 main.go:141] libmachine: (ha-631834)     <boot dev='cdrom'/>
	I0927 00:36:01.272319   34022 main.go:141] libmachine: (ha-631834)     <boot dev='hd'/>
	I0927 00:36:01.272332   34022 main.go:141] libmachine: (ha-631834)     <bootmenu enable='no'/>
	I0927 00:36:01.272343   34022 main.go:141] libmachine: (ha-631834)   </os>
	I0927 00:36:01.272353   34022 main.go:141] libmachine: (ha-631834)   <devices>
	I0927 00:36:01.272363   34022 main.go:141] libmachine: (ha-631834)     <disk type='file' device='cdrom'>
	I0927 00:36:01.272378   34022 main.go:141] libmachine: (ha-631834)       <source file='/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834/boot2docker.iso'/>
	I0927 00:36:01.272388   34022 main.go:141] libmachine: (ha-631834)       <target dev='hdc' bus='scsi'/>
	I0927 00:36:01.272453   34022 main.go:141] libmachine: (ha-631834)       <readonly/>
	I0927 00:36:01.272477   34022 main.go:141] libmachine: (ha-631834)     </disk>
	I0927 00:36:01.272488   34022 main.go:141] libmachine: (ha-631834)     <disk type='file' device='disk'>
	I0927 00:36:01.272497   34022 main.go:141] libmachine: (ha-631834)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0927 00:36:01.272515   34022 main.go:141] libmachine: (ha-631834)       <source file='/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834/ha-631834.rawdisk'/>
	I0927 00:36:01.272530   34022 main.go:141] libmachine: (ha-631834)       <target dev='hda' bus='virtio'/>
	I0927 00:36:01.272545   34022 main.go:141] libmachine: (ha-631834)     </disk>
	I0927 00:36:01.272560   34022 main.go:141] libmachine: (ha-631834)     <interface type='network'>
	I0927 00:36:01.272569   34022 main.go:141] libmachine: (ha-631834)       <source network='mk-ha-631834'/>
	I0927 00:36:01.272578   34022 main.go:141] libmachine: (ha-631834)       <model type='virtio'/>
	I0927 00:36:01.272589   34022 main.go:141] libmachine: (ha-631834)     </interface>
	I0927 00:36:01.272599   34022 main.go:141] libmachine: (ha-631834)     <interface type='network'>
	I0927 00:36:01.272607   34022 main.go:141] libmachine: (ha-631834)       <source network='default'/>
	I0927 00:36:01.272617   34022 main.go:141] libmachine: (ha-631834)       <model type='virtio'/>
	I0927 00:36:01.272638   34022 main.go:141] libmachine: (ha-631834)     </interface>
	I0927 00:36:01.272657   34022 main.go:141] libmachine: (ha-631834)     <serial type='pty'>
	I0927 00:36:01.272670   34022 main.go:141] libmachine: (ha-631834)       <target port='0'/>
	I0927 00:36:01.272680   34022 main.go:141] libmachine: (ha-631834)     </serial>
	I0927 00:36:01.272689   34022 main.go:141] libmachine: (ha-631834)     <console type='pty'>
	I0927 00:36:01.272711   34022 main.go:141] libmachine: (ha-631834)       <target type='serial' port='0'/>
	I0927 00:36:01.272724   34022 main.go:141] libmachine: (ha-631834)     </console>
	I0927 00:36:01.272736   34022 main.go:141] libmachine: (ha-631834)     <rng model='virtio'>
	I0927 00:36:01.272748   34022 main.go:141] libmachine: (ha-631834)       <backend model='random'>/dev/random</backend>
	I0927 00:36:01.272758   34022 main.go:141] libmachine: (ha-631834)     </rng>
	I0927 00:36:01.272767   34022 main.go:141] libmachine: (ha-631834)     
	I0927 00:36:01.272773   34022 main.go:141] libmachine: (ha-631834)     
	I0927 00:36:01.272784   34022 main.go:141] libmachine: (ha-631834)   </devices>
	I0927 00:36:01.272793   34022 main.go:141] libmachine: (ha-631834) </domain>
	I0927 00:36:01.272813   34022 main.go:141] libmachine: (ha-631834) 
	I0927 00:36:01.276563   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:8c:cf:67 in network default
	I0927 00:36:01.277046   34022 main.go:141] libmachine: (ha-631834) Ensuring networks are active...
	I0927 00:36:01.277065   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:01.277664   34022 main.go:141] libmachine: (ha-631834) Ensuring network default is active
	I0927 00:36:01.277924   34022 main.go:141] libmachine: (ha-631834) Ensuring network mk-ha-631834 is active
	I0927 00:36:01.278421   34022 main.go:141] libmachine: (ha-631834) Getting domain xml...
	I0927 00:36:01.279045   34022 main.go:141] libmachine: (ha-631834) Creating domain...
	I0927 00:36:02.458607   34022 main.go:141] libmachine: (ha-631834) Waiting to get IP...
	I0927 00:36:02.459345   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:02.459714   34022 main.go:141] libmachine: (ha-631834) DBG | unable to find current IP address of domain ha-631834 in network mk-ha-631834
	I0927 00:36:02.459736   34022 main.go:141] libmachine: (ha-631834) DBG | I0927 00:36:02.459698   34045 retry.go:31] will retry after 212.922851ms: waiting for machine to come up
	I0927 00:36:02.674121   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:02.674559   34022 main.go:141] libmachine: (ha-631834) DBG | unable to find current IP address of domain ha-631834 in network mk-ha-631834
	I0927 00:36:02.674578   34022 main.go:141] libmachine: (ha-631834) DBG | I0927 00:36:02.674520   34045 retry.go:31] will retry after 258.802525ms: waiting for machine to come up
	I0927 00:36:02.934927   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:02.935352   34022 main.go:141] libmachine: (ha-631834) DBG | unable to find current IP address of domain ha-631834 in network mk-ha-631834
	I0927 00:36:02.935388   34022 main.go:141] libmachine: (ha-631834) DBG | I0927 00:36:02.935333   34045 retry.go:31] will retry after 385.263435ms: waiting for machine to come up
	I0927 00:36:03.321940   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:03.322382   34022 main.go:141] libmachine: (ha-631834) DBG | unable to find current IP address of domain ha-631834 in network mk-ha-631834
	I0927 00:36:03.322457   34022 main.go:141] libmachine: (ha-631834) DBG | I0927 00:36:03.322352   34045 retry.go:31] will retry after 458.033114ms: waiting for machine to come up
	I0927 00:36:03.782012   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:03.782379   34022 main.go:141] libmachine: (ha-631834) DBG | unable to find current IP address of domain ha-631834 in network mk-ha-631834
	I0927 00:36:03.782406   34022 main.go:141] libmachine: (ha-631834) DBG | I0927 00:36:03.782329   34045 retry.go:31] will retry after 619.891619ms: waiting for machine to come up
	I0927 00:36:04.404184   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:04.404742   34022 main.go:141] libmachine: (ha-631834) DBG | unable to find current IP address of domain ha-631834 in network mk-ha-631834
	I0927 00:36:04.404769   34022 main.go:141] libmachine: (ha-631834) DBG | I0927 00:36:04.404698   34045 retry.go:31] will retry after 668.661978ms: waiting for machine to come up
	I0927 00:36:05.074541   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:05.074956   34022 main.go:141] libmachine: (ha-631834) DBG | unable to find current IP address of domain ha-631834 in network mk-ha-631834
	I0927 00:36:05.074981   34022 main.go:141] libmachine: (ha-631834) DBG | I0927 00:36:05.074931   34045 retry.go:31] will retry after 1.139973505s: waiting for machine to come up
	I0927 00:36:06.216868   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:06.217267   34022 main.go:141] libmachine: (ha-631834) DBG | unable to find current IP address of domain ha-631834 in network mk-ha-631834
	I0927 00:36:06.217283   34022 main.go:141] libmachine: (ha-631834) DBG | I0927 00:36:06.217233   34045 retry.go:31] will retry after 1.161217409s: waiting for machine to come up
	I0927 00:36:07.380453   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:07.380855   34022 main.go:141] libmachine: (ha-631834) DBG | unable to find current IP address of domain ha-631834 in network mk-ha-631834
	I0927 00:36:07.380881   34022 main.go:141] libmachine: (ha-631834) DBG | I0927 00:36:07.380831   34045 retry.go:31] will retry after 1.625874527s: waiting for machine to come up
	I0927 00:36:09.008452   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:09.008818   34022 main.go:141] libmachine: (ha-631834) DBG | unable to find current IP address of domain ha-631834 in network mk-ha-631834
	I0927 00:36:09.008846   34022 main.go:141] libmachine: (ha-631834) DBG | I0927 00:36:09.008771   34045 retry.go:31] will retry after 1.776898319s: waiting for machine to come up
	I0927 00:36:10.787443   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:10.787818   34022 main.go:141] libmachine: (ha-631834) DBG | unable to find current IP address of domain ha-631834 in network mk-ha-631834
	I0927 00:36:10.787869   34022 main.go:141] libmachine: (ha-631834) DBG | I0927 00:36:10.787802   34045 retry.go:31] will retry after 2.764791752s: waiting for machine to come up
	I0927 00:36:13.556224   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:13.556671   34022 main.go:141] libmachine: (ha-631834) DBG | unable to find current IP address of domain ha-631834 in network mk-ha-631834
	I0927 00:36:13.556691   34022 main.go:141] libmachine: (ha-631834) DBG | I0927 00:36:13.556636   34045 retry.go:31] will retry after 2.903263764s: waiting for machine to come up
	I0927 00:36:16.461156   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:16.461600   34022 main.go:141] libmachine: (ha-631834) DBG | unable to find current IP address of domain ha-631834 in network mk-ha-631834
	I0927 00:36:16.461623   34022 main.go:141] libmachine: (ha-631834) DBG | I0927 00:36:16.461567   34045 retry.go:31] will retry after 4.074333009s: waiting for machine to come up
	I0927 00:36:20.540756   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:20.541254   34022 main.go:141] libmachine: (ha-631834) Found IP for machine: 192.168.39.4
	I0927 00:36:20.541349   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has current primary IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:20.541373   34022 main.go:141] libmachine: (ha-631834) Reserving static IP address...
	I0927 00:36:20.541632   34022 main.go:141] libmachine: (ha-631834) DBG | unable to find host DHCP lease matching {name: "ha-631834", mac: "52:54:00:bc:09:a5", ip: "192.168.39.4"} in network mk-ha-631834
	I0927 00:36:20.614776   34022 main.go:141] libmachine: (ha-631834) DBG | Getting to WaitForSSH function...
	I0927 00:36:20.614808   34022 main.go:141] libmachine: (ha-631834) Reserved static IP address: 192.168.39.4
	I0927 00:36:20.614821   34022 main.go:141] libmachine: (ha-631834) Waiting for SSH to be available...
	I0927 00:36:20.617249   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:20.617621   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:minikube Clientid:01:52:54:00:bc:09:a5}
	I0927 00:36:20.617669   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:20.617792   34022 main.go:141] libmachine: (ha-631834) DBG | Using SSH client type: external
	I0927 00:36:20.617816   34022 main.go:141] libmachine: (ha-631834) DBG | Using SSH private key: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834/id_rsa (-rw-------)
	I0927 00:36:20.617844   34022 main.go:141] libmachine: (ha-631834) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.4 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 00:36:20.617868   34022 main.go:141] libmachine: (ha-631834) DBG | About to run SSH command:
	I0927 00:36:20.617881   34022 main.go:141] libmachine: (ha-631834) DBG | exit 0
	I0927 00:36:20.747285   34022 main.go:141] libmachine: (ha-631834) DBG | SSH cmd err, output: <nil>: 
	I0927 00:36:20.747567   34022 main.go:141] libmachine: (ha-631834) KVM machine creation complete!
	I0927 00:36:20.747871   34022 main.go:141] libmachine: (ha-631834) Calling .GetConfigRaw
	I0927 00:36:20.748388   34022 main.go:141] libmachine: (ha-631834) Calling .DriverName
	I0927 00:36:20.748565   34022 main.go:141] libmachine: (ha-631834) Calling .DriverName
	I0927 00:36:20.748693   34022 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0927 00:36:20.748716   34022 main.go:141] libmachine: (ha-631834) Calling .GetState
	I0927 00:36:20.749749   34022 main.go:141] libmachine: Detecting operating system of created instance...
	I0927 00:36:20.749770   34022 main.go:141] libmachine: Waiting for SSH to be available...
	I0927 00:36:20.749777   34022 main.go:141] libmachine: Getting to WaitForSSH function...
	I0927 00:36:20.749785   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:36:20.751512   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:20.751780   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:36:20.751802   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:20.751906   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:36:20.752078   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:36:20.752231   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:36:20.752323   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:36:20.752604   34022 main.go:141] libmachine: Using SSH client type: native
	I0927 00:36:20.752800   34022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0927 00:36:20.752812   34022 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0927 00:36:20.862622   34022 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 00:36:20.862650   34022 main.go:141] libmachine: Detecting the provisioner...
	I0927 00:36:20.862657   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:36:20.865244   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:20.865552   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:36:20.865577   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:20.865716   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:36:20.865945   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:36:20.866143   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:36:20.866275   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:36:20.866412   34022 main.go:141] libmachine: Using SSH client type: native
	I0927 00:36:20.866570   34022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0927 00:36:20.866579   34022 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0927 00:36:20.980090   34022 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0927 00:36:20.980221   34022 main.go:141] libmachine: found compatible host: buildroot
	I0927 00:36:20.980236   34022 main.go:141] libmachine: Provisioning with buildroot...
	I0927 00:36:20.980246   34022 main.go:141] libmachine: (ha-631834) Calling .GetMachineName
	I0927 00:36:20.980486   34022 buildroot.go:166] provisioning hostname "ha-631834"
	I0927 00:36:20.980510   34022 main.go:141] libmachine: (ha-631834) Calling .GetMachineName
	I0927 00:36:20.980686   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:36:20.982900   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:20.983180   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:36:20.983205   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:20.983320   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:36:20.983483   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:36:20.983596   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:36:20.983828   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:36:20.983972   34022 main.go:141] libmachine: Using SSH client type: native
	I0927 00:36:20.984135   34022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0927 00:36:20.984146   34022 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-631834 && echo "ha-631834" | sudo tee /etc/hostname
	I0927 00:36:21.110505   34022 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-631834
	
	I0927 00:36:21.110541   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:36:21.113154   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:21.113483   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:36:21.113507   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:21.113696   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:36:21.113890   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:36:21.114053   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:36:21.114223   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:36:21.114372   34022 main.go:141] libmachine: Using SSH client type: native
	I0927 00:36:21.114529   34022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0927 00:36:21.114543   34022 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-631834' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-631834/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-631834' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 00:36:21.236395   34022 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 00:36:21.236427   34022 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19711-14935/.minikube CaCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19711-14935/.minikube}
	I0927 00:36:21.236467   34022 buildroot.go:174] setting up certificates
	I0927 00:36:21.236480   34022 provision.go:84] configureAuth start
	I0927 00:36:21.236491   34022 main.go:141] libmachine: (ha-631834) Calling .GetMachineName
	I0927 00:36:21.236728   34022 main.go:141] libmachine: (ha-631834) Calling .GetIP
	I0927 00:36:21.239154   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:21.239450   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:36:21.239489   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:21.239661   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:36:21.241898   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:21.242200   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:36:21.242217   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:21.242388   34022 provision.go:143] copyHostCerts
	I0927 00:36:21.242413   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem
	I0927 00:36:21.242453   34022 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem, removing ...
	I0927 00:36:21.242464   34022 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem
	I0927 00:36:21.242539   34022 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem (1078 bytes)
	I0927 00:36:21.242644   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem
	I0927 00:36:21.242668   34022 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem, removing ...
	I0927 00:36:21.242676   34022 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem
	I0927 00:36:21.242718   34022 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem (1123 bytes)
	I0927 00:36:21.242794   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem
	I0927 00:36:21.242826   34022 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem, removing ...
	I0927 00:36:21.242835   34022 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem
	I0927 00:36:21.242869   34022 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem (1675 bytes)
	I0927 00:36:21.242951   34022 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem org=jenkins.ha-631834 san=[127.0.0.1 192.168.39.4 ha-631834 localhost minikube]
	I0927 00:36:21.481677   34022 provision.go:177] copyRemoteCerts
	I0927 00:36:21.481751   34022 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 00:36:21.481779   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:36:21.484532   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:21.484907   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:36:21.484938   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:21.485150   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:36:21.485340   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:36:21.485466   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:36:21.485603   34022 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834/id_rsa Username:docker}
	I0927 00:36:21.574275   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0927 00:36:21.574368   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0927 00:36:21.598740   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0927 00:36:21.598797   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0927 00:36:21.622342   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0927 00:36:21.622427   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0927 00:36:21.646827   34022 provision.go:87] duration metric: took 410.33255ms to configureAuth
	I0927 00:36:21.646853   34022 buildroot.go:189] setting minikube options for container-runtime
	I0927 00:36:21.647098   34022 config.go:182] Loaded profile config "ha-631834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 00:36:21.647240   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:36:21.650164   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:21.650494   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:36:21.650526   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:21.650702   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:36:21.650908   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:36:21.651062   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:36:21.651244   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:36:21.651427   34022 main.go:141] libmachine: Using SSH client type: native
	I0927 00:36:21.651615   34022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0927 00:36:21.651635   34022 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 00:36:21.880863   34022 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 00:36:21.880887   34022 main.go:141] libmachine: Checking connection to Docker...
	I0927 00:36:21.880895   34022 main.go:141] libmachine: (ha-631834) Calling .GetURL
	I0927 00:36:21.882096   34022 main.go:141] libmachine: (ha-631834) DBG | Using libvirt version 6000000
	I0927 00:36:21.884523   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:21.884856   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:36:21.884898   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:21.885077   34022 main.go:141] libmachine: Docker is up and running!
	I0927 00:36:21.885091   34022 main.go:141] libmachine: Reticulating splines...
	I0927 00:36:21.885098   34022 client.go:171] duration metric: took 21.063880971s to LocalClient.Create
	I0927 00:36:21.885116   34022 start.go:167] duration metric: took 21.063936629s to libmachine.API.Create "ha-631834"
	I0927 00:36:21.885126   34022 start.go:293] postStartSetup for "ha-631834" (driver="kvm2")
	I0927 00:36:21.885144   34022 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 00:36:21.885159   34022 main.go:141] libmachine: (ha-631834) Calling .DriverName
	I0927 00:36:21.885420   34022 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 00:36:21.885488   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:36:21.887537   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:21.887790   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:36:21.887814   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:21.887928   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:36:21.888084   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:36:21.888274   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:36:21.888404   34022 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834/id_rsa Username:docker}
	I0927 00:36:21.975055   34022 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 00:36:21.979759   34022 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 00:36:21.979784   34022 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/addons for local assets ...
	I0927 00:36:21.979851   34022 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/files for local assets ...
	I0927 00:36:21.979941   34022 filesync.go:149] local asset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> 221382.pem in /etc/ssl/certs
	I0927 00:36:21.979953   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> /etc/ssl/certs/221382.pem
	I0927 00:36:21.980080   34022 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 00:36:21.990531   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /etc/ssl/certs/221382.pem (1708 bytes)
	I0927 00:36:22.014932   34022 start.go:296] duration metric: took 129.791559ms for postStartSetup
	I0927 00:36:22.015008   34022 main.go:141] libmachine: (ha-631834) Calling .GetConfigRaw
	I0927 00:36:22.015658   34022 main.go:141] libmachine: (ha-631834) Calling .GetIP
	I0927 00:36:22.018265   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:22.018611   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:36:22.018639   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:22.018899   34022 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/config.json ...
	I0927 00:36:22.019096   34022 start.go:128] duration metric: took 21.215390892s to createHost
	I0927 00:36:22.019120   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:36:22.021302   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:22.021602   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:36:22.021623   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:22.021782   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:36:22.021953   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:36:22.022148   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:36:22.022286   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:36:22.022416   34022 main.go:141] libmachine: Using SSH client type: native
	I0927 00:36:22.022581   34022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0927 00:36:22.022591   34022 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 00:36:22.136170   34022 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727397382.093993681
	
	I0927 00:36:22.136192   34022 fix.go:216] guest clock: 1727397382.093993681
	I0927 00:36:22.136202   34022 fix.go:229] Guest: 2024-09-27 00:36:22.093993681 +0000 UTC Remote: 2024-09-27 00:36:22.019107365 +0000 UTC m=+21.319607179 (delta=74.886316ms)
	I0927 00:36:22.136269   34022 fix.go:200] guest clock delta is within tolerance: 74.886316ms
	I0927 00:36:22.136280   34022 start.go:83] releasing machines lock for "ha-631834", held for 21.332646091s
	I0927 00:36:22.136304   34022 main.go:141] libmachine: (ha-631834) Calling .DriverName
	I0927 00:36:22.136563   34022 main.go:141] libmachine: (ha-631834) Calling .GetIP
	I0927 00:36:22.139383   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:22.139736   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:36:22.139759   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:22.139946   34022 main.go:141] libmachine: (ha-631834) Calling .DriverName
	I0927 00:36:22.140424   34022 main.go:141] libmachine: (ha-631834) Calling .DriverName
	I0927 00:36:22.140576   34022 main.go:141] libmachine: (ha-631834) Calling .DriverName
	I0927 00:36:22.140640   34022 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 00:36:22.140680   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:36:22.140773   34022 ssh_runner.go:195] Run: cat /version.json
	I0927 00:36:22.140798   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:36:22.143090   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:22.143433   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:36:22.143461   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:22.143480   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:22.143586   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:36:22.143765   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:36:22.143827   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:36:22.143847   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:22.143916   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:36:22.143997   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:36:22.144069   34022 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834/id_rsa Username:docker}
	I0927 00:36:22.144133   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:36:22.144262   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:36:22.144408   34022 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834/id_rsa Username:docker}
	I0927 00:36:22.243060   34022 ssh_runner.go:195] Run: systemctl --version
	I0927 00:36:22.259700   34022 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 00:36:22.415956   34022 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 00:36:22.422185   34022 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 00:36:22.422251   34022 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 00:36:22.438630   34022 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0927 00:36:22.438655   34022 start.go:495] detecting cgroup driver to use...
	I0927 00:36:22.438724   34022 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 00:36:22.456456   34022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 00:36:22.471488   34022 docker.go:217] disabling cri-docker service (if available) ...
	I0927 00:36:22.471543   34022 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 00:36:22.486032   34022 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 00:36:22.500571   34022 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 00:36:22.621816   34022 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 00:36:22.772846   34022 docker.go:233] disabling docker service ...
	I0927 00:36:22.772913   34022 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 00:36:22.787944   34022 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 00:36:22.801143   34022 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 00:36:22.939572   34022 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 00:36:23.057695   34022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 00:36:23.072091   34022 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 00:36:23.090934   34022 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0927 00:36:23.090997   34022 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:36:23.101768   34022 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 00:36:23.101839   34022 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:36:23.112607   34022 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:36:23.122981   34022 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:36:23.133563   34022 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 00:36:23.144443   34022 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:36:23.155241   34022 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:36:23.172932   34022 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:36:23.184071   34022 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 00:36:23.194018   34022 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0927 00:36:23.194075   34022 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0927 00:36:23.207498   34022 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 00:36:23.216852   34022 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:36:23.351326   34022 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0927 00:36:23.449204   34022 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 00:36:23.449280   34022 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 00:36:23.454200   34022 start.go:563] Will wait 60s for crictl version
	I0927 00:36:23.454262   34022 ssh_runner.go:195] Run: which crictl
	I0927 00:36:23.458028   34022 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 00:36:23.497638   34022 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0927 00:36:23.497711   34022 ssh_runner.go:195] Run: crio --version
	I0927 00:36:23.525615   34022 ssh_runner.go:195] Run: crio --version
	I0927 00:36:23.555870   34022 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0927 00:36:23.557109   34022 main.go:141] libmachine: (ha-631834) Calling .GetIP
	I0927 00:36:23.559689   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:23.559978   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:36:23.560009   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:23.560187   34022 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0927 00:36:23.564687   34022 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 00:36:23.577852   34022 kubeadm.go:883] updating cluster {Name:ha-631834 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-631834 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 00:36:23.577958   34022 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 00:36:23.578011   34022 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 00:36:23.610284   34022 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0927 00:36:23.610361   34022 ssh_runner.go:195] Run: which lz4
	I0927 00:36:23.614339   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0927 00:36:23.614430   34022 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0927 00:36:23.618714   34022 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0927 00:36:23.618740   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0927 00:36:24.972066   34022 crio.go:462] duration metric: took 1.357668477s to copy over tarball
	I0927 00:36:24.972137   34022 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0927 00:36:26.952440   34022 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.98028123s)
	I0927 00:36:26.952467   34022 crio.go:469] duration metric: took 1.9803713s to extract the tarball
	I0927 00:36:26.952477   34022 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0927 00:36:26.990046   34022 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 00:36:27.038137   34022 crio.go:514] all images are preloaded for cri-o runtime.
	I0927 00:36:27.038171   34022 cache_images.go:84] Images are preloaded, skipping loading
	I0927 00:36:27.038180   34022 kubeadm.go:934] updating node { 192.168.39.4 8443 v1.31.1 crio true true} ...
	I0927 00:36:27.038337   34022 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-631834 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-631834 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 00:36:27.038423   34022 ssh_runner.go:195] Run: crio config
	I0927 00:36:27.087406   34022 cni.go:84] Creating CNI manager for ""
	I0927 00:36:27.087427   34022 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0927 00:36:27.087436   34022 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 00:36:27.087455   34022 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.4 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-631834 NodeName:ha-631834 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0927 00:36:27.087584   34022 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.4
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-631834"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0927 00:36:27.087605   34022 kube-vip.go:115] generating kube-vip config ...
	I0927 00:36:27.087640   34022 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0927 00:36:27.104338   34022 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0927 00:36:27.104430   34022 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0927 00:36:27.104475   34022 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 00:36:27.114532   34022 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 00:36:27.114597   34022 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0927 00:36:27.125576   34022 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0927 00:36:27.143174   34022 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 00:36:27.159783   34022 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0927 00:36:27.177110   34022 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0927 00:36:27.193945   34022 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0927 00:36:27.197827   34022 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 00:36:27.210366   34022 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:36:27.336946   34022 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 00:36:27.354991   34022 certs.go:68] Setting up /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834 for IP: 192.168.39.4
	I0927 00:36:27.355012   34022 certs.go:194] generating shared ca certs ...
	I0927 00:36:27.355030   34022 certs.go:226] acquiring lock for ca certs: {Name:mkdfc5b7e93f77f5ae72cc653545624244421aa5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:36:27.355205   34022 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key
	I0927 00:36:27.355254   34022 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key
	I0927 00:36:27.355267   34022 certs.go:256] generating profile certs ...
	I0927 00:36:27.355348   34022 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/client.key
	I0927 00:36:27.355370   34022 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/client.crt with IP's: []
	I0927 00:36:27.682062   34022 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/client.crt ...
	I0927 00:36:27.682092   34022 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/client.crt: {Name:mk8f3bba10f88a791b79bb763eef9fe3f7d34390 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:36:27.682274   34022 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/client.key ...
	I0927 00:36:27.682289   34022 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/client.key: {Name:mk503d08fe6b48c31ea153960f6273dc934010ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:36:27.682389   34022 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key.1230d0d6
	I0927 00:36:27.682409   34022 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt.1230d0d6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.4 192.168.39.254]
	I0927 00:36:27.752883   34022 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt.1230d0d6 ...
	I0927 00:36:27.752911   34022 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt.1230d0d6: {Name:mka090c8b2557cb246619f729c0272d8e73ab4d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:36:27.753091   34022 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key.1230d0d6 ...
	I0927 00:36:27.753107   34022 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key.1230d0d6: {Name:mk32c435c509e1da50a9d54c9a27e1ed3da8b7fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:36:27.753219   34022 certs.go:381] copying /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt.1230d0d6 -> /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt
	I0927 00:36:27.753364   34022 certs.go:385] copying /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key.1230d0d6 -> /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key
	I0927 00:36:27.753446   34022 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.key
	I0927 00:36:27.753465   34022 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.crt with IP's: []
	I0927 00:36:27.888870   34022 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.crt ...
	I0927 00:36:27.888902   34022 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.crt: {Name:mk428f3282cdd0b71edcb5a948cacf34b7f69074 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:36:27.889093   34022 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.key ...
	I0927 00:36:27.889107   34022 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.key: {Name:mk092e7e928ba5ffe819bbe344c977ddad72812f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:36:27.889205   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0927 00:36:27.889223   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0927 00:36:27.889233   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0927 00:36:27.889246   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0927 00:36:27.889256   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0927 00:36:27.889266   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0927 00:36:27.889278   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0927 00:36:27.889288   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0927 00:36:27.889339   34022 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem (1338 bytes)
	W0927 00:36:27.889372   34022 certs.go:480] ignoring /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138_empty.pem, impossibly tiny 0 bytes
	I0927 00:36:27.889381   34022 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem (1671 bytes)
	I0927 00:36:27.889401   34022 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem (1078 bytes)
	I0927 00:36:27.889423   34022 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem (1123 bytes)
	I0927 00:36:27.889452   34022 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem (1675 bytes)
	I0927 00:36:27.889488   34022 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem (1708 bytes)
	I0927 00:36:27.889514   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> /usr/share/ca-certificates/221382.pem
	I0927 00:36:27.889528   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:36:27.889540   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem -> /usr/share/ca-certificates/22138.pem
	I0927 00:36:27.890073   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 00:36:27.915212   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0927 00:36:27.938433   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 00:36:27.961704   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0927 00:36:27.985172   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0927 00:36:28.008248   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0927 00:36:28.031157   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 00:36:28.053875   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0927 00:36:28.077746   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /usr/share/ca-certificates/221382.pem (1708 bytes)
	I0927 00:36:28.100790   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 00:36:28.126305   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem --> /usr/share/ca-certificates/22138.pem (1338 bytes)
	I0927 00:36:28.148839   34022 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 00:36:28.165086   34022 ssh_runner.go:195] Run: openssl version
	I0927 00:36:28.171319   34022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 00:36:28.183230   34022 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:36:28.187750   34022 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:16 /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:36:28.187803   34022 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:36:28.193649   34022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 00:36:28.204802   34022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22138.pem && ln -fs /usr/share/ca-certificates/22138.pem /etc/ssl/certs/22138.pem"
	I0927 00:36:28.215518   34022 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22138.pem
	I0927 00:36:28.219871   34022 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 00:32 /usr/share/ca-certificates/22138.pem
	I0927 00:36:28.219914   34022 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22138.pem
	I0927 00:36:28.225559   34022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22138.pem /etc/ssl/certs/51391683.0"
	I0927 00:36:28.236534   34022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221382.pem && ln -fs /usr/share/ca-certificates/221382.pem /etc/ssl/certs/221382.pem"
	I0927 00:36:28.247541   34022 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221382.pem
	I0927 00:36:28.251956   34022 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 00:32 /usr/share/ca-certificates/221382.pem
	I0927 00:36:28.252002   34022 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221382.pem
	I0927 00:36:28.257569   34022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221382.pem /etc/ssl/certs/3ec20f2e.0"
	I0927 00:36:28.268557   34022 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 00:36:28.272624   34022 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0927 00:36:28.272681   34022 kubeadm.go:392] StartCluster: {Name:ha-631834 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-631834 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 00:36:28.272765   34022 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0927 00:36:28.272803   34022 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 00:36:28.310788   34022 cri.go:89] found id: ""
	I0927 00:36:28.310863   34022 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0927 00:36:28.321240   34022 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 00:36:28.331038   34022 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 00:36:28.340878   34022 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 00:36:28.340897   34022 kubeadm.go:157] found existing configuration files:
	
	I0927 00:36:28.340934   34022 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 00:36:28.350170   34022 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 00:36:28.350236   34022 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 00:36:28.359911   34022 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 00:36:28.369100   34022 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 00:36:28.369152   34022 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 00:36:28.378846   34022 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 00:36:28.388020   34022 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 00:36:28.388070   34022 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 00:36:28.397520   34022 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 00:36:28.406575   34022 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 00:36:28.406618   34022 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
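
	The four grep/rm pairs above are minikube's stale-config sweep: any kubeconfig under /etc/kubernetes that does not mention the expected control-plane endpoint is removed so kubeadm can regenerate it. A minimal Go sketch of the same check-then-remove pattern follows; the helper is illustrative and is not minikube's actual implementation.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// cleanStaleConfig removes any kubeconfig that does not reference the
	// expected control-plane endpoint; grep exits non-zero when the string
	// is absent or the file is missing, which is exactly the case above.
	func cleanStaleConfig(endpoint string, files []string) {
		for _, f := range files {
			if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
				fmt.Printf("%q not found in %s - removing\n", endpoint, f)
				// Ignore the error: the file may simply not exist yet.
				_ = exec.Command("sudo", "rm", "-f", f).Run()
			}
		}
	}

	func main() {
		cleanStaleConfig("https://control-plane.minikube.internal:8443", []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		})
	}
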
	I0927 00:36:28.415973   34022 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0927 00:36:28.517602   34022 kubeadm.go:310] W0927 00:36:28.474729     830 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 00:36:28.518499   34022 kubeadm.go:310] W0927 00:36:28.475845     830 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 00:36:28.620411   34022 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0927 00:36:39.196766   34022 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0927 00:36:39.196817   34022 kubeadm.go:310] [preflight] Running pre-flight checks
	I0927 00:36:39.196897   34022 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0927 00:36:39.197042   34022 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0927 00:36:39.197146   34022 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0927 00:36:39.197242   34022 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0927 00:36:39.198695   34022 out.go:235]   - Generating certificates and keys ...
	I0927 00:36:39.198783   34022 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0927 00:36:39.198874   34022 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0927 00:36:39.198967   34022 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0927 00:36:39.199046   34022 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0927 00:36:39.199135   34022 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0927 00:36:39.199205   34022 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0927 00:36:39.199287   34022 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0927 00:36:39.199453   34022 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-631834 localhost] and IPs [192.168.39.4 127.0.0.1 ::1]
	I0927 00:36:39.199543   34022 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0927 00:36:39.199699   34022 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-631834 localhost] and IPs [192.168.39.4 127.0.0.1 ::1]
	I0927 00:36:39.199796   34022 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0927 00:36:39.199890   34022 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0927 00:36:39.199953   34022 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0927 00:36:39.200035   34022 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0927 00:36:39.200121   34022 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0927 00:36:39.200212   34022 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0927 00:36:39.200291   34022 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0927 00:36:39.200372   34022 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0927 00:36:39.200439   34022 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0927 00:36:39.200531   34022 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0927 00:36:39.200632   34022 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0927 00:36:39.202948   34022 out.go:235]   - Booting up control plane ...
	I0927 00:36:39.203043   34022 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0927 00:36:39.203122   34022 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0927 00:36:39.203192   34022 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0927 00:36:39.203290   34022 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0927 00:36:39.203381   34022 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0927 00:36:39.203419   34022 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0927 00:36:39.203571   34022 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0927 00:36:39.203689   34022 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0927 00:36:39.203745   34022 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.136312ms
	I0927 00:36:39.203833   34022 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0927 00:36:39.203916   34022 kubeadm.go:310] [api-check] The API server is healthy after 5.885001913s
	I0927 00:36:39.204050   34022 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0927 00:36:39.204208   34022 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0927 00:36:39.204298   34022 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0927 00:36:39.204479   34022 kubeadm.go:310] [mark-control-plane] Marking the node ha-631834 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0927 00:36:39.204542   34022 kubeadm.go:310] [bootstrap-token] Using token: a2inhk.us1mqrkt01ocu6ik
	I0927 00:36:39.205835   34022 out.go:235]   - Configuring RBAC rules ...
	I0927 00:36:39.205939   34022 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0927 00:36:39.206027   34022 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0927 00:36:39.206203   34022 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0927 00:36:39.206359   34022 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0927 00:36:39.206513   34022 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0927 00:36:39.206623   34022 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0927 00:36:39.206783   34022 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0927 00:36:39.206841   34022 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0927 00:36:39.206903   34022 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0927 00:36:39.206913   34022 kubeadm.go:310] 
	I0927 00:36:39.206990   34022 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0927 00:36:39.207004   34022 kubeadm.go:310] 
	I0927 00:36:39.207128   34022 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0927 00:36:39.207138   34022 kubeadm.go:310] 
	I0927 00:36:39.207188   34022 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0927 00:36:39.207263   34022 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0927 00:36:39.207324   34022 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0927 00:36:39.207333   34022 kubeadm.go:310] 
	I0927 00:36:39.207377   34022 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0927 00:36:39.207383   34022 kubeadm.go:310] 
	I0927 00:36:39.207423   34022 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0927 00:36:39.207429   34022 kubeadm.go:310] 
	I0927 00:36:39.207471   34022 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0927 00:36:39.207543   34022 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0927 00:36:39.207603   34022 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0927 00:36:39.207611   34022 kubeadm.go:310] 
	I0927 00:36:39.207679   34022 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0927 00:36:39.207747   34022 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0927 00:36:39.207752   34022 kubeadm.go:310] 
	I0927 00:36:39.207858   34022 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token a2inhk.us1mqrkt01ocu6ik \
	I0927 00:36:39.207978   34022 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e8f2b64245b2533f2f6907f851e4fd61c8df67374e05cb5be25be73a61920f8e \
	I0927 00:36:39.208009   34022 kubeadm.go:310] 	--control-plane 
	I0927 00:36:39.208024   34022 kubeadm.go:310] 
	I0927 00:36:39.208133   34022 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0927 00:36:39.208140   34022 kubeadm.go:310] 
	I0927 00:36:39.208217   34022 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token a2inhk.us1mqrkt01ocu6ik \
	I0927 00:36:39.208329   34022 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e8f2b64245b2533f2f6907f851e4fd61c8df67374e05cb5be25be73a61920f8e 
	I0927 00:36:39.208342   34022 cni.go:84] Creating CNI manager for ""
	I0927 00:36:39.208348   34022 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0927 00:36:39.209742   34022 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0927 00:36:39.210824   34022 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0927 00:36:39.216482   34022 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0927 00:36:39.216498   34022 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0927 00:36:39.238534   34022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
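
	With only one node detected, minikube settles on kindnet and applies the rendered CNI manifest with the cluster's own kubectl, as in the Run line above. A reduced Go sketch of that single invocation (paths and version taken from the log, otherwise illustrative, not minikube's code):

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// Apply the generated CNI manifest with the kubectl binary that
		// matches the cluster's Kubernetes version, via the node-local kubeconfig.
		cmd := exec.Command("sudo",
			"/var/lib/minikube/binaries/v1.31.1/kubectl", "apply",
			"--kubeconfig=/var/lib/minikube/kubeconfig",
			"-f", "/var/tmp/minikube/cni.yaml")
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("applying CNI manifest: %v\n%s", err, out)
		}
	}
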
	I0927 00:36:39.596628   34022 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0927 00:36:39.596683   34022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:36:39.596724   34022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-631834 minikube.k8s.io/updated_at=2024_09_27T00_36_39_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625 minikube.k8s.io/name=ha-631834 minikube.k8s.io/primary=true
	I0927 00:36:39.626142   34022 ops.go:34] apiserver oom_adj: -16
	I0927 00:36:39.790024   34022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:36:40.291013   34022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:36:40.790408   34022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:36:41.290433   34022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:36:41.790624   34022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:36:42.290399   34022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:36:42.790081   34022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:36:43.290106   34022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:36:43.383411   34022 kubeadm.go:1113] duration metric: took 3.786772854s to wait for elevateKubeSystemPrivileges
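
	The repeated `kubectl get sa default` runs above are a readiness poll: the cluster-admin binding for kube-system is only treated as applied once the default service account exists. A minimal polling sketch under that assumption (not minikube's code; the ~500ms cadence is approximated from the timestamps):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForDefaultSA polls until `kubectl get sa default` succeeds or the
	// timeout expires.
	func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
				"--kubeconfig="+kubeconfig)
			if cmd.Run() == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("default service account not ready after %s", timeout)
	}

	func main() {
		if err := waitForDefaultSA("/var/lib/minikube/binaries/v1.31.1/kubectl",
			"/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
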
	I0927 00:36:43.383449   34022 kubeadm.go:394] duration metric: took 15.110773171s to StartCluster
	I0927 00:36:43.383466   34022 settings.go:142] acquiring lock: {Name:mk5dca3ab86dd3a71947d9d84c3d32131258c6f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:36:43.383525   34022 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 00:36:43.384159   34022 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/kubeconfig: {Name:mke01ed683bdb96463571316956510763878395f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:36:43.384353   34022 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0927 00:36:43.384357   34022 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 00:36:43.384379   34022 start.go:241] waiting for startup goroutines ...
	I0927 00:36:43.384387   34022 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0927 00:36:43.384482   34022 addons.go:69] Setting storage-provisioner=true in profile "ha-631834"
	I0927 00:36:43.384503   34022 addons.go:234] Setting addon storage-provisioner=true in "ha-631834"
	I0927 00:36:43.384502   34022 addons.go:69] Setting default-storageclass=true in profile "ha-631834"
	I0927 00:36:43.384521   34022 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-631834"
	I0927 00:36:43.384535   34022 host.go:66] Checking if "ha-631834" exists ...
	I0927 00:36:43.384567   34022 config.go:182] Loaded profile config "ha-631834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 00:36:43.384839   34022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:36:43.384866   34022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:36:43.384944   34022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:36:43.384960   34022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:36:43.399817   34022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33427
	I0927 00:36:43.399897   34022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46299
	I0927 00:36:43.400293   34022 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:36:43.400363   34022 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:36:43.400865   34022 main.go:141] libmachine: Using API Version  1
	I0927 00:36:43.400886   34022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:36:43.401031   34022 main.go:141] libmachine: Using API Version  1
	I0927 00:36:43.401063   34022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:36:43.401250   34022 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:36:43.401432   34022 main.go:141] libmachine: (ha-631834) Calling .GetState
	I0927 00:36:43.401539   34022 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:36:43.402075   34022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:36:43.402108   34022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:36:43.403551   34022 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 00:36:43.403892   34022 kapi.go:59] client config for ha-631834: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/client.crt", KeyFile:"/home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/client.key", CAFile:"/home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f68560), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0927 00:36:43.404454   34022 cert_rotation.go:140] Starting client certificate rotation controller
	I0927 00:36:43.404728   34022 addons.go:234] Setting addon default-storageclass=true in "ha-631834"
	I0927 00:36:43.404772   34022 host.go:66] Checking if "ha-631834" exists ...
	I0927 00:36:43.405147   34022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:36:43.405179   34022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:36:43.417112   34022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44963
	I0927 00:36:43.417520   34022 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:36:43.418127   34022 main.go:141] libmachine: Using API Version  1
	I0927 00:36:43.418155   34022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:36:43.418477   34022 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:36:43.418681   34022 main.go:141] libmachine: (ha-631834) Calling .GetState
	I0927 00:36:43.419924   34022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46293
	I0927 00:36:43.420288   34022 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:36:43.420380   34022 main.go:141] libmachine: (ha-631834) Calling .DriverName
	I0927 00:36:43.420672   34022 main.go:141] libmachine: Using API Version  1
	I0927 00:36:43.420688   34022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:36:43.420969   34022 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:36:43.421504   34022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:36:43.421551   34022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:36:43.422256   34022 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 00:36:43.423360   34022 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 00:36:43.423375   34022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0927 00:36:43.423389   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:36:43.426316   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:43.426764   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:36:43.426778   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:43.426969   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:36:43.427109   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:36:43.427219   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:36:43.427355   34022 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834/id_rsa Username:docker}
	I0927 00:36:43.435962   34022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43071
	I0927 00:36:43.436362   34022 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:36:43.436730   34022 main.go:141] libmachine: Using API Version  1
	I0927 00:36:43.436746   34022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:36:43.437076   34022 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:36:43.437260   34022 main.go:141] libmachine: (ha-631834) Calling .GetState
	I0927 00:36:43.438594   34022 main.go:141] libmachine: (ha-631834) Calling .DriverName
	I0927 00:36:43.438749   34022 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0927 00:36:43.438763   34022 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0927 00:36:43.438784   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:36:43.441264   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:43.441750   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:36:43.441794   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:36:43.441824   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:36:43.441923   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:36:43.442101   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:36:43.442225   34022 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834/id_rsa Username:docker}
	I0927 00:36:43.549239   34022 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 00:36:43.572279   34022 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0927 00:36:43.662399   34022 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0927 00:36:44.397951   34022 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
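
	The coredns ConfigMap replace above (the sed pipeline) injects a hosts stanza so that host.minikube.internal resolves to the host-side gateway 192.168.39.1, and adds `log` ahead of `errors`. Reconstructed from that sed expression (not copied from the live cluster), the patched Corefile is expected to contain something along these lines:

	        log
	        errors
	        hosts {
	           192.168.39.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf
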
	I0927 00:36:44.398036   34022 main.go:141] libmachine: Making call to close driver server
	I0927 00:36:44.398060   34022 main.go:141] libmachine: (ha-631834) Calling .Close
	I0927 00:36:44.398143   34022 main.go:141] libmachine: Making call to close driver server
	I0927 00:36:44.398170   34022 main.go:141] libmachine: (ha-631834) Calling .Close
	I0927 00:36:44.398344   34022 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:36:44.398359   34022 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:36:44.398368   34022 main.go:141] libmachine: Making call to close driver server
	I0927 00:36:44.398374   34022 main.go:141] libmachine: (ha-631834) Calling .Close
	I0927 00:36:44.398388   34022 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:36:44.398402   34022 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:36:44.398409   34022 main.go:141] libmachine: Making call to close driver server
	I0927 00:36:44.398416   34022 main.go:141] libmachine: (ha-631834) Calling .Close
	I0927 00:36:44.398649   34022 main.go:141] libmachine: (ha-631834) DBG | Closing plugin on server side
	I0927 00:36:44.398666   34022 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:36:44.398675   34022 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:36:44.398678   34022 main.go:141] libmachine: (ha-631834) DBG | Closing plugin on server side
	I0927 00:36:44.398694   34022 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:36:44.398708   34022 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:36:44.398760   34022 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0927 00:36:44.398784   34022 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0927 00:36:44.398889   34022 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0927 00:36:44.398901   34022 round_trippers.go:469] Request Headers:
	I0927 00:36:44.398911   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:36:44.398920   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:36:44.417589   34022 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0927 00:36:44.418067   34022 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0927 00:36:44.418079   34022 round_trippers.go:469] Request Headers:
	I0927 00:36:44.418087   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:36:44.418091   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:36:44.418095   34022 round_trippers.go:473]     Content-Type: application/json
	I0927 00:36:44.420490   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:36:44.420636   34022 main.go:141] libmachine: Making call to close driver server
	I0927 00:36:44.420647   34022 main.go:141] libmachine: (ha-631834) Calling .Close
	I0927 00:36:44.420904   34022 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:36:44.420921   34022 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:36:44.422479   34022 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0927 00:36:44.423550   34022 addons.go:510] duration metric: took 1.039159873s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0927 00:36:44.423595   34022 start.go:246] waiting for cluster config update ...
	I0927 00:36:44.423613   34022 start.go:255] writing updated cluster config ...
	I0927 00:36:44.425272   34022 out.go:201] 
	I0927 00:36:44.426803   34022 config.go:182] Loaded profile config "ha-631834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 00:36:44.426894   34022 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/config.json ...
	I0927 00:36:44.428362   34022 out.go:177] * Starting "ha-631834-m02" control-plane node in "ha-631834" cluster
	I0927 00:36:44.429446   34022 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 00:36:44.429473   34022 cache.go:56] Caching tarball of preloaded images
	I0927 00:36:44.429577   34022 preload.go:172] Found /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0927 00:36:44.429598   34022 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0927 00:36:44.429705   34022 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/config.json ...
	I0927 00:36:44.429910   34022 start.go:360] acquireMachinesLock for ha-631834-m02: {Name:mkef866a3f2eb57063758ef06140cca3a2e40b18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 00:36:44.429964   34022 start.go:364] duration metric: took 31.862µs to acquireMachinesLock for "ha-631834-m02"
	I0927 00:36:44.429988   34022 start.go:93] Provisioning new machine with config: &{Name:ha-631834 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-631834 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 00:36:44.430077   34022 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0927 00:36:44.431533   34022 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0927 00:36:44.431627   34022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:36:44.431667   34022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:36:44.446949   34022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37663
	I0927 00:36:44.447487   34022 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:36:44.447999   34022 main.go:141] libmachine: Using API Version  1
	I0927 00:36:44.448029   34022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:36:44.448325   34022 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:36:44.448539   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetMachineName
	I0927 00:36:44.448658   34022 main.go:141] libmachine: (ha-631834-m02) Calling .DriverName
	I0927 00:36:44.448816   34022 start.go:159] libmachine.API.Create for "ha-631834" (driver="kvm2")
	I0927 00:36:44.448842   34022 client.go:168] LocalClient.Create starting
	I0927 00:36:44.448876   34022 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem
	I0927 00:36:44.448913   34022 main.go:141] libmachine: Decoding PEM data...
	I0927 00:36:44.448937   34022 main.go:141] libmachine: Parsing certificate...
	I0927 00:36:44.449007   34022 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem
	I0927 00:36:44.449034   34022 main.go:141] libmachine: Decoding PEM data...
	I0927 00:36:44.449049   34022 main.go:141] libmachine: Parsing certificate...
	I0927 00:36:44.449076   34022 main.go:141] libmachine: Running pre-create checks...
	I0927 00:36:44.449088   34022 main.go:141] libmachine: (ha-631834-m02) Calling .PreCreateCheck
	I0927 00:36:44.449246   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetConfigRaw
	I0927 00:36:44.449638   34022 main.go:141] libmachine: Creating machine...
	I0927 00:36:44.449653   34022 main.go:141] libmachine: (ha-631834-m02) Calling .Create
	I0927 00:36:44.449792   34022 main.go:141] libmachine: (ha-631834-m02) Creating KVM machine...
	I0927 00:36:44.451021   34022 main.go:141] libmachine: (ha-631834-m02) DBG | found existing default KVM network
	I0927 00:36:44.451178   34022 main.go:141] libmachine: (ha-631834-m02) DBG | found existing private KVM network mk-ha-631834
	I0927 00:36:44.451353   34022 main.go:141] libmachine: (ha-631834-m02) Setting up store path in /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m02 ...
	I0927 00:36:44.451372   34022 main.go:141] libmachine: (ha-631834-m02) Building disk image from file:///home/jenkins/minikube-integration/19711-14935/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0927 00:36:44.451445   34022 main.go:141] libmachine: (ha-631834-m02) DBG | I0927 00:36:44.451350   34386 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19711-14935/.minikube
	I0927 00:36:44.451537   34022 main.go:141] libmachine: (ha-631834-m02) Downloading /home/jenkins/minikube-integration/19711-14935/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19711-14935/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0927 00:36:44.687379   34022 main.go:141] libmachine: (ha-631834-m02) DBG | I0927 00:36:44.687222   34386 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m02/id_rsa...
	I0927 00:36:44.751062   34022 main.go:141] libmachine: (ha-631834-m02) DBG | I0927 00:36:44.750967   34386 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m02/ha-631834-m02.rawdisk...
	I0927 00:36:44.751087   34022 main.go:141] libmachine: (ha-631834-m02) DBG | Writing magic tar header
	I0927 00:36:44.751100   34022 main.go:141] libmachine: (ha-631834-m02) DBG | Writing SSH key tar header
	I0927 00:36:44.751178   34022 main.go:141] libmachine: (ha-631834-m02) DBG | I0927 00:36:44.751110   34386 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m02 ...
	I0927 00:36:44.751293   34022 main.go:141] libmachine: (ha-631834-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m02
	I0927 00:36:44.751324   34022 main.go:141] libmachine: (ha-631834-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935/.minikube/machines
	I0927 00:36:44.751344   34022 main.go:141] libmachine: (ha-631834-m02) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m02 (perms=drwx------)
	I0927 00:36:44.751365   34022 main.go:141] libmachine: (ha-631834-m02) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935/.minikube/machines (perms=drwxr-xr-x)
	I0927 00:36:44.751378   34022 main.go:141] libmachine: (ha-631834-m02) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935/.minikube (perms=drwxr-xr-x)
	I0927 00:36:44.751392   34022 main.go:141] libmachine: (ha-631834-m02) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935 (perms=drwxrwxr-x)
	I0927 00:36:44.751400   34022 main.go:141] libmachine: (ha-631834-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0927 00:36:44.751408   34022 main.go:141] libmachine: (ha-631834-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935/.minikube
	I0927 00:36:44.751425   34022 main.go:141] libmachine: (ha-631834-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935
	I0927 00:36:44.751434   34022 main.go:141] libmachine: (ha-631834-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0927 00:36:44.751446   34022 main.go:141] libmachine: (ha-631834-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0927 00:36:44.751456   34022 main.go:141] libmachine: (ha-631834-m02) DBG | Checking permissions on dir: /home/jenkins
	I0927 00:36:44.751467   34022 main.go:141] libmachine: (ha-631834-m02) DBG | Checking permissions on dir: /home
	I0927 00:36:44.751479   34022 main.go:141] libmachine: (ha-631834-m02) DBG | Skipping /home - not owner
	I0927 00:36:44.751504   34022 main.go:141] libmachine: (ha-631834-m02) Creating domain...
	I0927 00:36:44.752461   34022 main.go:141] libmachine: (ha-631834-m02) define libvirt domain using xml: 
	I0927 00:36:44.752482   34022 main.go:141] libmachine: (ha-631834-m02) <domain type='kvm'>
	I0927 00:36:44.752492   34022 main.go:141] libmachine: (ha-631834-m02)   <name>ha-631834-m02</name>
	I0927 00:36:44.752511   34022 main.go:141] libmachine: (ha-631834-m02)   <memory unit='MiB'>2200</memory>
	I0927 00:36:44.752523   34022 main.go:141] libmachine: (ha-631834-m02)   <vcpu>2</vcpu>
	I0927 00:36:44.752535   34022 main.go:141] libmachine: (ha-631834-m02)   <features>
	I0927 00:36:44.752546   34022 main.go:141] libmachine: (ha-631834-m02)     <acpi/>
	I0927 00:36:44.752559   34022 main.go:141] libmachine: (ha-631834-m02)     <apic/>
	I0927 00:36:44.752569   34022 main.go:141] libmachine: (ha-631834-m02)     <pae/>
	I0927 00:36:44.752577   34022 main.go:141] libmachine: (ha-631834-m02)     
	I0927 00:36:44.752583   34022 main.go:141] libmachine: (ha-631834-m02)   </features>
	I0927 00:36:44.752589   34022 main.go:141] libmachine: (ha-631834-m02)   <cpu mode='host-passthrough'>
	I0927 00:36:44.752594   34022 main.go:141] libmachine: (ha-631834-m02)   
	I0927 00:36:44.752600   34022 main.go:141] libmachine: (ha-631834-m02)   </cpu>
	I0927 00:36:44.752605   34022 main.go:141] libmachine: (ha-631834-m02)   <os>
	I0927 00:36:44.752611   34022 main.go:141] libmachine: (ha-631834-m02)     <type>hvm</type>
	I0927 00:36:44.752616   34022 main.go:141] libmachine: (ha-631834-m02)     <boot dev='cdrom'/>
	I0927 00:36:44.752620   34022 main.go:141] libmachine: (ha-631834-m02)     <boot dev='hd'/>
	I0927 00:36:44.752628   34022 main.go:141] libmachine: (ha-631834-m02)     <bootmenu enable='no'/>
	I0927 00:36:44.752632   34022 main.go:141] libmachine: (ha-631834-m02)   </os>
	I0927 00:36:44.752654   34022 main.go:141] libmachine: (ha-631834-m02)   <devices>
	I0927 00:36:44.752673   34022 main.go:141] libmachine: (ha-631834-m02)     <disk type='file' device='cdrom'>
	I0927 00:36:44.752682   34022 main.go:141] libmachine: (ha-631834-m02)       <source file='/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m02/boot2docker.iso'/>
	I0927 00:36:44.752691   34022 main.go:141] libmachine: (ha-631834-m02)       <target dev='hdc' bus='scsi'/>
	I0927 00:36:44.752724   34022 main.go:141] libmachine: (ha-631834-m02)       <readonly/>
	I0927 00:36:44.752759   34022 main.go:141] libmachine: (ha-631834-m02)     </disk>
	I0927 00:36:44.752770   34022 main.go:141] libmachine: (ha-631834-m02)     <disk type='file' device='disk'>
	I0927 00:36:44.752786   34022 main.go:141] libmachine: (ha-631834-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0927 00:36:44.752803   34022 main.go:141] libmachine: (ha-631834-m02)       <source file='/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m02/ha-631834-m02.rawdisk'/>
	I0927 00:36:44.752813   34022 main.go:141] libmachine: (ha-631834-m02)       <target dev='hda' bus='virtio'/>
	I0927 00:36:44.752824   34022 main.go:141] libmachine: (ha-631834-m02)     </disk>
	I0927 00:36:44.752834   34022 main.go:141] libmachine: (ha-631834-m02)     <interface type='network'>
	I0927 00:36:44.752846   34022 main.go:141] libmachine: (ha-631834-m02)       <source network='mk-ha-631834'/>
	I0927 00:36:44.752860   34022 main.go:141] libmachine: (ha-631834-m02)       <model type='virtio'/>
	I0927 00:36:44.752870   34022 main.go:141] libmachine: (ha-631834-m02)     </interface>
	I0927 00:36:44.752876   34022 main.go:141] libmachine: (ha-631834-m02)     <interface type='network'>
	I0927 00:36:44.752888   34022 main.go:141] libmachine: (ha-631834-m02)       <source network='default'/>
	I0927 00:36:44.752898   34022 main.go:141] libmachine: (ha-631834-m02)       <model type='virtio'/>
	I0927 00:36:44.752907   34022 main.go:141] libmachine: (ha-631834-m02)     </interface>
	I0927 00:36:44.752917   34022 main.go:141] libmachine: (ha-631834-m02)     <serial type='pty'>
	I0927 00:36:44.752929   34022 main.go:141] libmachine: (ha-631834-m02)       <target port='0'/>
	I0927 00:36:44.752939   34022 main.go:141] libmachine: (ha-631834-m02)     </serial>
	I0927 00:36:44.752949   34022 main.go:141] libmachine: (ha-631834-m02)     <console type='pty'>
	I0927 00:36:44.752960   34022 main.go:141] libmachine: (ha-631834-m02)       <target type='serial' port='0'/>
	I0927 00:36:44.752971   34022 main.go:141] libmachine: (ha-631834-m02)     </console>
	I0927 00:36:44.752984   34022 main.go:141] libmachine: (ha-631834-m02)     <rng model='virtio'>
	I0927 00:36:44.753001   34022 main.go:141] libmachine: (ha-631834-m02)       <backend model='random'>/dev/random</backend>
	I0927 00:36:44.753018   34022 main.go:141] libmachine: (ha-631834-m02)     </rng>
	I0927 00:36:44.753035   34022 main.go:141] libmachine: (ha-631834-m02)     
	I0927 00:36:44.753047   34022 main.go:141] libmachine: (ha-631834-m02)     
	I0927 00:36:44.753059   34022 main.go:141] libmachine: (ha-631834-m02)   </devices>
	I0927 00:36:44.753068   34022 main.go:141] libmachine: (ha-631834-m02) </domain>
	I0927 00:36:44.753080   34022 main.go:141] libmachine: (ha-631834-m02) 
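
	The XML above is what gets handed to libvirt for the second control-plane VM: a boot ISO attached as a SCSI CD-ROM, the raw machine disk on virtio, one NIC on the private mk-ha-631834 network and one on the default NAT network, a serial console, and a virtio RNG. A hedged sketch that defines and starts an equivalent domain by shelling out to virsh follows; the XML file path is hypothetical and this is not the driver's actual code path:

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// Define the domain from an XML file like the one logged above,
		// then start it, over the qemu:///system connection from the config.
		for _, args := range [][]string{
			{"-c", "qemu:///system", "define", "/tmp/ha-631834-m02.xml"}, // hypothetical path
			{"-c", "qemu:///system", "start", "ha-631834-m02"},
		} {
			if out, err := exec.Command("virsh", args...).CombinedOutput(); err != nil {
				log.Fatalf("virsh %v: %v\n%s", args, err, out)
			}
		}
	}
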
	I0927 00:36:44.759470   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:b2:c3:d6 in network default
	I0927 00:36:44.759943   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:36:44.759962   34022 main.go:141] libmachine: (ha-631834-m02) Ensuring networks are active...
	I0927 00:36:44.760578   34022 main.go:141] libmachine: (ha-631834-m02) Ensuring network default is active
	I0927 00:36:44.760849   34022 main.go:141] libmachine: (ha-631834-m02) Ensuring network mk-ha-631834 is active
	I0927 00:36:44.761213   34022 main.go:141] libmachine: (ha-631834-m02) Getting domain xml...
	I0927 00:36:44.761860   34022 main.go:141] libmachine: (ha-631834-m02) Creating domain...
	I0927 00:36:45.965093   34022 main.go:141] libmachine: (ha-631834-m02) Waiting to get IP...
	I0927 00:36:45.965811   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:36:45.966210   34022 main.go:141] libmachine: (ha-631834-m02) DBG | unable to find current IP address of domain ha-631834-m02 in network mk-ha-631834
	I0927 00:36:45.966250   34022 main.go:141] libmachine: (ha-631834-m02) DBG | I0927 00:36:45.966193   34386 retry.go:31] will retry after 219.366954ms: waiting for machine to come up
	I0927 00:36:46.187549   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:36:46.188001   34022 main.go:141] libmachine: (ha-631834-m02) DBG | unable to find current IP address of domain ha-631834-m02 in network mk-ha-631834
	I0927 00:36:46.188031   34022 main.go:141] libmachine: (ha-631834-m02) DBG | I0927 00:36:46.187959   34386 retry.go:31] will retry after 344.351684ms: waiting for machine to come up
	I0927 00:36:46.533384   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:36:46.533893   34022 main.go:141] libmachine: (ha-631834-m02) DBG | unable to find current IP address of domain ha-631834-m02 in network mk-ha-631834
	I0927 00:36:46.533918   34022 main.go:141] libmachine: (ha-631834-m02) DBG | I0927 00:36:46.533845   34386 retry.go:31] will retry after 436.44682ms: waiting for machine to come up
	I0927 00:36:46.971366   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:36:46.971845   34022 main.go:141] libmachine: (ha-631834-m02) DBG | unable to find current IP address of domain ha-631834-m02 in network mk-ha-631834
	I0927 00:36:46.971881   34022 main.go:141] libmachine: (ha-631834-m02) DBG | I0927 00:36:46.971792   34386 retry.go:31] will retry after 518.722723ms: waiting for machine to come up
	I0927 00:36:47.492370   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:36:47.492814   34022 main.go:141] libmachine: (ha-631834-m02) DBG | unable to find current IP address of domain ha-631834-m02 in network mk-ha-631834
	I0927 00:36:47.492836   34022 main.go:141] libmachine: (ha-631834-m02) DBG | I0927 00:36:47.492761   34386 retry.go:31] will retry after 458.476026ms: waiting for machine to come up
	I0927 00:36:47.952367   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:36:47.952947   34022 main.go:141] libmachine: (ha-631834-m02) DBG | unable to find current IP address of domain ha-631834-m02 in network mk-ha-631834
	I0927 00:36:47.952968   34022 main.go:141] libmachine: (ha-631834-m02) DBG | I0927 00:36:47.952905   34386 retry.go:31] will retry after 873.835695ms: waiting for machine to come up
	I0927 00:36:48.827782   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:36:48.828192   34022 main.go:141] libmachine: (ha-631834-m02) DBG | unable to find current IP address of domain ha-631834-m02 in network mk-ha-631834
	I0927 00:36:48.828221   34022 main.go:141] libmachine: (ha-631834-m02) DBG | I0927 00:36:48.828139   34386 retry.go:31] will retry after 1.00855597s: waiting for machine to come up
	I0927 00:36:49.838599   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:36:49.838959   34022 main.go:141] libmachine: (ha-631834-m02) DBG | unable to find current IP address of domain ha-631834-m02 in network mk-ha-631834
	I0927 00:36:49.838982   34022 main.go:141] libmachine: (ha-631834-m02) DBG | I0927 00:36:49.838927   34386 retry.go:31] will retry after 1.38923332s: waiting for machine to come up
	I0927 00:36:51.230578   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:36:51.231036   34022 main.go:141] libmachine: (ha-631834-m02) DBG | unable to find current IP address of domain ha-631834-m02 in network mk-ha-631834
	I0927 00:36:51.231061   34022 main.go:141] libmachine: (ha-631834-m02) DBG | I0927 00:36:51.231006   34386 retry.go:31] will retry after 1.140830763s: waiting for machine to come up
	I0927 00:36:52.373231   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:36:52.373666   34022 main.go:141] libmachine: (ha-631834-m02) DBG | unable to find current IP address of domain ha-631834-m02 in network mk-ha-631834
	I0927 00:36:52.373692   34022 main.go:141] libmachine: (ha-631834-m02) DBG | I0927 00:36:52.373621   34386 retry.go:31] will retry after 2.064225387s: waiting for machine to come up
	I0927 00:36:54.440421   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:36:54.440877   34022 main.go:141] libmachine: (ha-631834-m02) DBG | unable to find current IP address of domain ha-631834-m02 in network mk-ha-631834
	I0927 00:36:54.440901   34022 main.go:141] libmachine: (ha-631834-m02) DBG | I0927 00:36:54.440817   34386 retry.go:31] will retry after 2.699234582s: waiting for machine to come up
	I0927 00:36:57.141531   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:36:57.141923   34022 main.go:141] libmachine: (ha-631834-m02) DBG | unable to find current IP address of domain ha-631834-m02 in network mk-ha-631834
	I0927 00:36:57.141944   34022 main.go:141] libmachine: (ha-631834-m02) DBG | I0927 00:36:57.141879   34386 retry.go:31] will retry after 2.876736711s: waiting for machine to come up
	I0927 00:37:00.019979   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:00.020397   34022 main.go:141] libmachine: (ha-631834-m02) DBG | unable to find current IP address of domain ha-631834-m02 in network mk-ha-631834
	I0927 00:37:00.020415   34022 main.go:141] libmachine: (ha-631834-m02) DBG | I0927 00:37:00.020358   34386 retry.go:31] will retry after 2.739686124s: waiting for machine to come up
	I0927 00:37:02.761974   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:02.762423   34022 main.go:141] libmachine: (ha-631834-m02) DBG | unable to find current IP address of domain ha-631834-m02 in network mk-ha-631834
	I0927 00:37:02.762478   34022 main.go:141] libmachine: (ha-631834-m02) DBG | I0927 00:37:02.762348   34386 retry.go:31] will retry after 3.780270458s: waiting for machine to come up
	I0927 00:37:06.544970   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:06.545486   34022 main.go:141] libmachine: (ha-631834-m02) Found IP for machine: 192.168.39.184
	I0927 00:37:06.545515   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has current primary IP address 192.168.39.184 and MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:06.545524   34022 main.go:141] libmachine: (ha-631834-m02) Reserving static IP address...
	I0927 00:37:06.545889   34022 main.go:141] libmachine: (ha-631834-m02) DBG | unable to find host DHCP lease matching {name: "ha-631834-m02", mac: "52:54:00:f9:6f:a2", ip: "192.168.39.184"} in network mk-ha-631834
	I0927 00:37:06.617028   34022 main.go:141] libmachine: (ha-631834-m02) DBG | Getting to WaitForSSH function...
	I0927 00:37:06.617058   34022 main.go:141] libmachine: (ha-631834-m02) Reserved static IP address: 192.168.39.184
	I0927 00:37:06.617127   34022 main.go:141] libmachine: (ha-631834-m02) Waiting for SSH to be available...
	I0927 00:37:06.619198   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:06.619549   34022 main.go:141] libmachine: (ha-631834-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:f9:6f:a2", ip: ""} in network mk-ha-631834
	I0927 00:37:06.619573   34022 main.go:141] libmachine: (ha-631834-m02) DBG | unable to find defined IP address of network mk-ha-631834 interface with MAC address 52:54:00:f9:6f:a2
	I0927 00:37:06.619711   34022 main.go:141] libmachine: (ha-631834-m02) DBG | Using SSH client type: external
	I0927 00:37:06.619738   34022 main.go:141] libmachine: (ha-631834-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m02/id_rsa (-rw-------)
	I0927 00:37:06.619767   34022 main.go:141] libmachine: (ha-631834-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 00:37:06.619784   34022 main.go:141] libmachine: (ha-631834-m02) DBG | About to run SSH command:
	I0927 00:37:06.619798   34022 main.go:141] libmachine: (ha-631834-m02) DBG | exit 0
	I0927 00:37:06.623260   34022 main.go:141] libmachine: (ha-631834-m02) DBG | SSH cmd err, output: exit status 255: 
	I0927 00:37:06.623273   34022 main.go:141] libmachine: (ha-631834-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0927 00:37:06.623281   34022 main.go:141] libmachine: (ha-631834-m02) DBG | command : exit 0
	I0927 00:37:06.623290   34022 main.go:141] libmachine: (ha-631834-m02) DBG | err     : exit status 255
	I0927 00:37:06.623297   34022 main.go:141] libmachine: (ha-631834-m02) DBG | output  : 
	I0927 00:37:09.623967   34022 main.go:141] libmachine: (ha-631834-m02) DBG | Getting to WaitForSSH function...
	I0927 00:37:09.626758   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:09.627251   34022 main.go:141] libmachine: (ha-631834-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:6f:a2", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:59 +0000 UTC Type:0 Mac:52:54:00:f9:6f:a2 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-631834-m02 Clientid:01:52:54:00:f9:6f:a2}
	I0927 00:37:09.627285   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:09.627413   34022 main.go:141] libmachine: (ha-631834-m02) DBG | Using SSH client type: external
	I0927 00:37:09.627435   34022 main.go:141] libmachine: (ha-631834-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m02/id_rsa (-rw-------)
	I0927 00:37:09.627472   34022 main.go:141] libmachine: (ha-631834-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.184 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 00:37:09.627484   34022 main.go:141] libmachine: (ha-631834-m02) DBG | About to run SSH command:
	I0927 00:37:09.627495   34022 main.go:141] libmachine: (ha-631834-m02) DBG | exit 0
	I0927 00:37:09.751226   34022 main.go:141] libmachine: (ha-631834-m02) DBG | SSH cmd err, output: <nil>: 
	I0927 00:37:09.751504   34022 main.go:141] libmachine: (ha-631834-m02) KVM machine creation complete!
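	The machine-creation phase above waits for the guest to obtain a DHCP lease by retrying with a growing, jittered delay (the retry.go:31 lines). A minimal sketch of that wait-and-retry pattern, using only the Go standard library and a hypothetical probe function standing in for the libvirt lease lookup:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // waitForIP polls probe() until it returns an address or the deadline
    // passes, sleeping a jittered, growing interval between attempts.
    func waitForIP(probe func() (string, error), timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	backoff := 500 * time.Millisecond
    	for attempt := 1; time.Now().Before(deadline); attempt++ {
    		if ip, err := probe(); err == nil {
    			return ip, nil
    		}
    		sleep := backoff + time.Duration(rand.Int63n(int64(backoff))) // add jitter
    		fmt.Printf("attempt %d: will retry after %v: waiting for machine to come up\n", attempt, sleep)
    		time.Sleep(sleep)
    		if backoff < 4*time.Second {
    			backoff *= 2
    		}
    	}
    	return "", errors.New("timed out waiting for an IP address")
    }

    func main() {
    	// Stand-in probe: a real caller would query the libvirt DHCP leases.
    	calls := 0
    	ip, err := waitForIP(func() (string, error) {
    		calls++
    		if calls < 3 {
    			return "", errors.New("unable to find current IP address")
    		}
    		return "192.168.39.184", nil
    	}, time.Minute)
    	fmt.Println(ip, err)
    }

	This is only an illustration of the backoff shape visible in the log, not minikube's actual retry implementation.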
	I0927 00:37:09.751804   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetConfigRaw
	I0927 00:37:09.752329   34022 main.go:141] libmachine: (ha-631834-m02) Calling .DriverName
	I0927 00:37:09.752502   34022 main.go:141] libmachine: (ha-631834-m02) Calling .DriverName
	I0927 00:37:09.752645   34022 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0927 00:37:09.752657   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetState
	I0927 00:37:09.753685   34022 main.go:141] libmachine: Detecting operating system of created instance...
	I0927 00:37:09.753695   34022 main.go:141] libmachine: Waiting for SSH to be available...
	I0927 00:37:09.753702   34022 main.go:141] libmachine: Getting to WaitForSSH function...
	I0927 00:37:09.753707   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHHostname
	I0927 00:37:09.755579   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:09.755850   34022 main.go:141] libmachine: (ha-631834-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:6f:a2", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:59 +0000 UTC Type:0 Mac:52:54:00:f9:6f:a2 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-631834-m02 Clientid:01:52:54:00:f9:6f:a2}
	I0927 00:37:09.755881   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:09.755998   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHPort
	I0927 00:37:09.756145   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHKeyPath
	I0927 00:37:09.756274   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHKeyPath
	I0927 00:37:09.756413   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHUsername
	I0927 00:37:09.756589   34022 main.go:141] libmachine: Using SSH client type: native
	I0927 00:37:09.756825   34022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I0927 00:37:09.756839   34022 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0927 00:37:09.854682   34022 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 00:37:09.854708   34022 main.go:141] libmachine: Detecting the provisioner...
	I0927 00:37:09.854718   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHHostname
	I0927 00:37:09.857509   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:09.857847   34022 main.go:141] libmachine: (ha-631834-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:6f:a2", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:59 +0000 UTC Type:0 Mac:52:54:00:f9:6f:a2 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-631834-m02 Clientid:01:52:54:00:f9:6f:a2}
	I0927 00:37:09.857874   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:09.857977   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHPort
	I0927 00:37:09.858161   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHKeyPath
	I0927 00:37:09.858335   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHKeyPath
	I0927 00:37:09.858490   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHUsername
	I0927 00:37:09.858645   34022 main.go:141] libmachine: Using SSH client type: native
	I0927 00:37:09.858795   34022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I0927 00:37:09.858806   34022 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0927 00:37:09.960162   34022 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0927 00:37:09.960233   34022 main.go:141] libmachine: found compatible host: buildroot
	I0927 00:37:09.960242   34022 main.go:141] libmachine: Provisioning with buildroot...
	I0927 00:37:09.960250   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetMachineName
	I0927 00:37:09.960507   34022 buildroot.go:166] provisioning hostname "ha-631834-m02"
	I0927 00:37:09.960550   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetMachineName
	I0927 00:37:09.960744   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHHostname
	I0927 00:37:09.963548   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:09.963921   34022 main.go:141] libmachine: (ha-631834-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:6f:a2", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:59 +0000 UTC Type:0 Mac:52:54:00:f9:6f:a2 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-631834-m02 Clientid:01:52:54:00:f9:6f:a2}
	I0927 00:37:09.963943   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:09.964085   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHPort
	I0927 00:37:09.964256   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHKeyPath
	I0927 00:37:09.964403   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHKeyPath
	I0927 00:37:09.964542   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHUsername
	I0927 00:37:09.964683   34022 main.go:141] libmachine: Using SSH client type: native
	I0927 00:37:09.964874   34022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I0927 00:37:09.964887   34022 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-631834-m02 && echo "ha-631834-m02" | sudo tee /etc/hostname
	I0927 00:37:10.077518   34022 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-631834-m02
	
	I0927 00:37:10.077550   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHHostname
	I0927 00:37:10.080178   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.080540   34022 main.go:141] libmachine: (ha-631834-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:6f:a2", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:59 +0000 UTC Type:0 Mac:52:54:00:f9:6f:a2 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-631834-m02 Clientid:01:52:54:00:f9:6f:a2}
	I0927 00:37:10.080573   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.080695   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHPort
	I0927 00:37:10.080848   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHKeyPath
	I0927 00:37:10.080953   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHKeyPath
	I0927 00:37:10.081049   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHUsername
	I0927 00:37:10.081209   34022 main.go:141] libmachine: Using SSH client type: native
	I0927 00:37:10.081417   34022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I0927 00:37:10.081444   34022 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-631834-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-631834-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-631834-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 00:37:10.188307   34022 main.go:141] libmachine: SSH cmd err, output: <nil>: 
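	The SSH one-liner above makes the /etc/hosts edit idempotent: it only rewrites or appends the 127.0.1.1 entry when the new hostname is not already present. A minimal in-process sketch of the same logic (path and hostname taken from this run; not the shell command minikube actually sends):

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    	"strings"
    )

    // ensureHostsEntry mirrors the grep/sed logic: skip if the hostname is
    // already mapped, otherwise replace the 127.0.1.1 line or append one.
    func ensureHostsEntry(path, hostname string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(hostname) + `$`).Match(data) {
    		return nil // already present
    	}
    	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
    	var out string
    	if loopback.Match(data) {
    		out = loopback.ReplaceAllString(string(data), "127.0.1.1 "+hostname)
    	} else {
    		out = strings.TrimRight(string(data), "\n") + "\n127.0.1.1 " + hostname + "\n"
    	}
    	return os.WriteFile(path, []byte(out), 0644)
    }

    func main() {
    	if err := ensureHostsEntry("/etc/hosts", "ha-631834-m02"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }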
	I0927 00:37:10.188350   34022 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19711-14935/.minikube CaCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19711-14935/.minikube}
	I0927 00:37:10.188371   34022 buildroot.go:174] setting up certificates
	I0927 00:37:10.188381   34022 provision.go:84] configureAuth start
	I0927 00:37:10.188395   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetMachineName
	I0927 00:37:10.188651   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetIP
	I0927 00:37:10.191227   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.191601   34022 main.go:141] libmachine: (ha-631834-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:6f:a2", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:59 +0000 UTC Type:0 Mac:52:54:00:f9:6f:a2 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-631834-m02 Clientid:01:52:54:00:f9:6f:a2}
	I0927 00:37:10.191637   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.191838   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHHostname
	I0927 00:37:10.194575   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.195339   34022 main.go:141] libmachine: (ha-631834-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:6f:a2", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:59 +0000 UTC Type:0 Mac:52:54:00:f9:6f:a2 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-631834-m02 Clientid:01:52:54:00:f9:6f:a2}
	I0927 00:37:10.195365   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.195518   34022 provision.go:143] copyHostCerts
	I0927 00:37:10.195546   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem
	I0927 00:37:10.195575   34022 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem, removing ...
	I0927 00:37:10.195584   34022 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem
	I0927 00:37:10.195648   34022 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem (1123 bytes)
	I0927 00:37:10.195719   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem
	I0927 00:37:10.195736   34022 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem, removing ...
	I0927 00:37:10.195740   34022 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem
	I0927 00:37:10.195763   34022 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem (1675 bytes)
	I0927 00:37:10.195803   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem
	I0927 00:37:10.195819   34022 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem, removing ...
	I0927 00:37:10.195824   34022 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem
	I0927 00:37:10.195844   34022 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem (1078 bytes)
	I0927 00:37:10.195907   34022 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem org=jenkins.ha-631834-m02 san=[127.0.0.1 192.168.39.184 ha-631834-m02 localhost minikube]
	I0927 00:37:10.245727   34022 provision.go:177] copyRemoteCerts
	I0927 00:37:10.245778   34022 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 00:37:10.245798   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHHostname
	I0927 00:37:10.248269   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.248597   34022 main.go:141] libmachine: (ha-631834-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:6f:a2", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:59 +0000 UTC Type:0 Mac:52:54:00:f9:6f:a2 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-631834-m02 Clientid:01:52:54:00:f9:6f:a2}
	I0927 00:37:10.248623   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.248784   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHPort
	I0927 00:37:10.248960   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHKeyPath
	I0927 00:37:10.249076   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHUsername
	I0927 00:37:10.249199   34022 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m02/id_rsa Username:docker}
	I0927 00:37:10.331285   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0927 00:37:10.331361   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0927 00:37:10.357400   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0927 00:37:10.357470   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0927 00:37:10.381613   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0927 00:37:10.381680   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0927 00:37:10.404641   34022 provision.go:87] duration metric: took 216.247596ms to configureAuth
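	The configureAuth step above generates a server certificate signed by the machine CA with the SANs listed in the log (127.0.0.1, 192.168.39.184, ha-631834-m02, localhost, minikube). A minimal standard-library sketch of producing such a cert; file names, serial number, and validity period are illustrative and error handling is trimmed:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Load the CA cert and key (ca.pem / ca-key.pem in the log).
    	caPEM, _ := os.ReadFile("ca.pem")
    	caKeyPEM, _ := os.ReadFile("ca-key.pem")
    	caBlock, _ := pem.Decode(caPEM)
    	caCert, _ := x509.ParseCertificate(caBlock.Bytes)
    	keyBlock, _ := pem.Decode(caKeyPEM)
    	caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)

    	// Fresh key pair for the server certificate.
    	key, _ := rsa.GenerateKey(rand.Reader, 2048)

    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-631834-m02"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs from the provision.go:117 line above.
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.184")},
    		DNSNames:    []string{"ha-631834-m02", "localhost", "minikube"},
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    	_ = os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
    	_ = os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0600)
    }

	The resulting server.pem/server-key.pem are what the copyRemoteCerts step then pushes to /etc/docker on the guest.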
	I0927 00:37:10.404666   34022 buildroot.go:189] setting minikube options for container-runtime
	I0927 00:37:10.404826   34022 config.go:182] Loaded profile config "ha-631834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 00:37:10.404895   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHHostname
	I0927 00:37:10.407260   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.407584   34022 main.go:141] libmachine: (ha-631834-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:6f:a2", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:59 +0000 UTC Type:0 Mac:52:54:00:f9:6f:a2 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-631834-m02 Clientid:01:52:54:00:f9:6f:a2}
	I0927 00:37:10.407606   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.407813   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHPort
	I0927 00:37:10.407999   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHKeyPath
	I0927 00:37:10.408158   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHKeyPath
	I0927 00:37:10.408283   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHUsername
	I0927 00:37:10.408456   34022 main.go:141] libmachine: Using SSH client type: native
	I0927 00:37:10.408663   34022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I0927 00:37:10.408684   34022 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 00:37:10.641711   34022 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 00:37:10.641732   34022 main.go:141] libmachine: Checking connection to Docker...
	I0927 00:37:10.641740   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetURL
	I0927 00:37:10.642949   34022 main.go:141] libmachine: (ha-631834-m02) DBG | Using libvirt version 6000000
	I0927 00:37:10.645171   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.645559   34022 main.go:141] libmachine: (ha-631834-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:6f:a2", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:59 +0000 UTC Type:0 Mac:52:54:00:f9:6f:a2 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-631834-m02 Clientid:01:52:54:00:f9:6f:a2}
	I0927 00:37:10.645584   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.645775   34022 main.go:141] libmachine: Docker is up and running!
	I0927 00:37:10.645789   34022 main.go:141] libmachine: Reticulating splines...
	I0927 00:37:10.645796   34022 client.go:171] duration metric: took 26.196945191s to LocalClient.Create
	I0927 00:37:10.645815   34022 start.go:167] duration metric: took 26.197002465s to libmachine.API.Create "ha-631834"
	I0927 00:37:10.645824   34022 start.go:293] postStartSetup for "ha-631834-m02" (driver="kvm2")
	I0927 00:37:10.645834   34022 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 00:37:10.645850   34022 main.go:141] libmachine: (ha-631834-m02) Calling .DriverName
	I0927 00:37:10.646066   34022 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 00:37:10.646101   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHHostname
	I0927 00:37:10.648185   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.648596   34022 main.go:141] libmachine: (ha-631834-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:6f:a2", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:59 +0000 UTC Type:0 Mac:52:54:00:f9:6f:a2 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-631834-m02 Clientid:01:52:54:00:f9:6f:a2}
	I0927 00:37:10.648623   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.648794   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHPort
	I0927 00:37:10.648930   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHKeyPath
	I0927 00:37:10.649065   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHUsername
	I0927 00:37:10.649169   34022 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m02/id_rsa Username:docker}
	I0927 00:37:10.730488   34022 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 00:37:10.734725   34022 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 00:37:10.734745   34022 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/addons for local assets ...
	I0927 00:37:10.734795   34022 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/files for local assets ...
	I0927 00:37:10.734865   34022 filesync.go:149] local asset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> 221382.pem in /etc/ssl/certs
	I0927 00:37:10.734874   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> /etc/ssl/certs/221382.pem
	I0927 00:37:10.734948   34022 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 00:37:10.746203   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /etc/ssl/certs/221382.pem (1708 bytes)
	I0927 00:37:10.770218   34022 start.go:296] duration metric: took 124.382795ms for postStartSetup
	I0927 00:37:10.770261   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetConfigRaw
	I0927 00:37:10.770829   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetIP
	I0927 00:37:10.773277   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.773651   34022 main.go:141] libmachine: (ha-631834-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:6f:a2", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:59 +0000 UTC Type:0 Mac:52:54:00:f9:6f:a2 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-631834-m02 Clientid:01:52:54:00:f9:6f:a2}
	I0927 00:37:10.773680   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.773884   34022 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/config.json ...
	I0927 00:37:10.774086   34022 start.go:128] duration metric: took 26.343999443s to createHost
	I0927 00:37:10.774110   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHHostname
	I0927 00:37:10.775957   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.776258   34022 main.go:141] libmachine: (ha-631834-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:6f:a2", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:59 +0000 UTC Type:0 Mac:52:54:00:f9:6f:a2 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-631834-m02 Clientid:01:52:54:00:f9:6f:a2}
	I0927 00:37:10.776284   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.776391   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHPort
	I0927 00:37:10.776554   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHKeyPath
	I0927 00:37:10.776671   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHKeyPath
	I0927 00:37:10.776790   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHUsername
	I0927 00:37:10.776904   34022 main.go:141] libmachine: Using SSH client type: native
	I0927 00:37:10.777080   34022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I0927 00:37:10.777095   34022 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 00:37:10.876642   34022 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727397430.856709211
	
	I0927 00:37:10.876668   34022 fix.go:216] guest clock: 1727397430.856709211
	I0927 00:37:10.876675   34022 fix.go:229] Guest: 2024-09-27 00:37:10.856709211 +0000 UTC Remote: 2024-09-27 00:37:10.774098108 +0000 UTC m=+70.074597703 (delta=82.611103ms)
	I0927 00:37:10.876688   34022 fix.go:200] guest clock delta is within tolerance: 82.611103ms
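	The guest-clock check above runs `date +%s.%N` on the guest and compares it with the host clock, accepting a small delta. A minimal sketch of that comparison, parsing the exact value from the log; the one-second tolerance is illustrative, not minikube's configured threshold:

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // guestClockDelta parses "seconds.nanoseconds" output from `date +%s.%N`
    // (nine fractional digits) and returns guest time minus local time.
    func guestClockDelta(guestOut string, local time.Time) (time.Duration, error) {
    	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return 0, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		nsec, _ = strconv.ParseInt(parts[1], 10, 64)
    	}
    	return time.Unix(sec, nsec).Sub(local), nil
    }

    func main() {
    	// Value taken from the log line above.
    	delta, _ := guestClockDelta("1727397430.856709211", time.Now())
    	const tolerance = time.Second // illustrative threshold
    	if delta < tolerance && delta > -tolerance {
    		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
    	} else {
    		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
    	}
    }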
	I0927 00:37:10.876693   34022 start.go:83] releasing machines lock for "ha-631834-m02", held for 26.446717018s
	I0927 00:37:10.876711   34022 main.go:141] libmachine: (ha-631834-m02) Calling .DriverName
	I0927 00:37:10.876935   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetIP
	I0927 00:37:10.879789   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.880133   34022 main.go:141] libmachine: (ha-631834-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:6f:a2", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:59 +0000 UTC Type:0 Mac:52:54:00:f9:6f:a2 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-631834-m02 Clientid:01:52:54:00:f9:6f:a2}
	I0927 00:37:10.880157   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.882420   34022 out.go:177] * Found network options:
	I0927 00:37:10.883855   34022 out.go:177]   - NO_PROXY=192.168.39.4
	W0927 00:37:10.885148   34022 proxy.go:119] fail to check proxy env: Error ip not in block
	I0927 00:37:10.885174   34022 main.go:141] libmachine: (ha-631834-m02) Calling .DriverName
	I0927 00:37:10.885627   34022 main.go:141] libmachine: (ha-631834-m02) Calling .DriverName
	I0927 00:37:10.885793   34022 main.go:141] libmachine: (ha-631834-m02) Calling .DriverName
	I0927 00:37:10.885874   34022 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 00:37:10.885914   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHHostname
	W0927 00:37:10.885995   34022 proxy.go:119] fail to check proxy env: Error ip not in block
	I0927 00:37:10.886064   34022 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 00:37:10.886085   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHHostname
	I0927 00:37:10.888528   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.888647   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.888905   34022 main.go:141] libmachine: (ha-631834-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:6f:a2", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:59 +0000 UTC Type:0 Mac:52:54:00:f9:6f:a2 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-631834-m02 Clientid:01:52:54:00:f9:6f:a2}
	I0927 00:37:10.888931   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.888961   34022 main.go:141] libmachine: (ha-631834-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:6f:a2", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:59 +0000 UTC Type:0 Mac:52:54:00:f9:6f:a2 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-631834-m02 Clientid:01:52:54:00:f9:6f:a2}
	I0927 00:37:10.888976   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:10.889083   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHPort
	I0927 00:37:10.889235   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHPort
	I0927 00:37:10.889256   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHKeyPath
	I0927 00:37:10.889362   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHKeyPath
	I0927 00:37:10.889427   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHUsername
	I0927 00:37:10.889490   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetSSHUsername
	I0927 00:37:10.889571   34022 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m02/id_rsa Username:docker}
	I0927 00:37:10.889594   34022 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m02/id_rsa Username:docker}
	I0927 00:37:11.136304   34022 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 00:37:11.142079   34022 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 00:37:11.142147   34022 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 00:37:11.158578   34022 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0927 00:37:11.158606   34022 start.go:495] detecting cgroup driver to use...
	I0927 00:37:11.158676   34022 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 00:37:11.174779   34022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 00:37:11.188680   34022 docker.go:217] disabling cri-docker service (if available) ...
	I0927 00:37:11.188733   34022 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 00:37:11.201858   34022 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 00:37:11.214760   34022 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 00:37:11.327367   34022 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 00:37:11.490795   34022 docker.go:233] disabling docker service ...
	I0927 00:37:11.490853   34022 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 00:37:11.505571   34022 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 00:37:11.518373   34022 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 00:37:11.629152   34022 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 00:37:11.740768   34022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 00:37:11.754787   34022 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 00:37:11.773038   34022 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0927 00:37:11.773110   34022 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:37:11.783470   34022 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 00:37:11.783521   34022 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:37:11.793940   34022 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:37:11.804039   34022 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:37:11.814196   34022 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 00:37:11.824547   34022 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:37:11.834569   34022 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:37:11.850743   34022 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:37:11.861436   34022 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 00:37:11.870606   34022 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0927 00:37:11.870649   34022 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0927 00:37:11.885756   34022 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 00:37:11.897194   34022 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:37:12.020445   34022 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0927 00:37:12.107882   34022 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 00:37:12.107937   34022 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 00:37:12.113014   34022 start.go:563] Will wait 60s for crictl version
	I0927 00:37:12.113056   34022 ssh_runner.go:195] Run: which crictl
	I0927 00:37:12.116696   34022 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 00:37:12.156627   34022 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0927 00:37:12.156716   34022 ssh_runner.go:195] Run: crio --version
	I0927 00:37:12.184776   34022 ssh_runner.go:195] Run: crio --version
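	The cri-o setup above rewrites the drop-in at /etc/crio/crio.conf.d/02-crio.conf with sed to pin the pause image and switch the cgroup manager to cgroupfs. A minimal sketch of the same two edits done in-process (the other sed edits from the log, such as conmon_cgroup and the sysctl whitelist, are omitted for brevity):

    package main

    import (
    	"os"
    	"regexp"
    )

    func main() {
    	const path = "/etc/crio/crio.conf.d/02-crio.conf"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		panic(err)
    	}
    	conf := string(data)
    	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
    	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
    	if err := os.WriteFile(path, []byte(conf), 0644); err != nil {
    		panic(err)
    	}
    	// A `systemctl restart crio`, as in the log, is still required afterwards.
    }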
	I0927 00:37:12.214285   34022 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0927 00:37:12.215642   34022 out.go:177]   - env NO_PROXY=192.168.39.4
	I0927 00:37:12.216858   34022 main.go:141] libmachine: (ha-631834-m02) Calling .GetIP
	I0927 00:37:12.219534   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:12.219884   34022 main.go:141] libmachine: (ha-631834-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:6f:a2", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:59 +0000 UTC Type:0 Mac:52:54:00:f9:6f:a2 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-631834-m02 Clientid:01:52:54:00:f9:6f:a2}
	I0927 00:37:12.219910   34022 main.go:141] libmachine: (ha-631834-m02) DBG | domain ha-631834-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:f9:6f:a2 in network mk-ha-631834
	I0927 00:37:12.220066   34022 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0927 00:37:12.224146   34022 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 00:37:12.236530   34022 mustload.go:65] Loading cluster: ha-631834
	I0927 00:37:12.236743   34022 config.go:182] Loaded profile config "ha-631834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 00:37:12.236988   34022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:37:12.237013   34022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:37:12.251316   34022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45319
	I0927 00:37:12.251795   34022 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:37:12.252245   34022 main.go:141] libmachine: Using API Version  1
	I0927 00:37:12.252265   34022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:37:12.252568   34022 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:37:12.252747   34022 main.go:141] libmachine: (ha-631834) Calling .GetState
	I0927 00:37:12.254195   34022 host.go:66] Checking if "ha-631834" exists ...
	I0927 00:37:12.254474   34022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:37:12.254499   34022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:37:12.268676   34022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45197
	I0927 00:37:12.269168   34022 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:37:12.269589   34022 main.go:141] libmachine: Using API Version  1
	I0927 00:37:12.269610   34022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:37:12.269894   34022 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:37:12.270042   34022 main.go:141] libmachine: (ha-631834) Calling .DriverName
	I0927 00:37:12.270195   34022 certs.go:68] Setting up /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834 for IP: 192.168.39.184
	I0927 00:37:12.270209   34022 certs.go:194] generating shared ca certs ...
	I0927 00:37:12.270227   34022 certs.go:226] acquiring lock for ca certs: {Name:mkdfc5b7e93f77f5ae72cc653545624244421aa5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:37:12.270367   34022 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key
	I0927 00:37:12.270424   34022 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key
	I0927 00:37:12.270437   34022 certs.go:256] generating profile certs ...
	I0927 00:37:12.270535   34022 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/client.key
	I0927 00:37:12.270563   34022 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key.2787ab8f
	I0927 00:37:12.270582   34022 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt.2787ab8f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.4 192.168.39.184 192.168.39.254]
	I0927 00:37:12.380622   34022 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt.2787ab8f ...
	I0927 00:37:12.380651   34022 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt.2787ab8f: {Name:mkabbfeb402264582fd8eeda0c7047e582633f2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:37:12.380811   34022 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key.2787ab8f ...
	I0927 00:37:12.380824   34022 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key.2787ab8f: {Name:mkfa43c1b86669a0c9318db325b03ab1136e574e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:37:12.380891   34022 certs.go:381] copying /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt.2787ab8f -> /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt
	I0927 00:37:12.381022   34022 certs.go:385] copying /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key.2787ab8f -> /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key
	I0927 00:37:12.381184   34022 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.key
	I0927 00:37:12.381199   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0927 00:37:12.381212   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0927 00:37:12.381225   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0927 00:37:12.381237   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0927 00:37:12.381255   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0927 00:37:12.381268   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0927 00:37:12.381280   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0927 00:37:12.381292   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0927 00:37:12.381342   34022 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem (1338 bytes)
	W0927 00:37:12.381368   34022 certs.go:480] ignoring /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138_empty.pem, impossibly tiny 0 bytes
	I0927 00:37:12.381377   34022 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem (1671 bytes)
	I0927 00:37:12.381397   34022 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem (1078 bytes)
	I0927 00:37:12.381429   34022 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem (1123 bytes)
	I0927 00:37:12.381449   34022 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem (1675 bytes)
	I0927 00:37:12.381485   34022 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem (1708 bytes)
	I0927 00:37:12.381525   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:37:12.381538   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem -> /usr/share/ca-certificates/22138.pem
	I0927 00:37:12.381559   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> /usr/share/ca-certificates/221382.pem
	I0927 00:37:12.381589   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:37:12.384914   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:37:12.385337   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:37:12.385363   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:37:12.385520   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:37:12.385695   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:37:12.385849   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:37:12.385970   34022 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834/id_rsa Username:docker}
	I0927 00:37:12.463600   34022 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0927 00:37:12.469050   34022 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0927 00:37:12.480901   34022 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0927 00:37:12.485274   34022 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0927 00:37:12.495588   34022 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0927 00:37:12.499742   34022 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0927 00:37:12.511921   34022 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0927 00:37:12.515813   34022 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0927 00:37:12.525592   34022 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0927 00:37:12.529819   34022 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0927 00:37:12.540367   34022 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0927 00:37:12.544115   34022 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0927 00:37:12.559955   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 00:37:12.585679   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0927 00:37:12.608898   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 00:37:12.631565   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0927 00:37:12.654159   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0927 00:37:12.677901   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0927 00:37:12.701023   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 00:37:12.723805   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0927 00:37:12.746428   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 00:37:12.770481   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem --> /usr/share/ca-certificates/22138.pem (1338 bytes)
	I0927 00:37:12.794514   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /usr/share/ca-certificates/221382.pem (1708 bytes)
	I0927 00:37:12.817381   34022 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0927 00:37:12.833441   34022 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0927 00:37:12.849543   34022 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0927 00:37:12.866255   34022 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0927 00:37:12.882530   34022 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0927 00:37:12.898460   34022 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0927 00:37:12.914236   34022 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0927 00:37:12.929892   34022 ssh_runner.go:195] Run: openssl version
	I0927 00:37:12.935443   34022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 00:37:12.945938   34022 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:37:12.950422   34022 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:16 /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:37:12.950473   34022 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:37:12.956276   34022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 00:37:12.967207   34022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22138.pem && ln -fs /usr/share/ca-certificates/22138.pem /etc/ssl/certs/22138.pem"
	I0927 00:37:12.978472   34022 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22138.pem
	I0927 00:37:12.982807   34022 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 00:32 /usr/share/ca-certificates/22138.pem
	I0927 00:37:12.982859   34022 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22138.pem
	I0927 00:37:12.988439   34022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22138.pem /etc/ssl/certs/51391683.0"
	I0927 00:37:12.999183   34022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221382.pem && ln -fs /usr/share/ca-certificates/221382.pem /etc/ssl/certs/221382.pem"
	I0927 00:37:13.010278   34022 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221382.pem
	I0927 00:37:13.014700   34022 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 00:32 /usr/share/ca-certificates/221382.pem
	I0927 00:37:13.014750   34022 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221382.pem
	I0927 00:37:13.020522   34022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221382.pem /etc/ssl/certs/3ec20f2e.0"
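The openssl/ln runs above are how the CA certificates copied to the node are made visible to TLS clients: each PEM's OpenSSL subject hash becomes a <hash>.0 symlink under /etc/ssl/certs (e.g. b5213941.0 for minikubeCA.pem). Below is a minimal local Go sketch of that step, not minikube's actual code; the certificate path is taken from the log and it shells out to openssl just as the remote commands do.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCA mirrors, locally, the step the log performs over SSH: ask openssl
    // for the subject hash of a CA certificate and create the
    // /etc/ssl/certs/<hash>.0 symlink that TLS libraries look up.
    func installCA(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", pemPath, err)
        }
        link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
        _ = os.Remove(link) // replace a stale link if one exists (ln -fs behaviour)
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }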
	I0927 00:37:13.032168   34022 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 00:37:13.036252   34022 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0927 00:37:13.036310   34022 kubeadm.go:934] updating node {m02 192.168.39.184 8443 v1.31.1 crio true true} ...
	I0927 00:37:13.036391   34022 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-631834-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.184
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-631834 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 00:37:13.036418   34022 kube-vip.go:115] generating kube-vip config ...
	I0927 00:37:13.036450   34022 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0927 00:37:13.053748   34022 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0927 00:37:13.053813   34022 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
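The manifest above is the kube-vip static pod that the log later copies to /etc/kubernetes/manifests/kube-vip.yaml, so the 192.168.39.254 VIP and control-plane load-balancing follow whichever node holds the plndr-cp-lock lease. A small Go sketch of rendering such a manifest from the two per-cluster values; the template below is a reduced stand-in for illustration only, not minikube's real template.

    package main

    import (
        "os"
        "text/template"
    )

    // podTemplate keeps only the fields that vary per cluster in the manifest
    // above: the HA virtual IP and the API server port.
    const podTemplate = `apiVersion: v1
    kind: Pod
    metadata:
      name: kube-vip
      namespace: kube-system
    spec:
      containers:
      - name: kube-vip
        image: ghcr.io/kube-vip/kube-vip:v0.8.0
        args: ["manager"]
        env:
        - name: address
          value: "{{.VIP}}"
        - name: port
          value: "{{.Port}}"
        - name: cp_enable
          value: "true"
        - name: lb_enable
          value: "true"
      hostNetwork: true
    `

    func main() {
        t := template.Must(template.New("kube-vip").Parse(podTemplate))
        if err := t.Execute(os.Stdout, map[string]string{"VIP": "192.168.39.254", "Port": "8443"}); err != nil {
            panic(err)
        }
    }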
	I0927 00:37:13.053866   34022 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 00:37:13.063832   34022 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0927 00:37:13.063894   34022 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0927 00:37:13.073341   34022 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0927 00:37:13.073367   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0927 00:37:13.073425   34022 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19711-14935/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0927 00:37:13.073468   34022 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19711-14935/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0927 00:37:13.073430   34022 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0927 00:37:13.077722   34022 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0927 00:37:13.077745   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0927 00:37:14.061924   34022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 00:37:14.080321   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0927 00:37:14.080396   34022 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0927 00:37:14.084997   34022 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0927 00:37:14.085031   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0927 00:37:14.368132   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0927 00:37:14.368235   34022 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0927 00:37:14.380382   34022 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0927 00:37:14.380424   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
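Each binary above is fetched from dl.k8s.io and verified against the .sha256 file published next to it before being copied into /var/lib/minikube/binaries. A self-contained Go sketch of that download-and-verify step, assuming the digest file's first whitespace-separated field is the hex SHA-256 (kubectl used as the example; the real code streams large binaries to disk rather than holding them in memory):

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
        "strings"
    )

    // fetch downloads a URL into memory; fine for a sketch.
    func fetch(url string) ([]byte, error) {
        resp, err := http.Get(url)
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
        }
        return io.ReadAll(resp.Body)
    }

    func main() {
        const binURL = "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl"

        bin, err := fetch(binURL)
        if err != nil {
            panic(err)
        }
        sum, err := fetch(binURL + ".sha256") // published digest next to the binary
        if err != nil {
            panic(err)
        }
        want := strings.Fields(string(sum))[0]
        h := sha256.Sum256(bin)
        if got := hex.EncodeToString(h[:]); got != want {
            fmt.Fprintf(os.Stderr, "checksum mismatch: got %s, want %s\n", got, want)
            os.Exit(1)
        }
        fmt.Println("kubectl checksum verified")
    }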
	I0927 00:37:14.663959   34022 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0927 00:37:14.673981   34022 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0927 00:37:14.690872   34022 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 00:37:14.708362   34022 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0927 00:37:14.725181   34022 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0927 00:37:14.729204   34022 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
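The bash one-liner above rewrites /etc/hosts so control-plane.minikube.internal always resolves to the HA VIP. The same edit expressed as a small Go sketch; it assumes permission to rewrite /etc/hosts in place rather than staging a temp file and sudo cp as the log does.

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const hostsPath = "/etc/hosts" // needs root, like the sudo cp in the log
        const entry = "192.168.39.254\tcontrol-plane.minikube.internal"

        data, err := os.ReadFile(hostsPath)
        if err != nil {
            panic(err)
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            if strings.HasSuffix(strings.TrimSpace(line), "control-plane.minikube.internal") {
                continue // drop any stale mapping for the control-plane name
            }
            if line != "" {
                kept = append(kept, line)
            }
        }
        kept = append(kept, entry)
        if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            panic(err)
        }
        fmt.Println("pinned", entry)
    }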
	I0927 00:37:14.741822   34022 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:37:14.857927   34022 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 00:37:14.875145   34022 host.go:66] Checking if "ha-631834" exists ...
	I0927 00:37:14.875529   34022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:37:14.875570   34022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:37:14.890402   34022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46081
	I0927 00:37:14.890838   34022 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:37:14.891373   34022 main.go:141] libmachine: Using API Version  1
	I0927 00:37:14.891394   34022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:37:14.891729   34022 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:37:14.891911   34022 main.go:141] libmachine: (ha-631834) Calling .DriverName
	I0927 00:37:14.892044   34022 start.go:317] joinCluster: &{Name:ha-631834 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cluster
Name:ha-631834 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.184 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 00:37:14.892172   34022 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0927 00:37:14.892194   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:37:14.894983   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:37:14.895381   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:37:14.895416   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:37:14.895524   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:37:14.895647   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:37:14.895747   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:37:14.895865   34022 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834/id_rsa Username:docker}
	I0927 00:37:15.056944   34022 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.184 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 00:37:15.056990   34022 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mlxu9z.6ua5c3whncxwr8h0 --discovery-token-ca-cert-hash sha256:e8f2b64245b2533f2f6907f851e4fd61c8df67374e05cb5be25be73a61920f8e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-631834-m02 --control-plane --apiserver-advertise-address=192.168.39.184 --apiserver-bind-port=8443"
	I0927 00:37:37.826684   34022 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mlxu9z.6ua5c3whncxwr8h0 --discovery-token-ca-cert-hash sha256:e8f2b64245b2533f2f6907f851e4fd61c8df67374e05cb5be25be73a61920f8e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-631834-m02 --control-plane --apiserver-advertise-address=192.168.39.184 --apiserver-bind-port=8443": (22.769665782s)
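The join command above carries a --discovery-token-ca-cert-hash, which is the SHA-256 of the cluster CA certificate's Subject Public Key Info, hex-encoded with a "sha256:" prefix. A short Go sketch that derives the same value from the ca.crt copied to the node earlier in this log:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/hex"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        // path assumed from the certificate copies earlier in this log
        data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("ca.crt contains no PEM block")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm pins the SHA-256 of the CA's Subject Public Key Info
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%s\n", hex.EncodeToString(sum[:]))
    }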
	I0927 00:37:37.826721   34022 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0927 00:37:38.375369   34022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-631834-m02 minikube.k8s.io/updated_at=2024_09_27T00_37_38_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625 minikube.k8s.io/name=ha-631834 minikube.k8s.io/primary=false
	I0927 00:37:38.497089   34022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-631834-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0927 00:37:38.638589   34022 start.go:319] duration metric: took 23.746539088s to joinCluster
	I0927 00:37:38.638713   34022 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.184 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 00:37:38.638954   34022 config.go:182] Loaded profile config "ha-631834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 00:37:38.640009   34022 out.go:177] * Verifying Kubernetes components...
	I0927 00:37:38.641589   34022 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:37:38.888956   34022 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 00:37:38.910605   34022 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 00:37:38.910930   34022 kapi.go:59] client config for ha-631834: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/client.crt", KeyFile:"/home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/client.key", CAFile:"/home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f68560), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0927 00:37:38.911023   34022 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.4:8443
	I0927 00:37:38.911358   34022 node_ready.go:35] waiting up to 6m0s for node "ha-631834-m02" to be "Ready" ...
	I0927 00:37:38.911504   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:38.911518   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:38.911531   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:38.911540   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:38.925042   34022 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0927 00:37:39.412340   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:39.412364   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:39.412376   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:39.412382   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:39.415703   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:39.912301   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:39.912323   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:39.912335   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:39.912340   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:39.917016   34022 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 00:37:40.411994   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:40.412018   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:40.412030   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:40.412034   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:40.415279   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:40.912076   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:40.912093   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:40.912101   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:40.912106   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:40.915241   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:40.915920   34022 node_ready.go:53] node "ha-631834-m02" has status "Ready":"False"
	I0927 00:37:41.412300   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:41.412322   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:41.412334   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:41.412339   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:41.416161   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:41.912228   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:41.912252   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:41.912262   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:41.912271   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:41.915784   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:42.411624   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:42.411645   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:42.411652   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:42.411658   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:42.415042   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:42.911632   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:42.911657   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:42.911669   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:42.911673   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:42.915043   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:43.412494   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:43.412511   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:43.412518   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:43.412521   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:43.416206   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:43.417057   34022 node_ready.go:53] node "ha-631834-m02" has status "Ready":"False"
	I0927 00:37:43.912499   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:43.912518   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:43.912526   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:43.912531   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:43.916624   34022 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 00:37:44.412544   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:44.412562   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:44.412569   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:44.412573   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:44.416020   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:44.912402   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:44.912423   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:44.912433   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:44.912437   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:45.001404   34022 round_trippers.go:574] Response Status: 200 OK in 88 milliseconds
	I0927 00:37:45.412218   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:45.412235   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:45.412242   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:45.412246   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:45.415114   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:37:45.911872   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:45.911892   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:45.911899   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:45.911903   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:45.915117   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:45.915711   34022 node_ready.go:53] node "ha-631834-m02" has status "Ready":"False"
	I0927 00:37:46.412115   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:46.412135   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:46.412142   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:46.412147   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:46.415578   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:46.911759   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:46.911782   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:46.911789   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:46.911795   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:46.914976   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:47.411947   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:47.411969   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:47.411976   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:47.411981   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:47.415038   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:47.911959   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:47.911982   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:47.911994   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:47.911999   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:47.915156   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:47.915877   34022 node_ready.go:53] node "ha-631834-m02" has status "Ready":"False"
	I0927 00:37:48.411937   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:48.411963   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:48.411972   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:48.411983   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:48.414801   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:37:48.911631   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:48.911652   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:48.911660   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:48.911665   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:48.914737   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:49.411675   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:49.411696   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:49.411704   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:49.411709   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:49.414697   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:37:49.911696   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:49.911715   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:49.911725   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:49.911731   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:49.914887   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:50.411769   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:50.411790   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:50.411797   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:50.411800   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:50.415046   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:50.415915   34022 node_ready.go:53] node "ha-631834-m02" has status "Ready":"False"
	I0927 00:37:50.912247   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:50.912268   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:50.912275   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:50.912279   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:50.915493   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:51.412530   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:51.412551   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:51.412559   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:51.412562   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:51.415870   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:51.911834   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:51.911856   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:51.911863   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:51.911868   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:51.914920   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:52.411866   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:52.411886   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:52.411894   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:52.411897   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:52.415280   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:52.912337   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:52.912367   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:52.912379   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:52.912391   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:52.915440   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:52.916052   34022 node_ready.go:53] node "ha-631834-m02" has status "Ready":"False"
	I0927 00:37:53.411693   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:53.411714   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:53.411722   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:53.411726   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:53.415015   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:53.912191   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:53.912210   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:53.912218   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:53.912222   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:53.914959   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:37:54.412320   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:54.412340   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:54.412348   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:54.412351   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:54.415317   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:37:54.911810   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:54.911833   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:54.911841   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:54.911844   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:54.914791   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:37:55.411928   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:55.411949   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:55.411957   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:55.411960   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:55.414926   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:37:55.415763   34022 node_ready.go:53] node "ha-631834-m02" has status "Ready":"False"
	I0927 00:37:55.911749   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:55.911770   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:55.911777   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:55.911781   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:55.915450   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:56.412537   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:56.412558   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:56.412566   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:56.412569   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:56.416170   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:56.911854   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:56.911874   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:56.911883   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:56.911887   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:56.914948   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:56.915561   34022 node_ready.go:49] node "ha-631834-m02" has status "Ready":"True"
	I0927 00:37:56.915579   34022 node_ready.go:38] duration metric: took 18.004197532s for node "ha-631834-m02" to be "Ready" ...
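The repeated GET /api/v1/nodes/ha-631834-m02 requests above are a roughly 500ms readiness poll that stops once the node's Ready condition reports True. A minimal client-go sketch of the same loop, assuming the kubeconfig path this run loaded; the real code goes through the kapi client shown earlier rather than building its own.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19711-14935/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // poll every 500ms for up to 6 minutes, matching the wait in the log
        err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, "ha-631834-m02", metav1.GetOptions{})
                if err != nil {
                    return false, nil // transient API errors: keep polling
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
        if err != nil {
            panic(err)
        }
        fmt.Println(`node "ha-631834-m02" is Ready`)
    }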
	I0927 00:37:56.915587   34022 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 00:37:56.915672   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods
	I0927 00:37:56.915682   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:56.915688   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:56.915691   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:56.928535   34022 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0927 00:37:56.934559   34022 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-479dv" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:56.934630   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-479dv
	I0927 00:37:56.934641   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:56.934652   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:56.934657   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:56.938001   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:56.940808   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:37:56.940821   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:56.940828   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:56.940832   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:56.943740   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:37:56.944239   34022 pod_ready.go:93] pod "coredns-7c65d6cfc9-479dv" in "kube-system" namespace has status "Ready":"True"
	I0927 00:37:56.944253   34022 pod_ready.go:82] duration metric: took 9.674838ms for pod "coredns-7c65d6cfc9-479dv" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:56.944261   34022 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kg8kf" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:56.944310   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-kg8kf
	I0927 00:37:56.944318   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:56.944324   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:56.944332   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:56.946515   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:37:56.947127   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:37:56.947143   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:56.947150   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:56.947157   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:56.949055   34022 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0927 00:37:56.949993   34022 pod_ready.go:93] pod "coredns-7c65d6cfc9-kg8kf" in "kube-system" namespace has status "Ready":"True"
	I0927 00:37:56.950013   34022 pod_ready.go:82] duration metric: took 5.744559ms for pod "coredns-7c65d6cfc9-kg8kf" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:56.950024   34022 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-631834" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:56.950083   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/etcd-ha-631834
	I0927 00:37:56.950095   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:56.950105   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:56.950113   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:56.952861   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:37:56.953382   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:37:56.953398   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:56.953408   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:56.953415   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:56.955580   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:37:56.955956   34022 pod_ready.go:93] pod "etcd-ha-631834" in "kube-system" namespace has status "Ready":"True"
	I0927 00:37:56.955972   34022 pod_ready.go:82] duration metric: took 5.938111ms for pod "etcd-ha-631834" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:56.955979   34022 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-631834-m02" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:56.956028   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/etcd-ha-631834-m02
	I0927 00:37:56.956037   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:56.956044   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:56.956048   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:56.958144   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:37:56.958682   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:56.958694   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:56.958702   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:56.958707   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:56.960779   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:37:56.961169   34022 pod_ready.go:93] pod "etcd-ha-631834-m02" in "kube-system" namespace has status "Ready":"True"
	I0927 00:37:56.961183   34022 pod_ready.go:82] duration metric: took 5.19893ms for pod "etcd-ha-631834-m02" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:56.961195   34022 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-631834" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:57.112502   34022 request.go:632] Waited for 151.252386ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-631834
	I0927 00:37:57.112559   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-631834
	I0927 00:37:57.112565   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:57.112572   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:57.112576   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:57.115770   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:57.312171   34022 request.go:632] Waited for 195.713659ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:37:57.312216   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:37:57.312221   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:57.312229   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:57.312232   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:57.315816   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:57.316859   34022 pod_ready.go:93] pod "kube-apiserver-ha-631834" in "kube-system" namespace has status "Ready":"True"
	I0927 00:37:57.316874   34022 pod_ready.go:82] duration metric: took 355.673456ms for pod "kube-apiserver-ha-631834" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:57.316882   34022 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-631834-m02" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:57.511936   34022 request.go:632] Waited for 194.980446ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-631834-m02
	I0927 00:37:57.512026   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-631834-m02
	I0927 00:37:57.512043   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:57.512054   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:57.512063   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:57.515153   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:57.712254   34022 request.go:632] Waited for 196.382367ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:57.712356   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:57.712368   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:57.712378   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:57.712386   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:57.716196   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:57.716807   34022 pod_ready.go:93] pod "kube-apiserver-ha-631834-m02" in "kube-system" namespace has status "Ready":"True"
	I0927 00:37:57.716829   34022 pod_ready.go:82] duration metric: took 399.939153ms for pod "kube-apiserver-ha-631834-m02" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:57.716844   34022 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-631834" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:57.912822   34022 request.go:632] Waited for 195.90758ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-631834
	I0927 00:37:57.912904   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-631834
	I0927 00:37:57.912912   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:57.912922   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:57.912933   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:57.916051   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:58.112039   34022 request.go:632] Waited for 195.329642ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:37:58.112122   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:37:58.112127   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:58.112136   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:58.112143   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:58.115508   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:58.115975   34022 pod_ready.go:93] pod "kube-controller-manager-ha-631834" in "kube-system" namespace has status "Ready":"True"
	I0927 00:37:58.115994   34022 pod_ready.go:82] duration metric: took 399.142534ms for pod "kube-controller-manager-ha-631834" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:58.116003   34022 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-631834-m02" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:58.312103   34022 request.go:632] Waited for 196.038569ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-631834-m02
	I0927 00:37:58.312152   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-631834-m02
	I0927 00:37:58.312162   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:58.312170   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:58.312174   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:58.314795   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:37:58.511939   34022 request.go:632] Waited for 196.327635ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:58.511988   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:58.511994   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:58.512003   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:58.512010   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:58.515560   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:58.516257   34022 pod_ready.go:93] pod "kube-controller-manager-ha-631834-m02" in "kube-system" namespace has status "Ready":"True"
	I0927 00:37:58.516284   34022 pod_ready.go:82] duration metric: took 400.272757ms for pod "kube-controller-manager-ha-631834-m02" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:58.516296   34022 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7n244" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:58.712241   34022 request.go:632] Waited for 195.877878ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7n244
	I0927 00:37:58.712303   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7n244
	I0927 00:37:58.712310   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:58.712331   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:58.712385   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:58.715681   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:58.911944   34022 request.go:632] Waited for 195.32001ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:37:58.912017   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:37:58.912022   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:58.912029   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:58.912033   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:58.914780   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:37:58.915682   34022 pod_ready.go:93] pod "kube-proxy-7n244" in "kube-system" namespace has status "Ready":"True"
	I0927 00:37:58.915708   34022 pod_ready.go:82] duration metric: took 399.399725ms for pod "kube-proxy-7n244" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:58.915722   34022 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-x2hvh" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:59.112621   34022 request.go:632] Waited for 196.830611ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x2hvh
	I0927 00:37:59.112695   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x2hvh
	I0927 00:37:59.112702   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:59.112711   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:59.112717   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:59.116056   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:59.312264   34022 request.go:632] Waited for 195.403458ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:59.312315   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:37:59.312320   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:59.312371   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:59.312391   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:59.315926   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:59.316477   34022 pod_ready.go:93] pod "kube-proxy-x2hvh" in "kube-system" namespace has status "Ready":"True"
	I0927 00:37:59.316499   34022 pod_ready.go:82] duration metric: took 400.770291ms for pod "kube-proxy-x2hvh" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:59.316508   34022 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-631834" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:59.511836   34022 request.go:632] Waited for 195.271471ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-631834
	I0927 00:37:59.511920   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-631834
	I0927 00:37:59.511931   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:59.511939   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:59.511948   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:59.515136   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:59.712221   34022 request.go:632] Waited for 196.384821ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:37:59.712289   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:37:59.712294   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:59.712302   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:59.712309   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:59.715391   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:37:59.716333   34022 pod_ready.go:93] pod "kube-scheduler-ha-631834" in "kube-system" namespace has status "Ready":"True"
	I0927 00:37:59.716356   34022 pod_ready.go:82] duration metric: took 399.841544ms for pod "kube-scheduler-ha-631834" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:59.716375   34022 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-631834-m02" in "kube-system" namespace to be "Ready" ...
	I0927 00:37:59.912751   34022 request.go:632] Waited for 196.300793ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-631834-m02
	I0927 00:37:59.912870   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-631834-m02
	I0927 00:37:59.912884   34022 round_trippers.go:469] Request Headers:
	I0927 00:37:59.912894   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:37:59.912902   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:37:59.916551   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:38:00.112471   34022 request.go:632] Waited for 195.315992ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:38:00.112520   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:38:00.112525   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:00.112532   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:00.112535   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:00.115509   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:38:00.116194   34022 pod_ready.go:93] pod "kube-scheduler-ha-631834-m02" in "kube-system" namespace has status "Ready":"True"
	I0927 00:38:00.116211   34022 pod_ready.go:82] duration metric: took 399.824793ms for pod "kube-scheduler-ha-631834-m02" in "kube-system" namespace to be "Ready" ...
	I0927 00:38:00.116221   34022 pod_ready.go:39] duration metric: took 3.200608197s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
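The readiness loop above is the tail end of minikube waiting for every system-critical pod (kube-dns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) to report the PodReady condition, throttling itself client-side between GETs. A minimal sketch of the same per-pod check with client-go; the kubeconfig path and pod name here are placeholders for illustration, not minikube's internal helpers:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls a pod until its PodReady condition is True or the timeout expires.
    func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // transient API errors: keep polling
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Println(waitPodReady(cs, "kube-system", "kube-scheduler-ha-631834-m02", 6*time.Minute))
    }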
	I0927 00:38:00.116243   34022 api_server.go:52] waiting for apiserver process to appear ...
	I0927 00:38:00.116294   34022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 00:38:00.135868   34022 api_server.go:72] duration metric: took 21.497115723s to wait for apiserver process to appear ...
	I0927 00:38:00.135895   34022 api_server.go:88] waiting for apiserver healthz status ...
	I0927 00:38:00.135917   34022 api_server.go:253] Checking apiserver healthz at https://192.168.39.4:8443/healthz ...
	I0927 00:38:00.140183   34022 api_server.go:279] https://192.168.39.4:8443/healthz returned 200:
	ok
	I0927 00:38:00.140253   34022 round_trippers.go:463] GET https://192.168.39.4:8443/version
	I0927 00:38:00.140266   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:00.140276   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:00.140279   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:00.141056   34022 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0927 00:38:00.141139   34022 api_server.go:141] control plane version: v1.31.1
	I0927 00:38:00.141154   34022 api_server.go:131] duration metric: took 5.252594ms to wait for apiserver health ...
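With the pods Ready, the next two requests above hit the API server's unauthenticated /healthz and /version endpoints directly. A rough stand-alone equivalent; the InsecureSkipVerify transport is a shortcut for this sketch, whereas the real client trusts the cluster CA:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // Sketch only: skip certificate verification instead of loading the minikube CA bundle.
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.39.4:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // the log above saw "200: ok"
    }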
	I0927 00:38:00.141160   34022 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 00:38:00.312479   34022 request.go:632] Waited for 171.239847ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods
	I0927 00:38:00.312534   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods
	I0927 00:38:00.312539   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:00.312546   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:00.312551   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:00.317803   34022 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0927 00:38:00.322748   34022 system_pods.go:59] 17 kube-system pods found
	I0927 00:38:00.322780   34022 system_pods.go:61] "coredns-7c65d6cfc9-479dv" [ee318b64-2274-4106-93ed-9f62151107f1] Running
	I0927 00:38:00.322785   34022 system_pods.go:61] "coredns-7c65d6cfc9-kg8kf" [ee98faac-e03c-427f-9a78-2cf06d2f85cf] Running
	I0927 00:38:00.322788   34022 system_pods.go:61] "etcd-ha-631834" [b8f1f451-d21c-4424-876e-7bd03381c7be] Running
	I0927 00:38:00.322791   34022 system_pods.go:61] "etcd-ha-631834-m02" [940292d8-f09a-4baa-9689-2099794ed736] Running
	I0927 00:38:00.322794   34022 system_pods.go:61] "kindnet-l6ncl" [3861149b-7c67-4d48-9d24-8fa08aefda61] Running
	I0927 00:38:00.322797   34022 system_pods.go:61] "kindnet-x7kr9" [a4f57dcf-a410-46e7-a539-0ad5f9fb2baf] Running
	I0927 00:38:00.322800   34022 system_pods.go:61] "kube-apiserver-ha-631834" [365182f9-e6fd-40f4-8f9f-a46de26a61d8] Running
	I0927 00:38:00.322804   34022 system_pods.go:61] "kube-apiserver-ha-631834-m02" [bc22191d-9799-4639-8ff2-3fdb3ae97be3] Running
	I0927 00:38:00.322807   34022 system_pods.go:61] "kube-controller-manager-ha-631834" [4b0a02b1-60a5-45bc-b9a0-dd5a0346da3d] Running
	I0927 00:38:00.322811   34022 system_pods.go:61] "kube-controller-manager-ha-631834-m02" [22f26e4f-f220-4682-ba5c-e3131880aab4] Running
	I0927 00:38:00.322814   34022 system_pods.go:61] "kube-proxy-7n244" [d9fac118-1b31-4cf3-bc21-a4536e45a511] Running
	I0927 00:38:00.322817   34022 system_pods.go:61] "kube-proxy-x2hvh" [81ada94c-89b8-4815-92e9-58edd00ef64f] Running
	I0927 00:38:00.322819   34022 system_pods.go:61] "kube-scheduler-ha-631834" [9e0b9052-8574-406b-987f-2ef799f40533] Running
	I0927 00:38:00.322822   34022 system_pods.go:61] "kube-scheduler-ha-631834-m02" [7952ee5f-18be-4863-a13a-39c4ee7acf29] Running
	I0927 00:38:00.322826   34022 system_pods.go:61] "kube-vip-ha-631834" [58aa0bcf-1f78-4ee9-8a7b-18afaf6a634c] Running
	I0927 00:38:00.322829   34022 system_pods.go:61] "kube-vip-ha-631834-m02" [75b23ac9-b5e5-4a90-b5ef-951dd52c1752] Running
	I0927 00:38:00.322832   34022 system_pods.go:61] "storage-provisioner" [dbafe551-2645-4016-83f6-1133824d926d] Running
	I0927 00:38:00.322837   34022 system_pods.go:74] duration metric: took 181.672494ms to wait for pod list to return data ...
	I0927 00:38:00.322843   34022 default_sa.go:34] waiting for default service account to be created ...
	I0927 00:38:00.512235   34022 request.go:632] Waited for 189.330159ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/default/serviceaccounts
	I0927 00:38:00.512297   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/default/serviceaccounts
	I0927 00:38:00.512302   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:00.512309   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:00.512313   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:00.517819   34022 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0927 00:38:00.518071   34022 default_sa.go:45] found service account: "default"
	I0927 00:38:00.518095   34022 default_sa.go:55] duration metric: took 195.245876ms for default service account to be created ...
	I0927 00:38:00.518107   34022 system_pods.go:116] waiting for k8s-apps to be running ...
	I0927 00:38:00.712113   34022 request.go:632] Waited for 193.916786ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods
	I0927 00:38:00.712176   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods
	I0927 00:38:00.712183   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:00.712193   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:00.712199   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:00.716946   34022 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 00:38:00.721442   34022 system_pods.go:86] 17 kube-system pods found
	I0927 00:38:00.721467   34022 system_pods.go:89] "coredns-7c65d6cfc9-479dv" [ee318b64-2274-4106-93ed-9f62151107f1] Running
	I0927 00:38:00.721472   34022 system_pods.go:89] "coredns-7c65d6cfc9-kg8kf" [ee98faac-e03c-427f-9a78-2cf06d2f85cf] Running
	I0927 00:38:00.721476   34022 system_pods.go:89] "etcd-ha-631834" [b8f1f451-d21c-4424-876e-7bd03381c7be] Running
	I0927 00:38:00.721479   34022 system_pods.go:89] "etcd-ha-631834-m02" [940292d8-f09a-4baa-9689-2099794ed736] Running
	I0927 00:38:00.721482   34022 system_pods.go:89] "kindnet-l6ncl" [3861149b-7c67-4d48-9d24-8fa08aefda61] Running
	I0927 00:38:00.721486   34022 system_pods.go:89] "kindnet-x7kr9" [a4f57dcf-a410-46e7-a539-0ad5f9fb2baf] Running
	I0927 00:38:00.721489   34022 system_pods.go:89] "kube-apiserver-ha-631834" [365182f9-e6fd-40f4-8f9f-a46de26a61d8] Running
	I0927 00:38:00.721493   34022 system_pods.go:89] "kube-apiserver-ha-631834-m02" [bc22191d-9799-4639-8ff2-3fdb3ae97be3] Running
	I0927 00:38:00.721496   34022 system_pods.go:89] "kube-controller-manager-ha-631834" [4b0a02b1-60a5-45bc-b9a0-dd5a0346da3d] Running
	I0927 00:38:00.721500   34022 system_pods.go:89] "kube-controller-manager-ha-631834-m02" [22f26e4f-f220-4682-ba5c-e3131880aab4] Running
	I0927 00:38:00.721503   34022 system_pods.go:89] "kube-proxy-7n244" [d9fac118-1b31-4cf3-bc21-a4536e45a511] Running
	I0927 00:38:00.721506   34022 system_pods.go:89] "kube-proxy-x2hvh" [81ada94c-89b8-4815-92e9-58edd00ef64f] Running
	I0927 00:38:00.721510   34022 system_pods.go:89] "kube-scheduler-ha-631834" [9e0b9052-8574-406b-987f-2ef799f40533] Running
	I0927 00:38:00.721512   34022 system_pods.go:89] "kube-scheduler-ha-631834-m02" [7952ee5f-18be-4863-a13a-39c4ee7acf29] Running
	I0927 00:38:00.721515   34022 system_pods.go:89] "kube-vip-ha-631834" [58aa0bcf-1f78-4ee9-8a7b-18afaf6a634c] Running
	I0927 00:38:00.721518   34022 system_pods.go:89] "kube-vip-ha-631834-m02" [75b23ac9-b5e5-4a90-b5ef-951dd52c1752] Running
	I0927 00:38:00.721520   34022 system_pods.go:89] "storage-provisioner" [dbafe551-2645-4016-83f6-1133824d926d] Running
	I0927 00:38:00.721525   34022 system_pods.go:126] duration metric: took 203.413353ms to wait for k8s-apps to be running ...
	I0927 00:38:00.721531   34022 system_svc.go:44] waiting for kubelet service to be running ....
	I0927 00:38:00.721569   34022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 00:38:00.736846   34022 system_svc.go:56] duration metric: took 15.307058ms WaitForService to wait for kubelet
	I0927 00:38:00.736868   34022 kubeadm.go:582] duration metric: took 22.09812477s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 00:38:00.736883   34022 node_conditions.go:102] verifying NodePressure condition ...
	I0927 00:38:00.912548   34022 request.go:632] Waited for 175.604909ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes
	I0927 00:38:00.912614   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes
	I0927 00:38:00.912620   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:00.912629   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:00.912637   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:00.916934   34022 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 00:38:00.918457   34022 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 00:38:00.918481   34022 node_conditions.go:123] node cpu capacity is 2
	I0927 00:38:00.918495   34022 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 00:38:00.918500   34022 node_conditions.go:123] node cpu capacity is 2
	I0927 00:38:00.918505   34022 node_conditions.go:105] duration metric: took 181.617208ms to run NodePressure ...
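The NodePressure step lists all nodes and reads their capacity; both control-plane machines report 2 CPUs and 17734596Ki of ephemeral storage here. A small client-go sketch of the same read, with kubeconfig loading as in the earlier sketch:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
        }
    }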
	I0927 00:38:00.918514   34022 start.go:241] waiting for startup goroutines ...
	I0927 00:38:00.918536   34022 start.go:255] writing updated cluster config ...
	I0927 00:38:00.920669   34022 out.go:201] 
	I0927 00:38:00.922354   34022 config.go:182] Loaded profile config "ha-631834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 00:38:00.922437   34022 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/config.json ...
	I0927 00:38:00.924101   34022 out.go:177] * Starting "ha-631834-m03" control-plane node in "ha-631834" cluster
	I0927 00:38:00.925280   34022 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 00:38:00.925296   34022 cache.go:56] Caching tarball of preloaded images
	I0927 00:38:00.925400   34022 preload.go:172] Found /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0927 00:38:00.925413   34022 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0927 00:38:00.925494   34022 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/config.json ...
	I0927 00:38:00.925653   34022 start.go:360] acquireMachinesLock for ha-631834-m03: {Name:mkef866a3f2eb57063758ef06140cca3a2e40b18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 00:38:00.925710   34022 start.go:364] duration metric: took 40.934µs to acquireMachinesLock for "ha-631834-m03"
	I0927 00:38:00.925731   34022 start.go:93] Provisioning new machine with config: &{Name:ha-631834 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-631834 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.184 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 00:38:00.925834   34022 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0927 00:38:00.927492   34022 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0927 00:38:00.927590   34022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:38:00.927628   34022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:38:00.942435   34022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46221
	I0927 00:38:00.942900   34022 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:38:00.943351   34022 main.go:141] libmachine: Using API Version  1
	I0927 00:38:00.943370   34022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:38:00.943711   34022 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:38:00.943853   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetMachineName
	I0927 00:38:00.943978   34022 main.go:141] libmachine: (ha-631834-m03) Calling .DriverName
	I0927 00:38:00.944142   34022 start.go:159] libmachine.API.Create for "ha-631834" (driver="kvm2")
	I0927 00:38:00.944167   34022 client.go:168] LocalClient.Create starting
	I0927 00:38:00.944197   34022 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem
	I0927 00:38:00.944234   34022 main.go:141] libmachine: Decoding PEM data...
	I0927 00:38:00.944249   34022 main.go:141] libmachine: Parsing certificate...
	I0927 00:38:00.944293   34022 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem
	I0927 00:38:00.944314   34022 main.go:141] libmachine: Decoding PEM data...
	I0927 00:38:00.944324   34022 main.go:141] libmachine: Parsing certificate...
	I0927 00:38:00.944337   34022 main.go:141] libmachine: Running pre-create checks...
	I0927 00:38:00.944345   34022 main.go:141] libmachine: (ha-631834-m03) Calling .PreCreateCheck
	I0927 00:38:00.944509   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetConfigRaw
	I0927 00:38:00.944854   34022 main.go:141] libmachine: Creating machine...
	I0927 00:38:00.944866   34022 main.go:141] libmachine: (ha-631834-m03) Calling .Create
	I0927 00:38:00.945006   34022 main.go:141] libmachine: (ha-631834-m03) Creating KVM machine...
	I0927 00:38:00.946130   34022 main.go:141] libmachine: (ha-631834-m03) DBG | found existing default KVM network
	I0927 00:38:00.946246   34022 main.go:141] libmachine: (ha-631834-m03) DBG | found existing private KVM network mk-ha-631834
	I0927 00:38:00.946370   34022 main.go:141] libmachine: (ha-631834-m03) Setting up store path in /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m03 ...
	I0927 00:38:00.946396   34022 main.go:141] libmachine: (ha-631834-m03) Building disk image from file:///home/jenkins/minikube-integration/19711-14935/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0927 00:38:00.946450   34022 main.go:141] libmachine: (ha-631834-m03) DBG | I0927 00:38:00.946342   34779 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19711-14935/.minikube
	I0927 00:38:00.946538   34022 main.go:141] libmachine: (ha-631834-m03) Downloading /home/jenkins/minikube-integration/19711-14935/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19711-14935/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0927 00:38:01.172256   34022 main.go:141] libmachine: (ha-631834-m03) DBG | I0927 00:38:01.172126   34779 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m03/id_rsa...
	I0927 00:38:01.300878   34022 main.go:141] libmachine: (ha-631834-m03) DBG | I0927 00:38:01.300754   34779 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m03/ha-631834-m03.rawdisk...
	I0927 00:38:01.300913   34022 main.go:141] libmachine: (ha-631834-m03) DBG | Writing magic tar header
	I0927 00:38:01.300930   34022 main.go:141] libmachine: (ha-631834-m03) DBG | Writing SSH key tar header
	I0927 00:38:01.300947   34022 main.go:141] libmachine: (ha-631834-m03) DBG | I0927 00:38:01.300907   34779 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m03 ...
	I0927 00:38:01.301077   34022 main.go:141] libmachine: (ha-631834-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m03
	I0927 00:38:01.301177   34022 main.go:141] libmachine: (ha-631834-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935/.minikube/machines
	I0927 00:38:01.301201   34022 main.go:141] libmachine: (ha-631834-m03) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m03 (perms=drwx------)
	I0927 00:38:01.301210   34022 main.go:141] libmachine: (ha-631834-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935/.minikube
	I0927 00:38:01.301221   34022 main.go:141] libmachine: (ha-631834-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935
	I0927 00:38:01.301229   34022 main.go:141] libmachine: (ha-631834-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0927 00:38:01.301238   34022 main.go:141] libmachine: (ha-631834-m03) DBG | Checking permissions on dir: /home/jenkins
	I0927 00:38:01.301243   34022 main.go:141] libmachine: (ha-631834-m03) DBG | Checking permissions on dir: /home
	I0927 00:38:01.301252   34022 main.go:141] libmachine: (ha-631834-m03) DBG | Skipping /home - not owner
	I0927 00:38:01.301261   34022 main.go:141] libmachine: (ha-631834-m03) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935/.minikube/machines (perms=drwxr-xr-x)
	I0927 00:38:01.301272   34022 main.go:141] libmachine: (ha-631834-m03) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935/.minikube (perms=drwxr-xr-x)
	I0927 00:38:01.301340   34022 main.go:141] libmachine: (ha-631834-m03) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935 (perms=drwxrwxr-x)
	I0927 00:38:01.301369   34022 main.go:141] libmachine: (ha-631834-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0927 00:38:01.301385   34022 main.go:141] libmachine: (ha-631834-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0927 00:38:01.301397   34022 main.go:141] libmachine: (ha-631834-m03) Creating domain...
	I0927 00:38:01.302347   34022 main.go:141] libmachine: (ha-631834-m03) define libvirt domain using xml: 
	I0927 00:38:01.302369   34022 main.go:141] libmachine: (ha-631834-m03) <domain type='kvm'>
	I0927 00:38:01.302379   34022 main.go:141] libmachine: (ha-631834-m03)   <name>ha-631834-m03</name>
	I0927 00:38:01.302387   34022 main.go:141] libmachine: (ha-631834-m03)   <memory unit='MiB'>2200</memory>
	I0927 00:38:01.302396   34022 main.go:141] libmachine: (ha-631834-m03)   <vcpu>2</vcpu>
	I0927 00:38:01.302403   34022 main.go:141] libmachine: (ha-631834-m03)   <features>
	I0927 00:38:01.302416   34022 main.go:141] libmachine: (ha-631834-m03)     <acpi/>
	I0927 00:38:01.302423   34022 main.go:141] libmachine: (ha-631834-m03)     <apic/>
	I0927 00:38:01.302428   34022 main.go:141] libmachine: (ha-631834-m03)     <pae/>
	I0927 00:38:01.302434   34022 main.go:141] libmachine: (ha-631834-m03)     
	I0927 00:38:01.302439   34022 main.go:141] libmachine: (ha-631834-m03)   </features>
	I0927 00:38:01.302446   34022 main.go:141] libmachine: (ha-631834-m03)   <cpu mode='host-passthrough'>
	I0927 00:38:01.302451   34022 main.go:141] libmachine: (ha-631834-m03)   
	I0927 00:38:01.302457   34022 main.go:141] libmachine: (ha-631834-m03)   </cpu>
	I0927 00:38:01.302482   34022 main.go:141] libmachine: (ha-631834-m03)   <os>
	I0927 00:38:01.302504   34022 main.go:141] libmachine: (ha-631834-m03)     <type>hvm</type>
	I0927 00:38:01.302517   34022 main.go:141] libmachine: (ha-631834-m03)     <boot dev='cdrom'/>
	I0927 00:38:01.302528   34022 main.go:141] libmachine: (ha-631834-m03)     <boot dev='hd'/>
	I0927 00:38:01.302541   34022 main.go:141] libmachine: (ha-631834-m03)     <bootmenu enable='no'/>
	I0927 00:38:01.302550   34022 main.go:141] libmachine: (ha-631834-m03)   </os>
	I0927 00:38:01.302558   34022 main.go:141] libmachine: (ha-631834-m03)   <devices>
	I0927 00:38:01.302567   34022 main.go:141] libmachine: (ha-631834-m03)     <disk type='file' device='cdrom'>
	I0927 00:38:01.302594   34022 main.go:141] libmachine: (ha-631834-m03)       <source file='/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m03/boot2docker.iso'/>
	I0927 00:38:01.302616   34022 main.go:141] libmachine: (ha-631834-m03)       <target dev='hdc' bus='scsi'/>
	I0927 00:38:01.302629   34022 main.go:141] libmachine: (ha-631834-m03)       <readonly/>
	I0927 00:38:01.302639   34022 main.go:141] libmachine: (ha-631834-m03)     </disk>
	I0927 00:38:01.302651   34022 main.go:141] libmachine: (ha-631834-m03)     <disk type='file' device='disk'>
	I0927 00:38:01.302663   34022 main.go:141] libmachine: (ha-631834-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0927 00:38:01.302681   34022 main.go:141] libmachine: (ha-631834-m03)       <source file='/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m03/ha-631834-m03.rawdisk'/>
	I0927 00:38:01.302695   34022 main.go:141] libmachine: (ha-631834-m03)       <target dev='hda' bus='virtio'/>
	I0927 00:38:01.302706   34022 main.go:141] libmachine: (ha-631834-m03)     </disk>
	I0927 00:38:01.302713   34022 main.go:141] libmachine: (ha-631834-m03)     <interface type='network'>
	I0927 00:38:01.302718   34022 main.go:141] libmachine: (ha-631834-m03)       <source network='mk-ha-631834'/>
	I0927 00:38:01.302725   34022 main.go:141] libmachine: (ha-631834-m03)       <model type='virtio'/>
	I0927 00:38:01.302733   34022 main.go:141] libmachine: (ha-631834-m03)     </interface>
	I0927 00:38:01.302743   34022 main.go:141] libmachine: (ha-631834-m03)     <interface type='network'>
	I0927 00:38:01.302756   34022 main.go:141] libmachine: (ha-631834-m03)       <source network='default'/>
	I0927 00:38:01.302769   34022 main.go:141] libmachine: (ha-631834-m03)       <model type='virtio'/>
	I0927 00:38:01.302780   34022 main.go:141] libmachine: (ha-631834-m03)     </interface>
	I0927 00:38:01.302786   34022 main.go:141] libmachine: (ha-631834-m03)     <serial type='pty'>
	I0927 00:38:01.302798   34022 main.go:141] libmachine: (ha-631834-m03)       <target port='0'/>
	I0927 00:38:01.302806   34022 main.go:141] libmachine: (ha-631834-m03)     </serial>
	I0927 00:38:01.302811   34022 main.go:141] libmachine: (ha-631834-m03)     <console type='pty'>
	I0927 00:38:01.302824   34022 main.go:141] libmachine: (ha-631834-m03)       <target type='serial' port='0'/>
	I0927 00:38:01.302835   34022 main.go:141] libmachine: (ha-631834-m03)     </console>
	I0927 00:38:01.302846   34022 main.go:141] libmachine: (ha-631834-m03)     <rng model='virtio'>
	I0927 00:38:01.302853   34022 main.go:141] libmachine: (ha-631834-m03)       <backend model='random'>/dev/random</backend>
	I0927 00:38:01.302860   34022 main.go:141] libmachine: (ha-631834-m03)     </rng>
	I0927 00:38:01.302867   34022 main.go:141] libmachine: (ha-631834-m03)     
	I0927 00:38:01.302871   34022 main.go:141] libmachine: (ha-631834-m03)     
	I0927 00:38:01.302876   34022 main.go:141] libmachine: (ha-631834-m03)   </devices>
	I0927 00:38:01.302885   34022 main.go:141] libmachine: (ha-631834-m03) </domain>
	I0927 00:38:01.302891   34022 main.go:141] libmachine: (ha-631834-m03) 
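The XML above is what the kvm2 driver hands to libvirtd: the boot2docker ISO as a SCSI cdrom, the raw disk on virtio, one NIC on the private mk-ha-631834 network and one on default, a serial console, and a virtio RNG. A sketch of the define-and-start step using the Go libvirt bindings, assuming the XML logged above has been written to ha-631834-m03.xml:

    package main

    import (
        "log"
        "os"

        libvirt "libvirt.org/go/libvirt"
    )

    func main() {
        xml, err := os.ReadFile("ha-631834-m03.xml") // domain XML as logged above
        if err != nil {
            log.Fatal(err)
        }
        conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the config
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        dom, err := conn.DomainDefineXML(string(xml)) // "define libvirt domain using xml"
        if err != nil {
            log.Fatal(err)
        }
        defer dom.Free()

        if err := dom.Create(); err != nil { // boots the VM ("Creating domain...")
            log.Fatal(err)
        }
        log.Println("domain started; now waiting for a DHCP lease")
    }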
	I0927 00:38:01.309656   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4f:aa:cd in network default
	I0927 00:38:01.310171   34022 main.go:141] libmachine: (ha-631834-m03) Ensuring networks are active...
	I0927 00:38:01.310187   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:01.310859   34022 main.go:141] libmachine: (ha-631834-m03) Ensuring network default is active
	I0927 00:38:01.311183   34022 main.go:141] libmachine: (ha-631834-m03) Ensuring network mk-ha-631834 is active
	I0927 00:38:01.311550   34022 main.go:141] libmachine: (ha-631834-m03) Getting domain xml...
	I0927 00:38:01.312351   34022 main.go:141] libmachine: (ha-631834-m03) Creating domain...
	I0927 00:38:02.542322   34022 main.go:141] libmachine: (ha-631834-m03) Waiting to get IP...
	I0927 00:38:02.542980   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:02.543377   34022 main.go:141] libmachine: (ha-631834-m03) DBG | unable to find current IP address of domain ha-631834-m03 in network mk-ha-631834
	I0927 00:38:02.543426   34022 main.go:141] libmachine: (ha-631834-m03) DBG | I0927 00:38:02.543365   34779 retry.go:31] will retry after 295.787312ms: waiting for machine to come up
	I0927 00:38:02.840874   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:02.841334   34022 main.go:141] libmachine: (ha-631834-m03) DBG | unable to find current IP address of domain ha-631834-m03 in network mk-ha-631834
	I0927 00:38:02.841363   34022 main.go:141] libmachine: (ha-631834-m03) DBG | I0927 00:38:02.841297   34779 retry.go:31] will retry after 248.489193ms: waiting for machine to come up
	I0927 00:38:03.091718   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:03.092118   34022 main.go:141] libmachine: (ha-631834-m03) DBG | unable to find current IP address of domain ha-631834-m03 in network mk-ha-631834
	I0927 00:38:03.092144   34022 main.go:141] libmachine: (ha-631834-m03) DBG | I0927 00:38:03.092091   34779 retry.go:31] will retry after 441.574448ms: waiting for machine to come up
	I0927 00:38:03.535897   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:03.536373   34022 main.go:141] libmachine: (ha-631834-m03) DBG | unable to find current IP address of domain ha-631834-m03 in network mk-ha-631834
	I0927 00:38:03.536426   34022 main.go:141] libmachine: (ha-631834-m03) DBG | I0927 00:38:03.536344   34779 retry.go:31] will retry after 516.671192ms: waiting for machine to come up
	I0927 00:38:04.054938   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:04.055415   34022 main.go:141] libmachine: (ha-631834-m03) DBG | unable to find current IP address of domain ha-631834-m03 in network mk-ha-631834
	I0927 00:38:04.055448   34022 main.go:141] libmachine: (ha-631834-m03) DBG | I0927 00:38:04.055376   34779 retry.go:31] will retry after 716.952406ms: waiting for machine to come up
	I0927 00:38:04.774184   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:04.774597   34022 main.go:141] libmachine: (ha-631834-m03) DBG | unable to find current IP address of domain ha-631834-m03 in network mk-ha-631834
	I0927 00:38:04.774626   34022 main.go:141] libmachine: (ha-631834-m03) DBG | I0927 00:38:04.774544   34779 retry.go:31] will retry after 932.879879ms: waiting for machine to come up
	I0927 00:38:05.710264   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:05.710744   34022 main.go:141] libmachine: (ha-631834-m03) DBG | unable to find current IP address of domain ha-631834-m03 in network mk-ha-631834
	I0927 00:38:05.710771   34022 main.go:141] libmachine: (ha-631834-m03) DBG | I0927 00:38:05.710689   34779 retry.go:31] will retry after 865.055707ms: waiting for machine to come up
	I0927 00:38:06.577372   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:06.577736   34022 main.go:141] libmachine: (ha-631834-m03) DBG | unable to find current IP address of domain ha-631834-m03 in network mk-ha-631834
	I0927 00:38:06.577763   34022 main.go:141] libmachine: (ha-631834-m03) DBG | I0927 00:38:06.577713   34779 retry.go:31] will retry after 1.070388843s: waiting for machine to come up
	I0927 00:38:07.649656   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:07.650114   34022 main.go:141] libmachine: (ha-631834-m03) DBG | unable to find current IP address of domain ha-631834-m03 in network mk-ha-631834
	I0927 00:38:07.650136   34022 main.go:141] libmachine: (ha-631834-m03) DBG | I0927 00:38:07.650079   34779 retry.go:31] will retry after 1.328681925s: waiting for machine to come up
	I0927 00:38:08.980362   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:08.980901   34022 main.go:141] libmachine: (ha-631834-m03) DBG | unable to find current IP address of domain ha-631834-m03 in network mk-ha-631834
	I0927 00:38:08.980930   34022 main.go:141] libmachine: (ha-631834-m03) DBG | I0927 00:38:08.980854   34779 retry.go:31] will retry after 1.891343357s: waiting for machine to come up
	I0927 00:38:10.874136   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:10.874597   34022 main.go:141] libmachine: (ha-631834-m03) DBG | unable to find current IP address of domain ha-631834-m03 in network mk-ha-631834
	I0927 00:38:10.874626   34022 main.go:141] libmachine: (ha-631834-m03) DBG | I0927 00:38:10.874547   34779 retry.go:31] will retry after 1.77968387s: waiting for machine to come up
	I0927 00:38:12.656346   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:12.656707   34022 main.go:141] libmachine: (ha-631834-m03) DBG | unable to find current IP address of domain ha-631834-m03 in network mk-ha-631834
	I0927 00:38:12.656734   34022 main.go:141] libmachine: (ha-631834-m03) DBG | I0927 00:38:12.656661   34779 retry.go:31] will retry after 2.690596335s: waiting for machine to come up
	I0927 00:38:15.349488   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:15.349902   34022 main.go:141] libmachine: (ha-631834-m03) DBG | unable to find current IP address of domain ha-631834-m03 in network mk-ha-631834
	I0927 00:38:15.349938   34022 main.go:141] libmachine: (ha-631834-m03) DBG | I0927 00:38:15.349838   34779 retry.go:31] will retry after 3.212522074s: waiting for machine to come up
	I0927 00:38:18.564307   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:18.564733   34022 main.go:141] libmachine: (ha-631834-m03) DBG | unable to find current IP address of domain ha-631834-m03 in network mk-ha-631834
	I0927 00:38:18.564759   34022 main.go:141] libmachine: (ha-631834-m03) DBG | I0927 00:38:18.564688   34779 retry.go:31] will retry after 5.536998184s: waiting for machine to come up
	I0927 00:38:24.105735   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:24.106267   34022 main.go:141] libmachine: (ha-631834-m03) Found IP for machine: 192.168.39.92
	I0927 00:38:24.106298   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has current primary IP address 192.168.39.92 and MAC address 52:54:00:4c:25:39 in network mk-ha-631834
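Between the domain starting and the lease appearing, the driver repeatedly looks up the mk-ha-631834 network's DHCP leases for MAC 52:54:00:4c:25:39, sleeping a growing, jittered interval between attempts (the "will retry after ..." lines from retry.go above run from roughly 250ms up to several seconds) until an address shows up. A generic sketch of that retry shape; lookupIP is a stand-in, and the exact doubling and jitter here are illustrative rather than the driver's real schedule:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    var errNoLease = errors.New("no DHCP lease yet")

    // lookupIP stands in for querying libvirt's DHCP leases by MAC address.
    func lookupIP(mac string) (string, error) { return "", errNoLease }

    func main() {
        delay := 300 * time.Millisecond
        deadline := time.Now().Add(5 * time.Minute)
        for time.Now().Before(deadline) {
            if ip, err := lookupIP("52:54:00:4c:25:39"); err == nil {
                fmt.Println("Found IP for machine:", ip)
                return
            }
            sleep := delay + time.Duration(rand.Int63n(int64(delay/2))) // add jitter
            fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            if delay < 4*time.Second {
                delay *= 2 // back off until the cap
            }
        }
        fmt.Println("timed out waiting for an IP")
    }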
	I0927 00:38:24.106307   34022 main.go:141] libmachine: (ha-631834-m03) Reserving static IP address...
	I0927 00:38:24.106789   34022 main.go:141] libmachine: (ha-631834-m03) DBG | unable to find host DHCP lease matching {name: "ha-631834-m03", mac: "52:54:00:4c:25:39", ip: "192.168.39.92"} in network mk-ha-631834
	I0927 00:38:24.178177   34022 main.go:141] libmachine: (ha-631834-m03) Reserved static IP address: 192.168.39.92
	I0927 00:38:24.178214   34022 main.go:141] libmachine: (ha-631834-m03) DBG | Getting to WaitForSSH function...
	I0927 00:38:24.178222   34022 main.go:141] libmachine: (ha-631834-m03) Waiting for SSH to be available...
	I0927 00:38:24.180785   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:24.181172   34022 main.go:141] libmachine: (ha-631834-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:25:39", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:38:15 +0000 UTC Type:0 Mac:52:54:00:4c:25:39 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:minikube Clientid:01:52:54:00:4c:25:39}
	I0927 00:38:24.181205   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined IP address 192.168.39.92 and MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:24.181352   34022 main.go:141] libmachine: (ha-631834-m03) DBG | Using SSH client type: external
	I0927 00:38:24.181375   34022 main.go:141] libmachine: (ha-631834-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m03/id_rsa (-rw-------)
	I0927 00:38:24.181402   34022 main.go:141] libmachine: (ha-631834-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.92 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 00:38:24.181416   34022 main.go:141] libmachine: (ha-631834-m03) DBG | About to run SSH command:
	I0927 00:38:24.181425   34022 main.go:141] libmachine: (ha-631834-m03) DBG | exit 0
	I0927 00:38:24.307152   34022 main.go:141] libmachine: (ha-631834-m03) DBG | SSH cmd err, output: <nil>: 
	I0927 00:38:24.307447   34022 main.go:141] libmachine: (ha-631834-m03) KVM machine creation complete!
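WaitForSSH above shells out to /usr/bin/ssh with the exact options shown in the DBG line and treats a clean `exit 0` as proof that sshd is up and the injected key is accepted. A stripped-down sketch with os/exec, reusing the address and key path from the log:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // sshReady returns true once `ssh ... exit 0` succeeds against the new machine.
    func sshReady(addr, keyPath string) bool {
        cmd := exec.Command("/usr/bin/ssh",
            "-F", "/dev/null",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            "docker@"+addr,
            "exit 0")
        return cmd.Run() == nil // exit status 0 means the guest accepted the connection
    }

    func main() {
        key := "/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m03/id_rsa"
        for !sshReady("192.168.39.92", key) {
            time.Sleep(2 * time.Second)
        }
        fmt.Println("SSH is available")
    }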
	I0927 00:38:24.307763   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetConfigRaw
	I0927 00:38:24.308355   34022 main.go:141] libmachine: (ha-631834-m03) Calling .DriverName
	I0927 00:38:24.308580   34022 main.go:141] libmachine: (ha-631834-m03) Calling .DriverName
	I0927 00:38:24.308729   34022 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0927 00:38:24.308741   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetState
	I0927 00:38:24.310053   34022 main.go:141] libmachine: Detecting operating system of created instance...
	I0927 00:38:24.310069   34022 main.go:141] libmachine: Waiting for SSH to be available...
	I0927 00:38:24.310082   34022 main.go:141] libmachine: Getting to WaitForSSH function...
	I0927 00:38:24.310091   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHHostname
	I0927 00:38:24.312140   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:24.312456   34022 main.go:141] libmachine: (ha-631834-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:25:39", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:38:15 +0000 UTC Type:0 Mac:52:54:00:4c:25:39 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-631834-m03 Clientid:01:52:54:00:4c:25:39}
	I0927 00:38:24.312481   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined IP address 192.168.39.92 and MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:24.312582   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHPort
	I0927 00:38:24.312762   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHKeyPath
	I0927 00:38:24.312951   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHKeyPath
	I0927 00:38:24.313095   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHUsername
	I0927 00:38:24.313255   34022 main.go:141] libmachine: Using SSH client type: native
	I0927 00:38:24.313466   34022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0927 00:38:24.313480   34022 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0927 00:38:24.422933   34022 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 00:38:24.422970   34022 main.go:141] libmachine: Detecting the provisioner...
	I0927 00:38:24.422980   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHHostname
	I0927 00:38:24.426661   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:24.427100   34022 main.go:141] libmachine: (ha-631834-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:25:39", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:38:15 +0000 UTC Type:0 Mac:52:54:00:4c:25:39 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-631834-m03 Clientid:01:52:54:00:4c:25:39}
	I0927 00:38:24.427125   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined IP address 192.168.39.92 and MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:24.427318   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHPort
	I0927 00:38:24.427511   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHKeyPath
	I0927 00:38:24.427638   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHKeyPath
	I0927 00:38:24.427791   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHUsername
	I0927 00:38:24.427987   34022 main.go:141] libmachine: Using SSH client type: native
	I0927 00:38:24.428244   34022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0927 00:38:24.428263   34022 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0927 00:38:24.540183   34022 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0927 00:38:24.540244   34022 main.go:141] libmachine: found compatible host: buildroot
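Detecting the provisioner is just `cat /etc/os-release` over that SSH session and matching the ID field; here it resolves to Buildroot 2023.02.9, so the buildroot provisioner is chosen. A sketch of the parsing side, fed with the output captured above:

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    // parseOSRelease turns `cat /etc/os-release` output into a key/value map.
    func parseOSRelease(out string) map[string]string {
        kv := map[string]string{}
        sc := bufio.NewScanner(strings.NewReader(out))
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if line == "" || !strings.Contains(line, "=") {
                continue
            }
            parts := strings.SplitN(line, "=", 2)
            kv[parts[0]] = strings.Trim(parts[1], `"`)
        }
        return kv
    }

    func main() {
        out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
        osr := parseOSRelease(out)
        if osr["ID"] == "buildroot" {
            fmt.Println("found compatible host: buildroot") // matches the log line above
        }
    }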
	I0927 00:38:24.540253   34022 main.go:141] libmachine: Provisioning with buildroot...
	I0927 00:38:24.540261   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetMachineName
	I0927 00:38:24.540508   34022 buildroot.go:166] provisioning hostname "ha-631834-m03"
	I0927 00:38:24.540530   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetMachineName
	I0927 00:38:24.540689   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHHostname
	I0927 00:38:24.543040   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:24.543414   34022 main.go:141] libmachine: (ha-631834-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:25:39", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:38:15 +0000 UTC Type:0 Mac:52:54:00:4c:25:39 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-631834-m03 Clientid:01:52:54:00:4c:25:39}
	I0927 00:38:24.543443   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined IP address 192.168.39.92 and MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:24.543611   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHPort
	I0927 00:38:24.543765   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHKeyPath
	I0927 00:38:24.543907   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHKeyPath
	I0927 00:38:24.544102   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHUsername
	I0927 00:38:24.544311   34022 main.go:141] libmachine: Using SSH client type: native
	I0927 00:38:24.544483   34022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0927 00:38:24.544499   34022 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-631834-m03 && echo "ha-631834-m03" | sudo tee /etc/hostname
	I0927 00:38:24.670921   34022 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-631834-m03
	
	I0927 00:38:24.670950   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHHostname
	I0927 00:38:24.673565   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:24.673864   34022 main.go:141] libmachine: (ha-631834-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:25:39", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:38:15 +0000 UTC Type:0 Mac:52:54:00:4c:25:39 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-631834-m03 Clientid:01:52:54:00:4c:25:39}
	I0927 00:38:24.673890   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined IP address 192.168.39.92 and MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:24.674020   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHPort
	I0927 00:38:24.674183   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHKeyPath
	I0927 00:38:24.674310   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHKeyPath
	I0927 00:38:24.674419   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHUsername
	I0927 00:38:24.674647   34022 main.go:141] libmachine: Using SSH client type: native
	I0927 00:38:24.674798   34022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0927 00:38:24.674812   34022 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-631834-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-631834-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-631834-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 00:38:24.791979   34022 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 00:38:24.792005   34022 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19711-14935/.minikube CaCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19711-14935/.minikube}
	I0927 00:38:24.792027   34022 buildroot.go:174] setting up certificates
	I0927 00:38:24.792036   34022 provision.go:84] configureAuth start
	I0927 00:38:24.792044   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetMachineName
	I0927 00:38:24.792291   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetIP
	I0927 00:38:24.794829   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:24.795183   34022 main.go:141] libmachine: (ha-631834-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:25:39", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:38:15 +0000 UTC Type:0 Mac:52:54:00:4c:25:39 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-631834-m03 Clientid:01:52:54:00:4c:25:39}
	I0927 00:38:24.795216   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined IP address 192.168.39.92 and MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:24.795380   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHHostname
	I0927 00:38:24.797351   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:24.797611   34022 main.go:141] libmachine: (ha-631834-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:25:39", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:38:15 +0000 UTC Type:0 Mac:52:54:00:4c:25:39 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-631834-m03 Clientid:01:52:54:00:4c:25:39}
	I0927 00:38:24.797635   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined IP address 192.168.39.92 and MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:24.797733   34022 provision.go:143] copyHostCerts
	I0927 00:38:24.797765   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem
	I0927 00:38:24.797804   34022 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem, removing ...
	I0927 00:38:24.797814   34022 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem
	I0927 00:38:24.797876   34022 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem (1078 bytes)
	I0927 00:38:24.797945   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem
	I0927 00:38:24.797964   34022 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem, removing ...
	I0927 00:38:24.797980   34022 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem
	I0927 00:38:24.798015   34022 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem (1123 bytes)
	I0927 00:38:24.798060   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem
	I0927 00:38:24.798079   34022 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem, removing ...
	I0927 00:38:24.798086   34022 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem
	I0927 00:38:24.798115   34022 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem (1675 bytes)
	I0927 00:38:24.798186   34022 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem org=jenkins.ha-631834-m03 san=[127.0.0.1 192.168.39.92 ha-631834-m03 localhost minikube]
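After copying the host certs, configureAuth mints a per-machine server certificate signed by the minikube CA with exactly the SANs listed above (127.0.0.1, 192.168.39.92, ha-631834-m03, localhost, minikube). A compressed crypto/x509 sketch of that signing step; CA loading, PEM encoding, and serial-number handling are simplified for illustration:

    package certs

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // newServerCert signs a server certificate for the new machine with the given SANs,
    // roughly what the "generating server cert" step above does.
    func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey, org string) (der []byte, key *rsa.PrivateKey, err error) {
        key, err = rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),      // illustrative serial
            Subject:      pkix.Name{Organization: []string{org}}, // e.g. jenkins.ha-631834-m03
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-631834-m03", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.92")},
        }
        der, err = x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
        return der, key, err
    }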
	I0927 00:38:24.887325   34022 provision.go:177] copyRemoteCerts
	I0927 00:38:24.887388   34022 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 00:38:24.887417   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHHostname
	I0927 00:38:24.889796   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:24.890201   34022 main.go:141] libmachine: (ha-631834-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:25:39", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:38:15 +0000 UTC Type:0 Mac:52:54:00:4c:25:39 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-631834-m03 Clientid:01:52:54:00:4c:25:39}
	I0927 00:38:24.890231   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined IP address 192.168.39.92 and MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:24.890378   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHPort
	I0927 00:38:24.890525   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHKeyPath
	I0927 00:38:24.890673   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHUsername
	I0927 00:38:24.890757   34022 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m03/id_rsa Username:docker}
	I0927 00:38:24.974577   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0927 00:38:24.974640   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0927 00:38:24.998800   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0927 00:38:24.998882   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0927 00:38:25.023015   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0927 00:38:25.023097   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0927 00:38:25.047091   34022 provision.go:87] duration metric: took 255.040854ms to configureAuth
	I0927 00:38:25.047129   34022 buildroot.go:189] setting minikube options for container-runtime
	I0927 00:38:25.047386   34022 config.go:182] Loaded profile config "ha-631834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 00:38:25.047470   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHHostname
	I0927 00:38:25.050122   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:25.050450   34022 main.go:141] libmachine: (ha-631834-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:25:39", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:38:15 +0000 UTC Type:0 Mac:52:54:00:4c:25:39 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-631834-m03 Clientid:01:52:54:00:4c:25:39}
	I0927 00:38:25.050478   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined IP address 192.168.39.92 and MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:25.050639   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHPort
	I0927 00:38:25.050791   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHKeyPath
	I0927 00:38:25.050936   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHKeyPath
	I0927 00:38:25.051044   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHUsername
	I0927 00:38:25.051180   34022 main.go:141] libmachine: Using SSH client type: native
	I0927 00:38:25.051392   34022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0927 00:38:25.051410   34022 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 00:38:25.271341   34022 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 00:38:25.271367   34022 main.go:141] libmachine: Checking connection to Docker...
	I0927 00:38:25.271379   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetURL
	I0927 00:38:25.272505   34022 main.go:141] libmachine: (ha-631834-m03) DBG | Using libvirt version 6000000
	I0927 00:38:25.274516   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:25.274843   34022 main.go:141] libmachine: (ha-631834-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:25:39", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:38:15 +0000 UTC Type:0 Mac:52:54:00:4c:25:39 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-631834-m03 Clientid:01:52:54:00:4c:25:39}
	I0927 00:38:25.274868   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined IP address 192.168.39.92 and MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:25.275000   34022 main.go:141] libmachine: Docker is up and running!
	I0927 00:38:25.275010   34022 main.go:141] libmachine: Reticulating splines...
	I0927 00:38:25.275018   34022 client.go:171] duration metric: took 24.330841027s to LocalClient.Create
	I0927 00:38:25.275044   34022 start.go:167] duration metric: took 24.330903271s to libmachine.API.Create "ha-631834"
	I0927 00:38:25.275059   34022 start.go:293] postStartSetup for "ha-631834-m03" (driver="kvm2")
	I0927 00:38:25.275078   34022 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 00:38:25.275102   34022 main.go:141] libmachine: (ha-631834-m03) Calling .DriverName
	I0927 00:38:25.275329   34022 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 00:38:25.275358   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHHostname
	I0927 00:38:25.277447   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:25.277789   34022 main.go:141] libmachine: (ha-631834-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:25:39", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:38:15 +0000 UTC Type:0 Mac:52:54:00:4c:25:39 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-631834-m03 Clientid:01:52:54:00:4c:25:39}
	I0927 00:38:25.277809   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined IP address 192.168.39.92 and MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:25.277981   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHPort
	I0927 00:38:25.278138   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHKeyPath
	I0927 00:38:25.278294   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHUsername
	I0927 00:38:25.278392   34022 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m03/id_rsa Username:docker}
	I0927 00:38:25.363118   34022 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 00:38:25.367416   34022 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 00:38:25.367440   34022 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/addons for local assets ...
	I0927 00:38:25.367494   34022 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/files for local assets ...
	I0927 00:38:25.367565   34022 filesync.go:149] local asset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> 221382.pem in /etc/ssl/certs
	I0927 00:38:25.367574   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> /etc/ssl/certs/221382.pem
	I0927 00:38:25.367651   34022 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 00:38:25.377433   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /etc/ssl/certs/221382.pem (1708 bytes)
	I0927 00:38:25.402022   34022 start.go:296] duration metric: took 126.949525ms for postStartSetup
	I0927 00:38:25.402069   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetConfigRaw
	I0927 00:38:25.402606   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetIP
	I0927 00:38:25.405298   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:25.405691   34022 main.go:141] libmachine: (ha-631834-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:25:39", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:38:15 +0000 UTC Type:0 Mac:52:54:00:4c:25:39 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-631834-m03 Clientid:01:52:54:00:4c:25:39}
	I0927 00:38:25.405718   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined IP address 192.168.39.92 and MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:25.406069   34022 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/config.json ...
	I0927 00:38:25.406300   34022 start.go:128] duration metric: took 24.480456335s to createHost
	I0927 00:38:25.406329   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHHostname
	I0927 00:38:25.408691   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:25.409060   34022 main.go:141] libmachine: (ha-631834-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:25:39", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:38:15 +0000 UTC Type:0 Mac:52:54:00:4c:25:39 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-631834-m03 Clientid:01:52:54:00:4c:25:39}
	I0927 00:38:25.409076   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined IP address 192.168.39.92 and MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:25.409274   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHPort
	I0927 00:38:25.409443   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHKeyPath
	I0927 00:38:25.409610   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHKeyPath
	I0927 00:38:25.409745   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHUsername
	I0927 00:38:25.409905   34022 main.go:141] libmachine: Using SSH client type: native
	I0927 00:38:25.410111   34022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0927 00:38:25.410124   34022 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 00:38:25.520084   34022 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727397505.498121645
	
	I0927 00:38:25.520105   34022 fix.go:216] guest clock: 1727397505.498121645
	I0927 00:38:25.520112   34022 fix.go:229] Guest: 2024-09-27 00:38:25.498121645 +0000 UTC Remote: 2024-09-27 00:38:25.406314622 +0000 UTC m=+144.706814205 (delta=91.807023ms)
	I0927 00:38:25.520126   34022 fix.go:200] guest clock delta is within tolerance: 91.807023ms
	I0927 00:38:25.520131   34022 start.go:83] releasing machines lock for "ha-631834-m03", held for 24.594409944s
	I0927 00:38:25.520153   34022 main.go:141] libmachine: (ha-631834-m03) Calling .DriverName
	I0927 00:38:25.520388   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetIP
	I0927 00:38:25.523018   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:25.523441   34022 main.go:141] libmachine: (ha-631834-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:25:39", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:38:15 +0000 UTC Type:0 Mac:52:54:00:4c:25:39 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-631834-m03 Clientid:01:52:54:00:4c:25:39}
	I0927 00:38:25.523469   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined IP address 192.168.39.92 and MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:25.525631   34022 out.go:177] * Found network options:
	I0927 00:38:25.527157   34022 out.go:177]   - NO_PROXY=192.168.39.4,192.168.39.184
	W0927 00:38:25.528442   34022 proxy.go:119] fail to check proxy env: Error ip not in block
	W0927 00:38:25.528464   34022 proxy.go:119] fail to check proxy env: Error ip not in block
	I0927 00:38:25.528477   34022 main.go:141] libmachine: (ha-631834-m03) Calling .DriverName
	I0927 00:38:25.528981   34022 main.go:141] libmachine: (ha-631834-m03) Calling .DriverName
	I0927 00:38:25.529153   34022 main.go:141] libmachine: (ha-631834-m03) Calling .DriverName
	I0927 00:38:25.529222   34022 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 00:38:25.529262   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHHostname
	W0927 00:38:25.529362   34022 proxy.go:119] fail to check proxy env: Error ip not in block
	W0927 00:38:25.529390   34022 proxy.go:119] fail to check proxy env: Error ip not in block
	I0927 00:38:25.529477   34022 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 00:38:25.529503   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHHostname
	I0927 00:38:25.532028   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:25.532225   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:25.532427   34022 main.go:141] libmachine: (ha-631834-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:25:39", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:38:15 +0000 UTC Type:0 Mac:52:54:00:4c:25:39 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-631834-m03 Clientid:01:52:54:00:4c:25:39}
	I0927 00:38:25.532453   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined IP address 192.168.39.92 and MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:25.532602   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHPort
	I0927 00:38:25.532629   34022 main.go:141] libmachine: (ha-631834-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:25:39", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:38:15 +0000 UTC Type:0 Mac:52:54:00:4c:25:39 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-631834-m03 Clientid:01:52:54:00:4c:25:39}
	I0927 00:38:25.532655   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined IP address 192.168.39.92 and MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:25.532783   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHKeyPath
	I0927 00:38:25.532794   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHPort
	I0927 00:38:25.532975   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHKeyPath
	I0927 00:38:25.532976   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHUsername
	I0927 00:38:25.533132   34022 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m03/id_rsa Username:docker}
	I0927 00:38:25.533194   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHUsername
	I0927 00:38:25.533378   34022 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m03/id_rsa Username:docker}
	I0927 00:38:25.772033   34022 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 00:38:25.777746   34022 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 00:38:25.777803   34022 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 00:38:25.795383   34022 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0927 00:38:25.795403   34022 start.go:495] detecting cgroup driver to use...
	I0927 00:38:25.795486   34022 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 00:38:25.812841   34022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 00:38:25.827240   34022 docker.go:217] disabling cri-docker service (if available) ...
	I0927 00:38:25.827295   34022 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 00:38:25.841149   34022 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 00:38:25.855688   34022 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 00:38:25.975549   34022 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 00:38:26.132600   34022 docker.go:233] disabling docker service ...
	I0927 00:38:26.132671   34022 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 00:38:26.147138   34022 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 00:38:26.160283   34022 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 00:38:26.280885   34022 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 00:38:26.397744   34022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 00:38:26.412063   34022 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 00:38:26.431067   34022 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0927 00:38:26.431183   34022 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:38:26.443586   34022 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 00:38:26.443649   34022 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:38:26.455922   34022 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:38:26.466779   34022 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:38:26.478101   34022 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 00:38:26.489198   34022 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:38:26.499613   34022 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:38:26.517900   34022 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
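The run of `sed -i` commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, default sysctls). Purely as a sketch of that pattern, here is a rough Go equivalent of the first substitution (setting pause_image); the local file name is a placeholder, and minikube itself issues the sed over SSH rather than running Go on the guest.

```go
package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Placeholder path; the log edits /etc/crio/crio.conf.d/02-crio.conf on the guest.
	path := "02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Mirrors: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
	fmt.Println("updated", path)
}
```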
	I0927 00:38:26.528412   34022 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 00:38:26.537702   34022 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0927 00:38:26.537761   34022 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0927 00:38:26.550744   34022 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 00:38:26.561809   34022 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:38:26.685216   34022 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0927 00:38:26.784033   34022 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 00:38:26.784095   34022 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 00:38:26.788971   34022 start.go:563] Will wait 60s for crictl version
	I0927 00:38:26.789022   34022 ssh_runner.go:195] Run: which crictl
	I0927 00:38:26.792579   34022 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 00:38:26.834879   34022 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0927 00:38:26.834941   34022 ssh_runner.go:195] Run: crio --version
	I0927 00:38:26.863131   34022 ssh_runner.go:195] Run: crio --version
	I0927 00:38:26.894968   34022 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0927 00:38:26.896312   34022 out.go:177]   - env NO_PROXY=192.168.39.4
	I0927 00:38:26.897668   34022 out.go:177]   - env NO_PROXY=192.168.39.4,192.168.39.184
	I0927 00:38:26.898968   34022 main.go:141] libmachine: (ha-631834-m03) Calling .GetIP
	I0927 00:38:26.901618   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:26.901952   34022 main.go:141] libmachine: (ha-631834-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:25:39", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:38:15 +0000 UTC Type:0 Mac:52:54:00:4c:25:39 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-631834-m03 Clientid:01:52:54:00:4c:25:39}
	I0927 00:38:26.901974   34022 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined IP address 192.168.39.92 and MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:38:26.902162   34022 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0927 00:38:26.906490   34022 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
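The grep/echo one-liner above rewrites /etc/hosts by dropping any stale host.minikube.internal entry and appending the current one. Below is a small Go sketch of that same read, filter, append pattern; the file path is a placeholder rather than the guest's /etc/hosts, and this is not minikube's own code.

```go
package main

import (
	"os"
	"strings"
)

func main() {
	// Placeholder path; the test rewrites /etc/hosts on the guest via sudo cp.
	path := "hosts"
	entry := "192.168.39.1\thost.minikube.internal"

	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Drop any stale line for the same hostname, keep everything else.
		if strings.HasSuffix(line, "\thost.minikube.internal") || line == "" {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	if err := os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}
```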
	I0927 00:38:26.920023   34022 mustload.go:65] Loading cluster: ha-631834
	I0927 00:38:26.920246   34022 config.go:182] Loaded profile config "ha-631834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 00:38:26.920507   34022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:38:26.920541   34022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:38:26.934985   34022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44565
	I0927 00:38:26.935403   34022 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:38:26.935900   34022 main.go:141] libmachine: Using API Version  1
	I0927 00:38:26.935918   34022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:38:26.936235   34022 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:38:26.936414   34022 main.go:141] libmachine: (ha-631834) Calling .GetState
	I0927 00:38:26.937691   34022 host.go:66] Checking if "ha-631834" exists ...
	I0927 00:38:26.938068   34022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:38:26.938115   34022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:38:26.952338   34022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38061
	I0927 00:38:26.952802   34022 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:38:26.953261   34022 main.go:141] libmachine: Using API Version  1
	I0927 00:38:26.953279   34022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:38:26.953560   34022 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:38:26.953830   34022 main.go:141] libmachine: (ha-631834) Calling .DriverName
	I0927 00:38:26.953987   34022 certs.go:68] Setting up /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834 for IP: 192.168.39.92
	I0927 00:38:26.954001   34022 certs.go:194] generating shared ca certs ...
	I0927 00:38:26.954018   34022 certs.go:226] acquiring lock for ca certs: {Name:mkdfc5b7e93f77f5ae72cc653545624244421aa5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:38:26.954172   34022 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key
	I0927 00:38:26.954225   34022 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key
	I0927 00:38:26.954237   34022 certs.go:256] generating profile certs ...
	I0927 00:38:26.954335   34022 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/client.key
	I0927 00:38:26.954364   34022 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key.a958d4ea
	I0927 00:38:26.954384   34022 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt.a958d4ea with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.4 192.168.39.184 192.168.39.92 192.168.39.254]
	I0927 00:38:27.144960   34022 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt.a958d4ea ...
	I0927 00:38:27.144988   34022 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt.a958d4ea: {Name:mk59d4f754d56457d5c6119e00c5a757fdf5824a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:38:27.145181   34022 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key.a958d4ea ...
	I0927 00:38:27.145196   34022 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key.a958d4ea: {Name:mkf2be3579ffd641dd346a6606b22a9fb2324402 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:38:27.145291   34022 certs.go:381] copying /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt.a958d4ea -> /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt
	I0927 00:38:27.145420   34022 certs.go:385] copying /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key.a958d4ea -> /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key
	I0927 00:38:27.145538   34022 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.key
	I0927 00:38:27.145552   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0927 00:38:27.145565   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0927 00:38:27.145577   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0927 00:38:27.145592   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0927 00:38:27.145605   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0927 00:38:27.145617   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0927 00:38:27.145628   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0927 00:38:27.163436   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0927 00:38:27.163551   34022 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem (1338 bytes)
	W0927 00:38:27.163586   34022 certs.go:480] ignoring /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138_empty.pem, impossibly tiny 0 bytes
	I0927 00:38:27.163596   34022 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem (1671 bytes)
	I0927 00:38:27.163623   34022 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem (1078 bytes)
	I0927 00:38:27.163645   34022 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem (1123 bytes)
	I0927 00:38:27.163668   34022 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem (1675 bytes)
	I0927 00:38:27.163704   34022 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem (1708 bytes)
	I0927 00:38:27.163738   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem -> /usr/share/ca-certificates/22138.pem
	I0927 00:38:27.163752   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> /usr/share/ca-certificates/221382.pem
	I0927 00:38:27.163764   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:38:27.163800   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:38:27.166902   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:38:27.167258   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:38:27.167285   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:38:27.167436   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:38:27.167603   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:38:27.167715   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:38:27.167869   34022 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834/id_rsa Username:docker}
	I0927 00:38:27.247589   34022 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0927 00:38:27.254078   34022 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0927 00:38:27.266588   34022 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0927 00:38:27.270741   34022 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0927 00:38:27.281840   34022 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0927 00:38:27.286146   34022 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0927 00:38:27.296457   34022 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0927 00:38:27.300347   34022 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0927 00:38:27.311070   34022 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0927 00:38:27.316218   34022 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0927 00:38:27.329482   34022 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0927 00:38:27.338454   34022 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0927 00:38:27.355258   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 00:38:27.382658   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0927 00:38:27.405893   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 00:38:27.428247   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0927 00:38:27.451705   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0927 00:38:27.476691   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0927 00:38:27.501660   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 00:38:27.524660   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0927 00:38:27.551018   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem --> /usr/share/ca-certificates/22138.pem (1338 bytes)
	I0927 00:38:27.574913   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /usr/share/ca-certificates/221382.pem (1708 bytes)
	I0927 00:38:27.597697   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 00:38:27.619996   34022 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0927 00:38:27.636789   34022 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0927 00:38:27.653361   34022 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0927 00:38:27.669541   34022 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0927 00:38:27.686266   34022 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0927 00:38:27.702940   34022 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0927 00:38:27.720590   34022 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0927 00:38:27.736937   34022 ssh_runner.go:195] Run: openssl version
	I0927 00:38:27.742470   34022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221382.pem && ln -fs /usr/share/ca-certificates/221382.pem /etc/ssl/certs/221382.pem"
	I0927 00:38:27.754273   34022 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221382.pem
	I0927 00:38:27.758795   34022 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 00:32 /usr/share/ca-certificates/221382.pem
	I0927 00:38:27.758847   34022 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221382.pem
	I0927 00:38:27.764495   34022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221382.pem /etc/ssl/certs/3ec20f2e.0"
	I0927 00:38:27.776262   34022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 00:38:27.787442   34022 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:38:27.791854   34022 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:16 /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:38:27.791891   34022 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:38:27.797397   34022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 00:38:27.808793   34022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22138.pem && ln -fs /usr/share/ca-certificates/22138.pem /etc/ssl/certs/22138.pem"
	I0927 00:38:27.819765   34022 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22138.pem
	I0927 00:38:27.823906   34022 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 00:32 /usr/share/ca-certificates/22138.pem
	I0927 00:38:27.823953   34022 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22138.pem
	I0927 00:38:27.829381   34022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22138.pem /etc/ssl/certs/51391683.0"
	I0927 00:38:27.840376   34022 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 00:38:27.844373   34022 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0927 00:38:27.844420   34022 kubeadm.go:934] updating node {m03 192.168.39.92 8443 v1.31.1 crio true true} ...
	I0927 00:38:27.844516   34022 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-631834-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.92
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-631834 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 00:38:27.844551   34022 kube-vip.go:115] generating kube-vip config ...
	I0927 00:38:27.844579   34022 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0927 00:38:27.862311   34022 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0927 00:38:27.862375   34022 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
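The manifest above is what kube-vip.go:137 reports after "generating kube-vip config". As a hedged illustration of how such a manifest can be rendered from parameters, here is a heavily reduced text/template sketch; it is not minikube's real template, and only the VIP address and port (taken from the log) are filled in.

```go
package main

import (
	"os"
	"text/template"
)

// Minimal stand-in for the values a kube-vip template would be fed.
type vipParams struct {
	Address string
	Port    string
}

const manifest = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.8.0
    args: ["manager"]
    env:
    - name: address
      value: "{{ .Address }}"
    - name: port
      value: "{{ .Port }}"
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(manifest))
	// VIP 192.168.39.254 and API server port 8443, as in the log above.
	if err := t.Execute(os.Stdout, vipParams{Address: "192.168.39.254", Port: "8443"}); err != nil {
		panic(err)
	}
}
```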
	I0927 00:38:27.862434   34022 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 00:38:27.872781   34022 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0927 00:38:27.872832   34022 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0927 00:38:27.882613   34022 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0927 00:38:27.882653   34022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 00:38:27.882614   34022 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0927 00:38:27.882718   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0927 00:38:27.882614   34022 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0927 00:38:27.882757   34022 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0927 00:38:27.882780   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0927 00:38:27.882851   34022 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0927 00:38:27.898547   34022 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0927 00:38:27.898582   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0927 00:38:27.898586   34022 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0927 00:38:27.898611   34022 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0927 00:38:27.898635   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0927 00:38:27.898671   34022 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0927 00:38:27.928975   34022 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0927 00:38:27.929019   34022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
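binary.go:74 fetches each Kubernetes binary from dl.k8s.io and verifies it against the published .sha256 file before the scp steps above. The following is a hedged, standard-library-only sketch of that download-and-verify step (URL taken from the log); it is not minikube's actual downloader, and error handling is kept minimal.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads a URL into memory.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet"
	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sumFile, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}
	// The .sha256 file holds the expected hex digest; compare against our own hash.
	want := strings.Fields(string(sumFile))[0]
	sum := sha256.Sum256(bin)
	got := hex.EncodeToString(sum[:])
	if got != want {
		panic(fmt.Sprintf("checksum mismatch: got %s want %s", got, want))
	}
	if err := os.WriteFile("kubelet", bin, 0o755); err != nil {
		panic(err)
	}
	fmt.Println("verified", base)
}
```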
	I0927 00:38:28.755845   34022 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0927 00:38:28.766166   34022 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0927 00:38:28.784929   34022 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 00:38:28.802956   34022 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0927 00:38:28.819722   34022 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0927 00:38:28.823558   34022 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 00:38:28.836368   34022 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:38:28.952315   34022 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 00:38:28.969758   34022 host.go:66] Checking if "ha-631834" exists ...
	I0927 00:38:28.970098   34022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:38:28.970147   34022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:38:28.986122   34022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36333
	I0927 00:38:28.986560   34022 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:38:28.987020   34022 main.go:141] libmachine: Using API Version  1
	I0927 00:38:28.987038   34022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:38:28.987386   34022 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:38:28.987567   34022 main.go:141] libmachine: (ha-631834) Calling .DriverName
	I0927 00:38:28.987723   34022 start.go:317] joinCluster: &{Name:ha-631834 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-631834 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.184 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.92 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 00:38:28.987854   34022 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0927 00:38:28.987874   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:38:28.991221   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:38:28.991756   34022 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:38:28.991779   34022 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:38:28.991933   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:38:28.992065   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:38:28.992196   34022 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:38:28.992330   34022 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834/id_rsa Username:docker}
	I0927 00:38:29.166799   34022 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.92 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 00:38:29.166840   34022 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token nyp4wh.a7l7uv1svmghw4iw --discovery-token-ca-cert-hash sha256:e8f2b64245b2533f2f6907f851e4fd61c8df67374e05cb5be25be73a61920f8e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-631834-m03 --control-plane --apiserver-advertise-address=192.168.39.92 --apiserver-bind-port=8443"
	I0927 00:38:50.894049   34022 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token nyp4wh.a7l7uv1svmghw4iw --discovery-token-ca-cert-hash sha256:e8f2b64245b2533f2f6907f851e4fd61c8df67374e05cb5be25be73a61920f8e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-631834-m03 --control-plane --apiserver-advertise-address=192.168.39.92 --apiserver-bind-port=8443": (21.727186901s)
	I0927 00:38:50.894086   34022 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0927 00:38:51.430363   34022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-631834-m03 minikube.k8s.io/updated_at=2024_09_27T00_38_51_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625 minikube.k8s.io/name=ha-631834 minikube.k8s.io/primary=false
	I0927 00:38:51.580467   34022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-631834-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0927 00:38:51.702639   34022 start.go:319] duration metric: took 22.714914062s to joinCluster
	I0927 00:38:51.702703   34022 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.92 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 00:38:51.703011   34022 config.go:182] Loaded profile config "ha-631834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 00:38:51.703981   34022 out.go:177] * Verifying Kubernetes components...
	I0927 00:38:51.706308   34022 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:38:51.993118   34022 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 00:38:52.039442   34022 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 00:38:52.039732   34022 kapi.go:59] client config for ha-631834: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/client.crt", KeyFile:"/home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/client.key", CAFile:"/home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f68560), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0927 00:38:52.039793   34022 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.4:8443
	I0927 00:38:52.040085   34022 node_ready.go:35] waiting up to 6m0s for node "ha-631834-m03" to be "Ready" ...
	I0927 00:38:52.040186   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:38:52.040198   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:52.040211   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:52.040218   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:52.044122   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:38:52.540842   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:38:52.540865   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:52.540875   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:52.540880   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:52.544531   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:38:53.040343   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:38:53.040364   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:53.040376   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:53.040380   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:53.043889   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:38:53.540829   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:38:53.540853   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:53.540865   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:53.540871   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:53.544102   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:38:54.040457   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:38:54.040486   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:54.040498   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:54.040508   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:54.044080   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:38:54.044692   34022 node_ready.go:53] node "ha-631834-m03" has status "Ready":"False"
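node_ready.go keeps issuing GET /api/v1/nodes/ha-631834-m03 (visible as the round_trippers lines here) until the node's Ready condition flips to True. The client-go sketch below shows the same check; the kubeconfig path is a placeholder and the polling interval is an assumption, not the test's real cadence.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder kubeconfig path; the test loads its own profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-631834-m03", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		if nodeReady(node) {
			fmt.Println("node is Ready")
			return
		}
		fmt.Println("node not Ready yet, retrying")
		time.Sleep(500 * time.Millisecond) // assumed interval, for illustration only
	}
}
```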
	I0927 00:38:54.540544   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:38:54.540565   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:54.540577   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:54.540583   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:54.544108   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:38:55.040995   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:38:55.041014   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:55.041022   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:55.041026   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:55.044186   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:38:55.541131   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:38:55.541149   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:55.541155   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:55.541159   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:55.544421   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:38:56.040678   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:38:56.040699   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:56.040717   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:56.040724   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:56.044252   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:38:56.044964   34022 node_ready.go:53] node "ha-631834-m03" has status "Ready":"False"
	I0927 00:38:56.540268   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:38:56.540298   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:56.540320   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:56.540326   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:56.544327   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:38:57.041238   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:38:57.041258   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:57.041266   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:57.041270   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:57.044588   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:38:57.541127   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:38:57.541150   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:57.541158   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:57.541162   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:57.545682   34022 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 00:38:58.040341   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:38:58.040358   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:58.040365   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:58.040370   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:58.044102   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:38:58.541229   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:38:58.541250   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:58.541260   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:58.541266   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:58.545253   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:38:58.545941   34022 node_ready.go:53] node "ha-631834-m03" has status "Ready":"False"
	I0927 00:38:59.040786   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:38:59.040810   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:59.040821   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:59.040826   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:59.044532   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:38:59.540476   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:38:59.540500   34022 round_trippers.go:469] Request Headers:
	I0927 00:38:59.540512   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:38:59.540518   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:38:59.546237   34022 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0927 00:39:00.040296   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:00.040324   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:00.040333   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:00.040340   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:00.043125   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:39:00.541170   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:00.541190   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:00.541199   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:00.541204   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:00.544199   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:39:01.041077   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:01.041108   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:01.041120   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:01.041128   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:01.044323   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:01.044952   34022 node_ready.go:53] node "ha-631834-m03" has status "Ready":"False"
	I0927 00:39:01.540257   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:01.540278   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:01.540286   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:01.540290   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:01.543567   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:02.040508   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:02.040527   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:02.040534   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:02.040538   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:02.043399   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:39:02.540909   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:02.540930   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:02.540940   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:02.540944   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:02.544479   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:03.040484   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:03.040506   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:03.040516   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:03.040524   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:03.043891   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:03.540961   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:03.540985   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:03.540998   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:03.541004   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:03.544529   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:03.545350   34022 node_ready.go:53] node "ha-631834-m03" has status "Ready":"False"
	I0927 00:39:04.041102   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:04.041123   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:04.041131   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:04.041135   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:04.046364   34022 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0927 00:39:04.541106   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:04.541126   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:04.541134   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:04.541143   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:04.546084   34022 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 00:39:05.040284   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:05.040305   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:05.040316   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:05.040321   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:05.044656   34022 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 00:39:05.540520   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:05.540541   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:05.540549   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:05.540553   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:05.543933   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:06.040933   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:06.040960   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:06.040968   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:06.040972   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:06.044262   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:06.045234   34022 node_ready.go:53] node "ha-631834-m03" has status "Ready":"False"
	I0927 00:39:06.540620   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:06.540642   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:06.540650   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:06.540655   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:06.543993   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:07.040742   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:07.040762   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:07.040769   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:07.040773   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:07.044207   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:07.541217   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:07.541238   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:07.541246   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:07.541250   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:07.544549   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:08.040522   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:08.040543   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:08.040551   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:08.040555   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:08.044379   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:08.540580   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:08.540599   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:08.540610   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:08.540614   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:08.543564   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:39:08.544141   34022 node_ready.go:53] node "ha-631834-m03" has status "Ready":"False"
	I0927 00:39:09.041048   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:09.041080   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:09.041090   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:09.041096   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:09.044654   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:09.540899   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:09.540923   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:09.540933   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:09.540937   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:09.544281   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:10.040837   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:10.040856   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:10.040864   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:10.040868   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:10.044767   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:10.540532   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:10.540551   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:10.540558   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:10.540560   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:10.543816   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:10.544420   34022 node_ready.go:53] node "ha-631834-m03" has status "Ready":"False"
	I0927 00:39:11.041033   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:11.041053   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:11.041062   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:11.041066   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:11.044226   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:11.044735   34022 node_ready.go:49] node "ha-631834-m03" has status "Ready":"True"
	I0927 00:39:11.044751   34022 node_ready.go:38] duration metric: took 19.004641333s for node "ha-631834-m03" to be "Ready" ...
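
The roughly 500ms GET loop above is the node_ready wait: it keeps fetching the node object until its Ready condition reports True, which took about 19 seconds here. A rough client-go equivalent, assuming a kubeconfig at the default location and the same node name, might look like the sketch below; it is an illustration of the pattern, not the node_ready.go implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 500ms, up to 6 minutes, mirroring the cadence in the log.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			n, err := cs.CoreV1().Nodes().Get(ctx, "ha-631834-m03", metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling on transient errors
			}
			return nodeReady(n), nil
		})
	fmt.Println("node wait finished, err:", err)
}
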
	I0927 00:39:11.044759   34022 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 00:39:11.044826   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods
	I0927 00:39:11.044836   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:11.044843   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:11.044847   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:11.050350   34022 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0927 00:39:11.057101   34022 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-479dv" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:11.057173   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-479dv
	I0927 00:39:11.057179   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:11.057186   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:11.057192   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:11.059921   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:39:11.060545   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:39:11.060562   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:11.060568   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:11.060571   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:11.063003   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:39:11.063383   34022 pod_ready.go:93] pod "coredns-7c65d6cfc9-479dv" in "kube-system" namespace has status "Ready":"True"
	I0927 00:39:11.063397   34022 pod_ready.go:82] duration metric: took 6.275685ms for pod "coredns-7c65d6cfc9-479dv" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:11.063405   34022 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kg8kf" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:11.063458   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-kg8kf
	I0927 00:39:11.063466   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:11.063472   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:11.063477   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:11.065828   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:39:11.066447   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:39:11.066464   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:11.066475   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:11.066480   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:11.068743   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:39:11.069387   34022 pod_ready.go:93] pod "coredns-7c65d6cfc9-kg8kf" in "kube-system" namespace has status "Ready":"True"
	I0927 00:39:11.069408   34022 pod_ready.go:82] duration metric: took 5.996652ms for pod "coredns-7c65d6cfc9-kg8kf" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:11.069420   34022 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-631834" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:11.069482   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/etcd-ha-631834
	I0927 00:39:11.069493   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:11.069502   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:11.069510   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:11.071542   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:39:11.072035   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:39:11.072047   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:11.072054   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:11.072059   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:11.074524   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:39:11.075087   34022 pod_ready.go:93] pod "etcd-ha-631834" in "kube-system" namespace has status "Ready":"True"
	I0927 00:39:11.075106   34022 pod_ready.go:82] duration metric: took 5.678675ms for pod "etcd-ha-631834" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:11.075115   34022 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-631834-m02" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:11.075158   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/etcd-ha-631834-m02
	I0927 00:39:11.075166   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:11.075172   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:11.075177   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:11.077457   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:39:11.078140   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:39:11.078155   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:11.078162   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:11.078166   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:11.080308   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:39:11.080796   34022 pod_ready.go:93] pod "etcd-ha-631834-m02" in "kube-system" namespace has status "Ready":"True"
	I0927 00:39:11.080816   34022 pod_ready.go:82] duration metric: took 5.694556ms for pod "etcd-ha-631834-m02" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:11.080827   34022 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-631834-m03" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:11.241112   34022 request.go:632] Waited for 160.229406ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/etcd-ha-631834-m03
	I0927 00:39:11.241190   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/etcd-ha-631834-m03
	I0927 00:39:11.241202   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:11.241213   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:11.241221   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:11.244515   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:11.441468   34022 request.go:632] Waited for 196.217118ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:11.441557   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:11.441564   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:11.441575   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:11.441580   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:11.445651   34022 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 00:39:11.446311   34022 pod_ready.go:93] pod "etcd-ha-631834-m03" in "kube-system" namespace has status "Ready":"True"
	I0927 00:39:11.446338   34022 pod_ready.go:82] duration metric: took 365.498163ms for pod "etcd-ha-631834-m03" in "kube-system" namespace to be "Ready" ...
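
The request.go:632 "Waited for ... due to client-side throttling" entries that start appearing here come from client-go's built-in token-bucket rate limiter, not from API Priority and Fairness on the server. The rest.Config dumped earlier has QPS:0 and Burst:0, so client-go falls back to its usual defaults (commonly 5 QPS with a burst of 10), and the back-to-back pod and node GETs get queued for a couple hundred milliseconds each. If that throttling were unwanted, raising the limits is a client-side config change; a small sketch with arbitrary values:

package sketch

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// clientWithHigherLimits copies cfg and loosens the client-side rate limiter.
func clientWithHigherLimits(cfg *rest.Config) (*kubernetes.Clientset, error) {
	c := rest.CopyConfig(cfg)
	c.QPS = 50    // allow ~50 requests/second sustained
	c.Burst = 100 // and bursts of up to 100 before throttling kicks in
	return kubernetes.NewForConfig(c)
}
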
	I0927 00:39:11.446361   34022 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-631834" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:11.641363   34022 request.go:632] Waited for 194.923565ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-631834
	I0927 00:39:11.641498   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-631834
	I0927 00:39:11.641520   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:11.641531   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:11.641539   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:11.646049   34022 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 00:39:11.841994   34022 request.go:632] Waited for 195.392366ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:39:11.842046   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:39:11.842053   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:11.842060   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:11.842064   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:11.845122   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:11.845566   34022 pod_ready.go:93] pod "kube-apiserver-ha-631834" in "kube-system" namespace has status "Ready":"True"
	I0927 00:39:11.845583   34022 pod_ready.go:82] duration metric: took 399.214359ms for pod "kube-apiserver-ha-631834" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:11.845596   34022 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-631834-m02" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:12.041393   34022 request.go:632] Waited for 195.729881ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-631834-m02
	I0927 00:39:12.041458   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-631834-m02
	I0927 00:39:12.041466   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:12.041478   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:12.041488   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:12.044854   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:12.241780   34022 request.go:632] Waited for 196.198597ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:39:12.241855   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:39:12.241862   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:12.241870   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:12.241880   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:12.245475   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:12.246124   34022 pod_ready.go:93] pod "kube-apiserver-ha-631834-m02" in "kube-system" namespace has status "Ready":"True"
	I0927 00:39:12.246146   34022 pod_ready.go:82] duration metric: took 400.543035ms for pod "kube-apiserver-ha-631834-m02" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:12.246162   34022 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-631834-m03" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:12.441106   34022 request.go:632] Waited for 194.872848ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-631834-m03
	I0927 00:39:12.441163   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-631834-m03
	I0927 00:39:12.441169   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:12.441177   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:12.441181   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:12.444679   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:12.641949   34022 request.go:632] Waited for 196.340732ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:12.642006   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:12.642011   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:12.642019   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:12.642026   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:12.645583   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:12.646336   34022 pod_ready.go:93] pod "kube-apiserver-ha-631834-m03" in "kube-system" namespace has status "Ready":"True"
	I0927 00:39:12.646359   34022 pod_ready.go:82] duration metric: took 400.189129ms for pod "kube-apiserver-ha-631834-m03" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:12.646371   34022 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-631834" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:12.841500   34022 request.go:632] Waited for 195.047763ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-631834
	I0927 00:39:12.841554   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-631834
	I0927 00:39:12.841559   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:12.841565   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:12.841570   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:12.844885   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:13.042011   34022 request.go:632] Waited for 196.365336ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:39:13.042068   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:39:13.042075   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:13.042086   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:13.042094   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:13.045463   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:13.046083   34022 pod_ready.go:93] pod "kube-controller-manager-ha-631834" in "kube-system" namespace has status "Ready":"True"
	I0927 00:39:13.046099   34022 pod_ready.go:82] duration metric: took 399.717332ms for pod "kube-controller-manager-ha-631834" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:13.046117   34022 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-631834-m02" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:13.241273   34022 request.go:632] Waited for 195.079725ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-631834-m02
	I0927 00:39:13.241342   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-631834-m02
	I0927 00:39:13.241350   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:13.241360   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:13.241371   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:13.244557   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:13.441283   34022 request.go:632] Waited for 196.073724ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:39:13.441336   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:39:13.441342   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:13.441348   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:13.441353   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:13.444943   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:13.445609   34022 pod_ready.go:93] pod "kube-controller-manager-ha-631834-m02" in "kube-system" namespace has status "Ready":"True"
	I0927 00:39:13.445625   34022 pod_ready.go:82] duration metric: took 399.502321ms for pod "kube-controller-manager-ha-631834-m02" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:13.445635   34022 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-631834-m03" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:13.641730   34022 request.go:632] Waited for 196.022446ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-631834-m03
	I0927 00:39:13.641795   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-631834-m03
	I0927 00:39:13.641804   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:13.641816   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:13.641825   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:13.645301   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:13.841195   34022 request.go:632] Waited for 195.27161ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:13.841276   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:13.841286   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:13.841298   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:13.841306   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:13.844228   34022 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 00:39:13.844820   34022 pod_ready.go:93] pod "kube-controller-manager-ha-631834-m03" in "kube-system" namespace has status "Ready":"True"
	I0927 00:39:13.844837   34022 pod_ready.go:82] duration metric: took 399.196459ms for pod "kube-controller-manager-ha-631834-m03" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:13.844849   34022 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-22lcj" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:14.041259   34022 request.go:632] Waited for 196.353447ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-proxy-22lcj
	I0927 00:39:14.041346   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-proxy-22lcj
	I0927 00:39:14.041361   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:14.041372   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:14.041381   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:14.044594   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:14.241701   34022 request.go:632] Waited for 196.342418ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:14.241756   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:14.241771   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:14.241779   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:14.241786   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:14.244937   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:14.245574   34022 pod_ready.go:93] pod "kube-proxy-22lcj" in "kube-system" namespace has status "Ready":"True"
	I0927 00:39:14.245593   34022 pod_ready.go:82] duration metric: took 400.737693ms for pod "kube-proxy-22lcj" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:14.245602   34022 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7n244" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:14.441662   34022 request.go:632] Waited for 195.987258ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7n244
	I0927 00:39:14.441711   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7n244
	I0927 00:39:14.441717   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:14.441723   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:14.441727   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:14.444886   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:14.642030   34022 request.go:632] Waited for 196.372014ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:39:14.642111   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:39:14.642118   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:14.642125   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:14.642129   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:14.645645   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:14.646260   34022 pod_ready.go:93] pod "kube-proxy-7n244" in "kube-system" namespace has status "Ready":"True"
	I0927 00:39:14.646278   34022 pod_ready.go:82] duration metric: took 400.670776ms for pod "kube-proxy-7n244" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:14.646288   34022 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-x2hvh" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:14.841368   34022 request.go:632] Waited for 195.014242ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x2hvh
	I0927 00:39:14.841454   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x2hvh
	I0927 00:39:14.841463   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:14.841470   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:14.841478   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:14.844791   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:15.041743   34022 request.go:632] Waited for 196.305022ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:39:15.041798   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:39:15.041803   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:15.041810   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:15.041816   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:15.045475   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:15.045878   34022 pod_ready.go:93] pod "kube-proxy-x2hvh" in "kube-system" namespace has status "Ready":"True"
	I0927 00:39:15.045893   34022 pod_ready.go:82] duration metric: took 399.599097ms for pod "kube-proxy-x2hvh" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:15.045902   34022 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-631834" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:15.242003   34022 request.go:632] Waited for 196.041536ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-631834
	I0927 00:39:15.242079   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-631834
	I0927 00:39:15.242093   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:15.242103   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:15.242113   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:15.246380   34022 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 00:39:15.441144   34022 request.go:632] Waited for 194.281274ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:39:15.441219   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834
	I0927 00:39:15.441224   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:15.441235   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:15.441240   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:15.444769   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:15.445492   34022 pod_ready.go:93] pod "kube-scheduler-ha-631834" in "kube-system" namespace has status "Ready":"True"
	I0927 00:39:15.445508   34022 pod_ready.go:82] duration metric: took 399.601315ms for pod "kube-scheduler-ha-631834" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:15.445517   34022 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-631834-m02" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:15.641668   34022 request.go:632] Waited for 196.083523ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-631834-m02
	I0927 00:39:15.641741   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-631834-m02
	I0927 00:39:15.641746   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:15.641753   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:15.641757   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:15.645029   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:15.841624   34022 request.go:632] Waited for 196.133411ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:39:15.841705   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m02
	I0927 00:39:15.841713   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:15.841721   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:15.841725   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:15.845075   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:15.845562   34022 pod_ready.go:93] pod "kube-scheduler-ha-631834-m02" in "kube-system" namespace has status "Ready":"True"
	I0927 00:39:15.845579   34022 pod_ready.go:82] duration metric: took 400.056155ms for pod "kube-scheduler-ha-631834-m02" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:15.845590   34022 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-631834-m03" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:16.041217   34022 request.go:632] Waited for 195.564347ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-631834-m03
	I0927 00:39:16.041293   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-631834-m03
	I0927 00:39:16.041302   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:16.041310   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:16.041316   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:16.044981   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:16.241893   34022 request.go:632] Waited for 196.354511ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:16.241965   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-631834-m03
	I0927 00:39:16.241973   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:16.241981   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:16.241990   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:16.245440   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:16.245881   34022 pod_ready.go:93] pod "kube-scheduler-ha-631834-m03" in "kube-system" namespace has status "Ready":"True"
	I0927 00:39:16.245900   34022 pod_ready.go:82] duration metric: took 400.302015ms for pod "kube-scheduler-ha-631834-m03" in "kube-system" namespace to be "Ready" ...
	I0927 00:39:16.245911   34022 pod_ready.go:39] duration metric: took 5.201141408s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
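
Each pod_ready wait above is the same pattern applied one pod at a time: fetch the pod, check its Ready condition, then cross-check the node it is scheduled on. A condensed sketch that checks every pod matching the system-critical label selectors listed above in a single pass; the helper names are made up for illustration and this is not the pod_ready.go code itself.

package sketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// allCriticalPodsReady lists kube-system pods for each selector (for example
// "k8s-app=kube-dns" or "component=etcd") and fails fast on the first pod
// that is not Ready.
func allCriticalPodsReady(ctx context.Context, cs kubernetes.Interface, selectors []string) (bool, error) {
	for _, sel := range selectors {
		pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			return false, err
		}
		for i := range pods.Items {
			if !podReady(&pods.Items[i]) {
				fmt.Println("not ready yet:", pods.Items[i].Name)
				return false, nil
			}
		}
	}
	return true, nil
}
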
	I0927 00:39:16.245931   34022 api_server.go:52] waiting for apiserver process to appear ...
	I0927 00:39:16.245980   34022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 00:39:16.264448   34022 api_server.go:72] duration metric: took 24.561705447s to wait for apiserver process to appear ...
	I0927 00:39:16.264471   34022 api_server.go:88] waiting for apiserver healthz status ...
	I0927 00:39:16.264489   34022 api_server.go:253] Checking apiserver healthz at https://192.168.39.4:8443/healthz ...
	I0927 00:39:16.270998   34022 api_server.go:279] https://192.168.39.4:8443/healthz returned 200:
	ok
	I0927 00:39:16.271071   34022 round_trippers.go:463] GET https://192.168.39.4:8443/version
	I0927 00:39:16.271077   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:16.271087   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:16.271098   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:16.272010   34022 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0927 00:39:16.272079   34022 api_server.go:141] control plane version: v1.31.1
	I0927 00:39:16.272094   34022 api_server.go:131] duration metric: took 7.617636ms to wait for apiserver health ...
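
The health and version probes above are plain GETs against /healthz (expected body "ok") and /version on the control-plane endpoint. Done through client-go rather than raw HTTP, a sketch assuming a kubeconfig at the default location:

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// GET /healthz returns the literal body "ok" when the apiserver is healthy.
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").Do(context.Background()).Raw()
	fmt.Printf("healthz: %q err: %v\n", body, err)

	// GET /version reports the control plane version (v1.31.1 in the run above).
	if v, err := cs.Discovery().ServerVersion(); err == nil {
		fmt.Println("control plane version:", v.GitVersion)
	}
}
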
	I0927 00:39:16.272101   34022 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 00:39:16.441376   34022 request.go:632] Waited for 169.205133ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods
	I0927 00:39:16.441450   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods
	I0927 00:39:16.441459   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:16.441467   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:16.441472   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:16.447163   34022 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0927 00:39:16.454723   34022 system_pods.go:59] 24 kube-system pods found
	I0927 00:39:16.454748   34022 system_pods.go:61] "coredns-7c65d6cfc9-479dv" [ee318b64-2274-4106-93ed-9f62151107f1] Running
	I0927 00:39:16.454753   34022 system_pods.go:61] "coredns-7c65d6cfc9-kg8kf" [ee98faac-e03c-427f-9a78-2cf06d2f85cf] Running
	I0927 00:39:16.454757   34022 system_pods.go:61] "etcd-ha-631834" [b8f1f451-d21c-4424-876e-7bd03381c7be] Running
	I0927 00:39:16.454760   34022 system_pods.go:61] "etcd-ha-631834-m02" [940292d8-f09a-4baa-9689-2099794ed736] Running
	I0927 00:39:16.454763   34022 system_pods.go:61] "etcd-ha-631834-m03" [f0a5e835-8705-4555-8b6b-0c7147d76543] Running
	I0927 00:39:16.454767   34022 system_pods.go:61] "kindnet-l6ncl" [3861149b-7c67-4d48-9d24-8fa08aefda61] Running
	I0927 00:39:16.454770   34022 system_pods.go:61] "kindnet-r2qxd" [68a590ef-4e98-409e-8ce3-4d4e3f14ccc1] Running
	I0927 00:39:16.454773   34022 system_pods.go:61] "kindnet-x7kr9" [a4f57dcf-a410-46e7-a539-0ad5f9fb2baf] Running
	I0927 00:39:16.454776   34022 system_pods.go:61] "kube-apiserver-ha-631834" [365182f9-e6fd-40f4-8f9f-a46de26a61d8] Running
	I0927 00:39:16.454779   34022 system_pods.go:61] "kube-apiserver-ha-631834-m02" [bc22191d-9799-4639-8ff2-3fdb3ae97be3] Running
	I0927 00:39:16.454782   34022 system_pods.go:61] "kube-apiserver-ha-631834-m03" [b5978123-4be5-4547-9f7a-17471dd88209] Running
	I0927 00:39:16.454786   34022 system_pods.go:61] "kube-controller-manager-ha-631834" [4b0a02b1-60a5-45bc-b9a0-dd5a0346da3d] Running
	I0927 00:39:16.454790   34022 system_pods.go:61] "kube-controller-manager-ha-631834-m02" [22f26e4f-f220-4682-ba5c-e3131880aab4] Running
	I0927 00:39:16.454793   34022 system_pods.go:61] "kube-controller-manager-ha-631834-m03" [ff5ac84f-5b97-45f7-8bc4-0def81f1a9de] Running
	I0927 00:39:16.454797   34022 system_pods.go:61] "kube-proxy-22lcj" [0bd00be4-643a-41b0-ba0b-3a13f95a3b45] Running
	I0927 00:39:16.454800   34022 system_pods.go:61] "kube-proxy-7n244" [d9fac118-1b31-4cf3-bc21-a4536e45a511] Running
	I0927 00:39:16.454804   34022 system_pods.go:61] "kube-proxy-x2hvh" [81ada94c-89b8-4815-92e9-58edd00ef64f] Running
	I0927 00:39:16.454807   34022 system_pods.go:61] "kube-scheduler-ha-631834" [9e0b9052-8574-406b-987f-2ef799f40533] Running
	I0927 00:39:16.454810   34022 system_pods.go:61] "kube-scheduler-ha-631834-m02" [7952ee5f-18be-4863-a13a-39c4ee7acf29] Running
	I0927 00:39:16.454813   34022 system_pods.go:61] "kube-scheduler-ha-631834-m03" [48ea6dc3-fa35-4c78-8f49-f6cc2797f433] Running
	I0927 00:39:16.454816   34022 system_pods.go:61] "kube-vip-ha-631834" [58aa0bcf-1f78-4ee9-8a7b-18afaf6a634c] Running
	I0927 00:39:16.454819   34022 system_pods.go:61] "kube-vip-ha-631834-m02" [75b23ac9-b5e5-4a90-b5ef-951dd52c1752] Running
	I0927 00:39:16.454822   34022 system_pods.go:61] "kube-vip-ha-631834-m03" [0ffe3c65-482c-49ce-a209-94414f2958b5] Running
	I0927 00:39:16.454828   34022 system_pods.go:61] "storage-provisioner" [dbafe551-2645-4016-83f6-1133824d926d] Running
	I0927 00:39:16.454833   34022 system_pods.go:74] duration metric: took 182.725605ms to wait for pod list to return data ...
	I0927 00:39:16.454840   34022 default_sa.go:34] waiting for default service account to be created ...
	I0927 00:39:16.641200   34022 request.go:632] Waited for 186.296503ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/default/serviceaccounts
	I0927 00:39:16.641254   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/default/serviceaccounts
	I0927 00:39:16.641261   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:16.641270   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:16.641279   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:16.644742   34022 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 00:39:16.644853   34022 default_sa.go:45] found service account: "default"
	I0927 00:39:16.644867   34022 default_sa.go:55] duration metric: took 190.018813ms for default service account to be created ...
	I0927 00:39:16.644874   34022 system_pods.go:116] waiting for k8s-apps to be running ...
	I0927 00:39:16.841127   34022 request.go:632] Waited for 196.190225ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods
	I0927 00:39:16.841217   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods
	I0927 00:39:16.841226   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:16.841234   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:16.841242   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:16.846111   34022 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 00:39:16.853202   34022 system_pods.go:86] 24 kube-system pods found
	I0927 00:39:16.853229   34022 system_pods.go:89] "coredns-7c65d6cfc9-479dv" [ee318b64-2274-4106-93ed-9f62151107f1] Running
	I0927 00:39:16.853235   34022 system_pods.go:89] "coredns-7c65d6cfc9-kg8kf" [ee98faac-e03c-427f-9a78-2cf06d2f85cf] Running
	I0927 00:39:16.853239   34022 system_pods.go:89] "etcd-ha-631834" [b8f1f451-d21c-4424-876e-7bd03381c7be] Running
	I0927 00:39:16.853243   34022 system_pods.go:89] "etcd-ha-631834-m02" [940292d8-f09a-4baa-9689-2099794ed736] Running
	I0927 00:39:16.853246   34022 system_pods.go:89] "etcd-ha-631834-m03" [f0a5e835-8705-4555-8b6b-0c7147d76543] Running
	I0927 00:39:16.853249   34022 system_pods.go:89] "kindnet-l6ncl" [3861149b-7c67-4d48-9d24-8fa08aefda61] Running
	I0927 00:39:16.853253   34022 system_pods.go:89] "kindnet-r2qxd" [68a590ef-4e98-409e-8ce3-4d4e3f14ccc1] Running
	I0927 00:39:16.853256   34022 system_pods.go:89] "kindnet-x7kr9" [a4f57dcf-a410-46e7-a539-0ad5f9fb2baf] Running
	I0927 00:39:16.853260   34022 system_pods.go:89] "kube-apiserver-ha-631834" [365182f9-e6fd-40f4-8f9f-a46de26a61d8] Running
	I0927 00:39:16.853263   34022 system_pods.go:89] "kube-apiserver-ha-631834-m02" [bc22191d-9799-4639-8ff2-3fdb3ae97be3] Running
	I0927 00:39:16.853266   34022 system_pods.go:89] "kube-apiserver-ha-631834-m03" [b5978123-4be5-4547-9f7a-17471dd88209] Running
	I0927 00:39:16.853269   34022 system_pods.go:89] "kube-controller-manager-ha-631834" [4b0a02b1-60a5-45bc-b9a0-dd5a0346da3d] Running
	I0927 00:39:16.853273   34022 system_pods.go:89] "kube-controller-manager-ha-631834-m02" [22f26e4f-f220-4682-ba5c-e3131880aab4] Running
	I0927 00:39:16.853276   34022 system_pods.go:89] "kube-controller-manager-ha-631834-m03" [ff5ac84f-5b97-45f7-8bc4-0def81f1a9de] Running
	I0927 00:39:16.853280   34022 system_pods.go:89] "kube-proxy-22lcj" [0bd00be4-643a-41b0-ba0b-3a13f95a3b45] Running
	I0927 00:39:16.853285   34022 system_pods.go:89] "kube-proxy-7n244" [d9fac118-1b31-4cf3-bc21-a4536e45a511] Running
	I0927 00:39:16.853288   34022 system_pods.go:89] "kube-proxy-x2hvh" [81ada94c-89b8-4815-92e9-58edd00ef64f] Running
	I0927 00:39:16.853291   34022 system_pods.go:89] "kube-scheduler-ha-631834" [9e0b9052-8574-406b-987f-2ef799f40533] Running
	I0927 00:39:16.853297   34022 system_pods.go:89] "kube-scheduler-ha-631834-m02" [7952ee5f-18be-4863-a13a-39c4ee7acf29] Running
	I0927 00:39:16.853302   34022 system_pods.go:89] "kube-scheduler-ha-631834-m03" [48ea6dc3-fa35-4c78-8f49-f6cc2797f433] Running
	I0927 00:39:16.853305   34022 system_pods.go:89] "kube-vip-ha-631834" [58aa0bcf-1f78-4ee9-8a7b-18afaf6a634c] Running
	I0927 00:39:16.853308   34022 system_pods.go:89] "kube-vip-ha-631834-m02" [75b23ac9-b5e5-4a90-b5ef-951dd52c1752] Running
	I0927 00:39:16.853311   34022 system_pods.go:89] "kube-vip-ha-631834-m03" [0ffe3c65-482c-49ce-a209-94414f2958b5] Running
	I0927 00:39:16.853314   34022 system_pods.go:89] "storage-provisioner" [dbafe551-2645-4016-83f6-1133824d926d] Running
	I0927 00:39:16.853321   34022 system_pods.go:126] duration metric: took 208.44194ms to wait for k8s-apps to be running ...
	I0927 00:39:16.853329   34022 system_svc.go:44] waiting for kubelet service to be running ....
	I0927 00:39:16.853371   34022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 00:39:16.870246   34022 system_svc.go:56] duration metric: took 16.907091ms WaitForService to wait for kubelet
	I0927 00:39:16.870275   34022 kubeadm.go:582] duration metric: took 25.167539771s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 00:39:16.870292   34022 node_conditions.go:102] verifying NodePressure condition ...
	I0927 00:39:17.041388   34022 request.go:632] Waited for 171.008016ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes
	I0927 00:39:17.041444   34022 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes
	I0927 00:39:17.041452   34022 round_trippers.go:469] Request Headers:
	I0927 00:39:17.041462   34022 round_trippers.go:473]     Accept: application/json, */*
	I0927 00:39:17.041467   34022 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 00:39:17.045727   34022 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 00:39:17.046668   34022 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 00:39:17.046684   34022 node_conditions.go:123] node cpu capacity is 2
	I0927 00:39:17.046709   34022 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 00:39:17.046713   34022 node_conditions.go:123] node cpu capacity is 2
	I0927 00:39:17.046717   34022 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 00:39:17.046720   34022 node_conditions.go:123] node cpu capacity is 2
	I0927 00:39:17.046725   34022 node_conditions.go:105] duration metric: took 176.429276ms to run NodePressure ...
	I0927 00:39:17.046735   34022 start.go:241] waiting for startup goroutines ...
	I0927 00:39:17.046755   34022 start.go:255] writing updated cluster config ...
	I0927 00:39:17.047027   34022 ssh_runner.go:195] Run: rm -f paused
	I0927 00:39:17.097240   34022 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0927 00:39:17.099385   34022 out.go:177] * Done! kubectl is now configured to use "ha-631834" cluster and "default" namespace by default
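	The start log above ends once minikube has confirmed that every kube-system pod is Running, the default service account exists, the kubelet service is active, and each node's CPU and ephemeral-storage capacity can be read for the NodePressure check. As a rough, hypothetical sketch (not minikube's own code), the same readiness checks can be reproduced with client-go against the kubeconfig that the "Done!" message refers to; the kubeconfig path, error handling, and output format below are illustrative assumptions:

	// readiness_check.go: illustrative only; mirrors the kube-system pod list
	// and node-capacity polls shown in the log above.
	package main

	import (
		"context"
		"fmt"
		"log"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Use the default ~/.kube/config, as kubectl would after "Done!".
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}

		// Equivalent of the GET /api/v1/namespaces/kube-system/pods poll:
		// every kube-system pod should report phase Running.
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				fmt.Printf("not ready: %s is %s\n", p.Name, p.Status.Phase)
			}
		}

		// Equivalent of the GET /api/v1/nodes poll used for the NodePressure
		// check: print each node's CPU and ephemeral-storage capacity.
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, n := range nodes.Items {
			fmt.Printf("%s cpu=%s ephemeral-storage=%s\n",
				n.Name, n.Status.Capacity.Cpu().String(), n.Status.Capacity.StorageEphemeral().String())
		}
	}

	Run against the freshly started "ha-631834" cluster, this sketch should list no non-Running kube-system pods and should print the three nodes with the same cpu (2) and ephemeral-storage (17734596Ki) capacities recorded in the log above.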
	
	
	==> CRI-O <==
	Sep 27 00:43:08 ha-631834 crio[661]: time="2024-09-27 00:43:08.045693321Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4c1b620b-7e8a-4649-abeb-dd7a1b3d5ecb name=/runtime.v1.RuntimeService/Version
	Sep 27 00:43:08 ha-631834 crio[661]: time="2024-09-27 00:43:08.047009563Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=de33564d-f67d-4f26-96c9-29fe7ce5c6c8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:43:08 ha-631834 crio[661]: time="2024-09-27 00:43:08.047656413Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397788047628477,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=de33564d-f67d-4f26-96c9-29fe7ce5c6c8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:43:08 ha-631834 crio[661]: time="2024-09-27 00:43:08.048177218Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=01e847d7-dae9-4abb-a3cb-2743f6bd95d9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:43:08 ha-631834 crio[661]: time="2024-09-27 00:43:08.048276700Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=01e847d7-dae9-4abb-a3cb-2743f6bd95d9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:43:08 ha-631834 crio[661]: time="2024-09-27 00:43:08.048496959Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:74dc20e31bc6d7c20e5d68ee7fa69cfe0328a93ccef047ea1ef82155869ad406,PodSandboxId:ebc71356fe8860c5eadadc4bfc35fe223c81b382b7fa4f7400dfdd4e30cca8e9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727397561973673539,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hczmj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 55e4dd58-9193-49ba-a2e8-1c6835898fb1,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c06ebd9099a79e7ccf81acb3dcdfa061f142b4657de196fa50e568e5b299930,PodSandboxId:8f236d02ca028f9009a4efcc28e0562a8b0e8ec154921e53c93e5a527823c39a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727397416531750974,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kg8kf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee98faac-e03c-427f-9a78-2cf06d2f85cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0d4e929a59caa5d6cdfb939587ec81dce00105e7b9350778204b299cf597427,PodSandboxId:2cb3143c36c8e5612e26df2355c120393a34014b84051ee13e5f0f641240ed61,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727397416548806637,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-479dv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
ee318b64-2274-4106-93ed-9f62151107f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9f2637b4124e6d3087dd4a694ebb58286309afd46d561d6051eaaf6ba88126a,PodSandboxId:399bb953593cc2b3743577abae1f7410c1d14dc409256b74dd104c335e4a19a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1727397416493017043,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbafe551-2645-4016-83f6-1133824d926d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:805b55d391308302ebc0884d741fd7ca86ffe2f6feed8bf7ab229f3729f34327,PodSandboxId:7e2d35a1098a1e498cdf730b14a6d4f456431c09085148024bcec56931467462,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17273974
04353382193,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-l6ncl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3861149b-7c67-4d48-9d24-8fa08aefda61,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:182f24ac501b715adc06f080914c11407429e052bc7a726892761dd0a2d3a8e9,PodSandboxId:c0f5b32248925e239a327ed4b6dc2a3da7f10accded478a3ce22050a8fe332d8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727397404131622207,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7n244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9fac118-1b31-4cf3-bc21-a4536e45a511,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:555c7e8f6d5181676711d15bda6aa11fd8d84d9fff0f6e98280c72d5296aefad,PodSandboxId:710e2b00db1780a3cb652fad6898ecff25d5f37f052ba6e0438aa39b3ff2ada9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727397395791349240,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3f83edb960a7290e67f3d1729807ccd,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c88792788fc238aaae860e14a6c44c40020da3356d29223917fe2fb2e8901ac,PodSandboxId:74609d9fcf5f5f8d3b57d4290bf525ef816e716d1438ea25df07d7a697e2bb1a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727397392427437868,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10057dece9752ed428ddf4bfd465bb3d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:536c1c26f6d72525b81ce4c35ed530528a8cd001f4c530cea2e1d722325e76b3,PodSandboxId:de8c10edafaa7ba5a57a5150b492fa19b6a95a38b8f3da7e2385b723a1d4f907,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727397392442661616,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a32cc8b63ea212ed38709daf6762cc1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa717868fa66e6c86747ecfb1ac580a98666975a9c6974d3a1037451ff37576e,PodSandboxId:4a215208b0ed2928db08b226477bc8cf664180903da62b51aaf986d8c212336c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727397392387673966,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71a28d11a5db44bbf2777b262efa1514,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dcaba50a39a2f812258d986d3444002c5a887ee474104a98a69129c21ec40db,PodSandboxId:8e73f2182b892b451dcd1c013adf2711f2f406765703f34eb3d44a64d29e882b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727397392278746359,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-631834,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afee14d1206143c4d719c111467c379b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=01e847d7-dae9-4abb-a3cb-2743f6bd95d9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:43:08 ha-631834 crio[661]: time="2024-09-27 00:43:08.084572621Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=abc79b88-020e-4bb1-82d5-2dd8952553be name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 27 00:43:08 ha-631834 crio[661]: time="2024-09-27 00:43:08.084836731Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:ebc71356fe8860c5eadadc4bfc35fe223c81b382b7fa4f7400dfdd4e30cca8e9,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-hczmj,Uid:55e4dd58-9193-49ba-a2e8-1c6835898fb1,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727397558330820881,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-hczmj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 55e4dd58-9193-49ba-a2e8-1c6835898fb1,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-27T00:39:18.015402395Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2cb3143c36c8e5612e26df2355c120393a34014b84051ee13e5f0f641240ed61,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-479dv,Uid:ee318b64-2274-4106-93ed-9f62151107f1,Namespace:kube-system,Attempt:0,},State:S
ANDBOX_READY,CreatedAt:1727397416284003471,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-479dv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee318b64-2274-4106-93ed-9f62151107f1,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-27T00:36:55.971385863Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:399bb953593cc2b3743577abae1f7410c1d14dc409256b74dd104c335e4a19a3,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:dbafe551-2645-4016-83f6-1133824d926d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727397416280773776,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbafe551-2645-4016-83f6-1133824d926d,},Annotations:map[string]string{kubec
tl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-27T00:36:55.969309352Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8f236d02ca028f9009a4efcc28e0562a8b0e8ec154921e53c93e5a527823c39a,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-kg8kf,Uid:ee98faac-e03c-427f-9a78-2cf06d2f85cf,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1727397416265889136,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-kg8kf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee98faac-e03c-427f-9a78-2cf06d2f85cf,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-27T00:36:55.959296032Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7e2d35a1098a1e498cdf730b14a6d4f456431c09085148024bcec56931467462,Metadata:&PodSandboxMetadata{Name:kindnet-l6ncl,Uid:3861149b-7c67-4d48-9d24-8fa08aefda61,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727397403804322011,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-l6ncl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3861149b-7c67-4d48-9d24-8fa08aefda61,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotati
ons:map[string]string{kubernetes.io/config.seen: 2024-09-27T00:36:43.462190063Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c0f5b32248925e239a327ed4b6dc2a3da7f10accded478a3ce22050a8fe332d8,Metadata:&PodSandboxMetadata{Name:kube-proxy-7n244,Uid:d9fac118-1b31-4cf3-bc21-a4536e45a511,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727397403803732849,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-7n244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9fac118-1b31-4cf3-bc21-a4536e45a511,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-27T00:36:43.473610313Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:de8c10edafaa7ba5a57a5150b492fa19b6a95a38b8f3da7e2385b723a1d4f907,Metadata:&PodSandboxMetadata{Name:etcd-ha-631834,Uid:2a32cc8b63ea212ed38709daf6762cc1,Namespace:kube-system,Attempt:0,},State:S
ANDBOX_READY,CreatedAt:1727397392159704302,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a32cc8b63ea212ed38709daf6762cc1,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.4:2379,kubernetes.io/config.hash: 2a32cc8b63ea212ed38709daf6762cc1,kubernetes.io/config.seen: 2024-09-27T00:36:31.631709370Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4a215208b0ed2928db08b226477bc8cf664180903da62b51aaf986d8c212336c,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-631834,Uid:71a28d11a5db44bbf2777b262efa1514,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727397392156637222,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kuber
netes.pod.uid: 71a28d11a5db44bbf2777b262efa1514,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 71a28d11a5db44bbf2777b262efa1514,kubernetes.io/config.seen: 2024-09-27T00:36:31.631711688Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:74609d9fcf5f5f8d3b57d4290bf525ef816e716d1438ea25df07d7a697e2bb1a,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-631834,Uid:10057dece9752ed428ddf4bfd465bb3d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727397392123638188,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10057dece9752ed428ddf4bfd465bb3d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 10057dece9752ed428ddf4bfd465bb3d,kubernetes.io/config.seen: 2024-09-27T00:36:31.631712772Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:710e2b00db1780a3cb652f
ad6898ecff25d5f37f052ba6e0438aa39b3ff2ada9,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-631834,Uid:e3f83edb960a7290e67f3d1729807ccd,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727397392115397331,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3f83edb960a7290e67f3d1729807ccd,},Annotations:map[string]string{kubernetes.io/config.hash: e3f83edb960a7290e67f3d1729807ccd,kubernetes.io/config.seen: 2024-09-27T00:36:31.631706084Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8e73f2182b892b451dcd1c013adf2711f2f406765703f34eb3d44a64d29e882b,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-631834,Uid:afee14d1206143c4d719c111467c379b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727397392111883552,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-631834,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: afee14d1206143c4d719c111467c379b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.4:8443,kubernetes.io/config.hash: afee14d1206143c4d719c111467c379b,kubernetes.io/config.seen: 2024-09-27T00:36:31.631710672Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=abc79b88-020e-4bb1-82d5-2dd8952553be name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 27 00:43:08 ha-631834 crio[661]: time="2024-09-27 00:43:08.085809630Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=40d202d9-81ad-40a7-851e-38d72f078686 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:43:08 ha-631834 crio[661]: time="2024-09-27 00:43:08.085896991Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=40d202d9-81ad-40a7-851e-38d72f078686 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:43:08 ha-631834 crio[661]: time="2024-09-27 00:43:08.086755489Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:74dc20e31bc6d7c20e5d68ee7fa69cfe0328a93ccef047ea1ef82155869ad406,PodSandboxId:ebc71356fe8860c5eadadc4bfc35fe223c81b382b7fa4f7400dfdd4e30cca8e9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727397561973673539,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hczmj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 55e4dd58-9193-49ba-a2e8-1c6835898fb1,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c06ebd9099a79e7ccf81acb3dcdfa061f142b4657de196fa50e568e5b299930,PodSandboxId:8f236d02ca028f9009a4efcc28e0562a8b0e8ec154921e53c93e5a527823c39a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727397416531750974,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kg8kf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee98faac-e03c-427f-9a78-2cf06d2f85cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0d4e929a59caa5d6cdfb939587ec81dce00105e7b9350778204b299cf597427,PodSandboxId:2cb3143c36c8e5612e26df2355c120393a34014b84051ee13e5f0f641240ed61,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727397416548806637,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-479dv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
ee318b64-2274-4106-93ed-9f62151107f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9f2637b4124e6d3087dd4a694ebb58286309afd46d561d6051eaaf6ba88126a,PodSandboxId:399bb953593cc2b3743577abae1f7410c1d14dc409256b74dd104c335e4a19a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1727397416493017043,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbafe551-2645-4016-83f6-1133824d926d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:805b55d391308302ebc0884d741fd7ca86ffe2f6feed8bf7ab229f3729f34327,PodSandboxId:7e2d35a1098a1e498cdf730b14a6d4f456431c09085148024bcec56931467462,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17273974
04353382193,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-l6ncl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3861149b-7c67-4d48-9d24-8fa08aefda61,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:182f24ac501b715adc06f080914c11407429e052bc7a726892761dd0a2d3a8e9,PodSandboxId:c0f5b32248925e239a327ed4b6dc2a3da7f10accded478a3ce22050a8fe332d8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727397404131622207,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7n244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9fac118-1b31-4cf3-bc21-a4536e45a511,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:555c7e8f6d5181676711d15bda6aa11fd8d84d9fff0f6e98280c72d5296aefad,PodSandboxId:710e2b00db1780a3cb652fad6898ecff25d5f37f052ba6e0438aa39b3ff2ada9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727397395791349240,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3f83edb960a7290e67f3d1729807ccd,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c88792788fc238aaae860e14a6c44c40020da3356d29223917fe2fb2e8901ac,PodSandboxId:74609d9fcf5f5f8d3b57d4290bf525ef816e716d1438ea25df07d7a697e2bb1a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727397392427437868,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10057dece9752ed428ddf4bfd465bb3d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:536c1c26f6d72525b81ce4c35ed530528a8cd001f4c530cea2e1d722325e76b3,PodSandboxId:de8c10edafaa7ba5a57a5150b492fa19b6a95a38b8f3da7e2385b723a1d4f907,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727397392442661616,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a32cc8b63ea212ed38709daf6762cc1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa717868fa66e6c86747ecfb1ac580a98666975a9c6974d3a1037451ff37576e,PodSandboxId:4a215208b0ed2928db08b226477bc8cf664180903da62b51aaf986d8c212336c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727397392387673966,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71a28d11a5db44bbf2777b262efa1514,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dcaba50a39a2f812258d986d3444002c5a887ee474104a98a69129c21ec40db,PodSandboxId:8e73f2182b892b451dcd1c013adf2711f2f406765703f34eb3d44a64d29e882b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727397392278746359,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-631834,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afee14d1206143c4d719c111467c379b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=40d202d9-81ad-40a7-851e-38d72f078686 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:43:08 ha-631834 crio[661]: time="2024-09-27 00:43:08.096514955Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a6de3385-3c18-4ba9-8086-45fc85a8ee20 name=/runtime.v1.RuntimeService/Version
	Sep 27 00:43:08 ha-631834 crio[661]: time="2024-09-27 00:43:08.096585648Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a6de3385-3c18-4ba9-8086-45fc85a8ee20 name=/runtime.v1.RuntimeService/Version
	Sep 27 00:43:08 ha-631834 crio[661]: time="2024-09-27 00:43:08.097901535Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6f8f231f-0c77-433c-a58c-52be8f07db2d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:43:08 ha-631834 crio[661]: time="2024-09-27 00:43:08.098357329Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397788098337005,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6f8f231f-0c77-433c-a58c-52be8f07db2d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:43:08 ha-631834 crio[661]: time="2024-09-27 00:43:08.098887309Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5c392537-a515-41e1-ae4b-4bc64895f9a7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:43:08 ha-631834 crio[661]: time="2024-09-27 00:43:08.098963783Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5c392537-a515-41e1-ae4b-4bc64895f9a7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:43:08 ha-631834 crio[661]: time="2024-09-27 00:43:08.099179086Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:74dc20e31bc6d7c20e5d68ee7fa69cfe0328a93ccef047ea1ef82155869ad406,PodSandboxId:ebc71356fe8860c5eadadc4bfc35fe223c81b382b7fa4f7400dfdd4e30cca8e9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727397561973673539,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hczmj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 55e4dd58-9193-49ba-a2e8-1c6835898fb1,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c06ebd9099a79e7ccf81acb3dcdfa061f142b4657de196fa50e568e5b299930,PodSandboxId:8f236d02ca028f9009a4efcc28e0562a8b0e8ec154921e53c93e5a527823c39a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727397416531750974,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kg8kf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee98faac-e03c-427f-9a78-2cf06d2f85cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0d4e929a59caa5d6cdfb939587ec81dce00105e7b9350778204b299cf597427,PodSandboxId:2cb3143c36c8e5612e26df2355c120393a34014b84051ee13e5f0f641240ed61,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727397416548806637,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-479dv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
ee318b64-2274-4106-93ed-9f62151107f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9f2637b4124e6d3087dd4a694ebb58286309afd46d561d6051eaaf6ba88126a,PodSandboxId:399bb953593cc2b3743577abae1f7410c1d14dc409256b74dd104c335e4a19a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1727397416493017043,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbafe551-2645-4016-83f6-1133824d926d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:805b55d391308302ebc0884d741fd7ca86ffe2f6feed8bf7ab229f3729f34327,PodSandboxId:7e2d35a1098a1e498cdf730b14a6d4f456431c09085148024bcec56931467462,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17273974
04353382193,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-l6ncl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3861149b-7c67-4d48-9d24-8fa08aefda61,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:182f24ac501b715adc06f080914c11407429e052bc7a726892761dd0a2d3a8e9,PodSandboxId:c0f5b32248925e239a327ed4b6dc2a3da7f10accded478a3ce22050a8fe332d8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727397404131622207,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7n244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9fac118-1b31-4cf3-bc21-a4536e45a511,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:555c7e8f6d5181676711d15bda6aa11fd8d84d9fff0f6e98280c72d5296aefad,PodSandboxId:710e2b00db1780a3cb652fad6898ecff25d5f37f052ba6e0438aa39b3ff2ada9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727397395791349240,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3f83edb960a7290e67f3d1729807ccd,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c88792788fc238aaae860e14a6c44c40020da3356d29223917fe2fb2e8901ac,PodSandboxId:74609d9fcf5f5f8d3b57d4290bf525ef816e716d1438ea25df07d7a697e2bb1a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727397392427437868,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10057dece9752ed428ddf4bfd465bb3d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:536c1c26f6d72525b81ce4c35ed530528a8cd001f4c530cea2e1d722325e76b3,PodSandboxId:de8c10edafaa7ba5a57a5150b492fa19b6a95a38b8f3da7e2385b723a1d4f907,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727397392442661616,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a32cc8b63ea212ed38709daf6762cc1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa717868fa66e6c86747ecfb1ac580a98666975a9c6974d3a1037451ff37576e,PodSandboxId:4a215208b0ed2928db08b226477bc8cf664180903da62b51aaf986d8c212336c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727397392387673966,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71a28d11a5db44bbf2777b262efa1514,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dcaba50a39a2f812258d986d3444002c5a887ee474104a98a69129c21ec40db,PodSandboxId:8e73f2182b892b451dcd1c013adf2711f2f406765703f34eb3d44a64d29e882b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727397392278746359,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-631834,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afee14d1206143c4d719c111467c379b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5c392537-a515-41e1-ae4b-4bc64895f9a7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:43:08 ha-631834 crio[661]: time="2024-09-27 00:43:08.140627948Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=72b4a7b5-3edc-4cd6-a578-a4557be1346f name=/runtime.v1.RuntimeService/Version
	Sep 27 00:43:08 ha-631834 crio[661]: time="2024-09-27 00:43:08.140700229Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=72b4a7b5-3edc-4cd6-a578-a4557be1346f name=/runtime.v1.RuntimeService/Version
	Sep 27 00:43:08 ha-631834 crio[661]: time="2024-09-27 00:43:08.142339451Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=72f711cc-ac0f-487a-9dc6-04b30d558063 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:43:08 ha-631834 crio[661]: time="2024-09-27 00:43:08.142738119Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397788142716319,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=72f711cc-ac0f-487a-9dc6-04b30d558063 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:43:08 ha-631834 crio[661]: time="2024-09-27 00:43:08.143575662Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c2abcd7e-cb2b-48e0-9e30-10110c5a2dc4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:43:08 ha-631834 crio[661]: time="2024-09-27 00:43:08.143631261Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c2abcd7e-cb2b-48e0-9e30-10110c5a2dc4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:43:08 ha-631834 crio[661]: time="2024-09-27 00:43:08.143851938Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:74dc20e31bc6d7c20e5d68ee7fa69cfe0328a93ccef047ea1ef82155869ad406,PodSandboxId:ebc71356fe8860c5eadadc4bfc35fe223c81b382b7fa4f7400dfdd4e30cca8e9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727397561973673539,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hczmj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 55e4dd58-9193-49ba-a2e8-1c6835898fb1,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c06ebd9099a79e7ccf81acb3dcdfa061f142b4657de196fa50e568e5b299930,PodSandboxId:8f236d02ca028f9009a4efcc28e0562a8b0e8ec154921e53c93e5a527823c39a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727397416531750974,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kg8kf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee98faac-e03c-427f-9a78-2cf06d2f85cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0d4e929a59caa5d6cdfb939587ec81dce00105e7b9350778204b299cf597427,PodSandboxId:2cb3143c36c8e5612e26df2355c120393a34014b84051ee13e5f0f641240ed61,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727397416548806637,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-479dv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
ee318b64-2274-4106-93ed-9f62151107f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9f2637b4124e6d3087dd4a694ebb58286309afd46d561d6051eaaf6ba88126a,PodSandboxId:399bb953593cc2b3743577abae1f7410c1d14dc409256b74dd104c335e4a19a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1727397416493017043,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbafe551-2645-4016-83f6-1133824d926d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:805b55d391308302ebc0884d741fd7ca86ffe2f6feed8bf7ab229f3729f34327,PodSandboxId:7e2d35a1098a1e498cdf730b14a6d4f456431c09085148024bcec56931467462,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17273974
04353382193,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-l6ncl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3861149b-7c67-4d48-9d24-8fa08aefda61,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:182f24ac501b715adc06f080914c11407429e052bc7a726892761dd0a2d3a8e9,PodSandboxId:c0f5b32248925e239a327ed4b6dc2a3da7f10accded478a3ce22050a8fe332d8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727397404131622207,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7n244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9fac118-1b31-4cf3-bc21-a4536e45a511,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:555c7e8f6d5181676711d15bda6aa11fd8d84d9fff0f6e98280c72d5296aefad,PodSandboxId:710e2b00db1780a3cb652fad6898ecff25d5f37f052ba6e0438aa39b3ff2ada9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727397395791349240,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3f83edb960a7290e67f3d1729807ccd,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c88792788fc238aaae860e14a6c44c40020da3356d29223917fe2fb2e8901ac,PodSandboxId:74609d9fcf5f5f8d3b57d4290bf525ef816e716d1438ea25df07d7a697e2bb1a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727397392427437868,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10057dece9752ed428ddf4bfd465bb3d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:536c1c26f6d72525b81ce4c35ed530528a8cd001f4c530cea2e1d722325e76b3,PodSandboxId:de8c10edafaa7ba5a57a5150b492fa19b6a95a38b8f3da7e2385b723a1d4f907,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727397392442661616,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a32cc8b63ea212ed38709daf6762cc1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa717868fa66e6c86747ecfb1ac580a98666975a9c6974d3a1037451ff37576e,PodSandboxId:4a215208b0ed2928db08b226477bc8cf664180903da62b51aaf986d8c212336c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727397392387673966,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71a28d11a5db44bbf2777b262efa1514,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dcaba50a39a2f812258d986d3444002c5a887ee474104a98a69129c21ec40db,PodSandboxId:8e73f2182b892b451dcd1c013adf2711f2f406765703f34eb3d44a64d29e882b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727397392278746359,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-631834,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afee14d1206143c4d719c111467c379b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c2abcd7e-cb2b-48e0-9e30-10110c5a2dc4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	74dc20e31bc6d       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   ebc71356fe886       busybox-7dff88458-hczmj
	f0d4e929a59ca       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   2cb3143c36c8e       coredns-7c65d6cfc9-479dv
	3c06ebd9099a7       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   8f236d02ca028       coredns-7c65d6cfc9-kg8kf
	a9f2637b4124e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   399bb953593cc       storage-provisioner
	805b55d391308       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   7e2d35a1098a1       kindnet-l6ncl
	182f24ac501b7       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   c0f5b32248925       kube-proxy-7n244
	555c7e8f6d518       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   710e2b00db178       kube-vip-ha-631834
	536c1c26f6d72       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   de8c10edafaa7       etcd-ha-631834
	5c88792788fc2       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   74609d9fcf5f5       kube-scheduler-ha-631834
	aa717868fa66e       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   4a215208b0ed2       kube-controller-manager-ha-631834
	5dcaba50a39a2       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   8e73f2182b892       kube-apiserver-ha-631834
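
The container listing above is CRI-O's view of the primary control-plane node at capture time. To re-inspect a live profile in the same way, something like the following should work; the profile name ha-631834 comes from this report, while the availability of crictl on the guest PATH is an assumption about the minikube ISO.

    # open a shell on the primary machine of the ha-631834 profile (profile name taken from this report)
    minikube ssh -p ha-631834
    # inside the guest: list all CRI-O managed containers, running and exited
    sudo crictl ps -a
    # inspect one container by the short ID shown in the table above
    sudo crictl inspect 74dc20e31bc6d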
	
	
	==> coredns [3c06ebd9099a79e7ccf81acb3dcdfa061f142b4657de196fa50e568e5b299930] <==
	[INFO] 10.244.1.2:33318 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000158302s
	[INFO] 10.244.1.2:38992 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000210731s
	[INFO] 10.244.1.2:33288 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000154244s
	[INFO] 10.244.2.2:52842 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000181224s
	[INFO] 10.244.2.2:39802 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001542919s
	[INFO] 10.244.2.2:47825 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000115718s
	[INFO] 10.244.2.2:38071 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000153076s
	[INFO] 10.244.0.4:46433 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001871874s
	[INFO] 10.244.0.4:34697 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000054557s
	[INFO] 10.244.1.2:54898 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00014886s
	[INFO] 10.244.2.2:34064 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000136896s
	[INFO] 10.244.0.4:38416 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000149012s
	[INFO] 10.244.0.4:40833 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00014405s
	[INFO] 10.244.0.4:44560 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000077158s
	[INFO] 10.244.0.4:46143 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000171018s
	[INFO] 10.244.1.2:56595 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000249758s
	[INFO] 10.244.1.2:34731 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000198874s
	[INFO] 10.244.1.2:47614 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000132758s
	[INFO] 10.244.1.2:36248 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00015406s
	[INFO] 10.244.2.2:34744 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136863s
	[INFO] 10.244.2.2:34972 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000094616s
	[INFO] 10.244.2.2:52746 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000078955s
	[INFO] 10.244.0.4:39419 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113274s
	[INFO] 10.244.0.4:59554 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000106105s
	[INFO] 10.244.0.4:39476 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000054775s
	
	
	==> coredns [f0d4e929a59caa5d6cdfb939587ec81dce00105e7b9350778204b299cf597427] <==
	[INFO] 10.244.0.4:52853 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001421962s
	[INFO] 10.244.0.4:51515 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000078302s
	[INFO] 10.244.1.2:35739 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003265682s
	[INFO] 10.244.1.2:48683 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000243904s
	[INFO] 10.244.1.2:60448 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000155544s
	[INFO] 10.244.1.2:49238 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002742907s
	[INFO] 10.244.1.2:42211 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125195s
	[INFO] 10.244.2.2:33655 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000213093s
	[INFO] 10.244.2.2:58995 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00171984s
	[INFO] 10.244.2.2:39964 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000149879s
	[INFO] 10.244.2.2:60456 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000227691s
	[INFO] 10.244.0.4:44954 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000086981s
	[INFO] 10.244.0.4:47547 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000166142s
	[INFO] 10.244.0.4:51196 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000214916s
	[INFO] 10.244.0.4:52871 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001284904s
	[INFO] 10.244.0.4:55577 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000216348s
	[INFO] 10.244.0.4:39280 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00003939s
	[INFO] 10.244.1.2:55855 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133643s
	[INFO] 10.244.1.2:60581 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000156682s
	[INFO] 10.244.1.2:47815 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000931s
	[INFO] 10.244.2.2:51419 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000149958s
	[INFO] 10.244.2.2:54004 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000114296s
	[INFO] 10.244.2.2:50685 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000087762s
	[INFO] 10.244.2.2:42257 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000189679s
	[INFO] 10.244.0.4:51433 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00015471s
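
The CoreDNS query logs above record in-cluster lookups for names such as kubernetes.default.svc.cluster.local and host.minikube.internal. A lookup of that kind can be replayed from the busybox test pod listed earlier; --context ha-631834 assumes the usual minikube convention that the kubeconfig context matches the profile name.

    # resolve the in-cluster API service name via CoreDNS (context name assumed to equal the profile name)
    kubectl --context ha-631834 exec busybox-7dff88458-hczmj -- nslookup kubernetes.default.svc.cluster.local
    # resolve the host.minikube.internal alias that minikube adds to CoreDNS
    kubectl --context ha-631834 exec busybox-7dff88458-hczmj -- nslookup host.minikube.internal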
	
	
	==> describe nodes <==
	Name:               ha-631834
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-631834
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=ha-631834
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_27T00_36_39_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 00:36:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-631834
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 00:43:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 00:39:43 +0000   Fri, 27 Sep 2024 00:36:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 00:39:43 +0000   Fri, 27 Sep 2024 00:36:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 00:39:43 +0000   Fri, 27 Sep 2024 00:36:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 00:39:43 +0000   Fri, 27 Sep 2024 00:36:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.4
	  Hostname:    ha-631834
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c835097a3f3f47119274822a90643a61
	  System UUID:                c835097a-3f3f-4711-9274-822a90643a61
	  Boot ID:                    773a1f71-cccf-4b35-8274-d80167988c3a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-hczmj              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 coredns-7c65d6cfc9-479dv             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m25s
	  kube-system                 coredns-7c65d6cfc9-kg8kf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m25s
	  kube-system                 etcd-ha-631834                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m30s
	  kube-system                 kindnet-l6ncl                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m25s
	  kube-system                 kube-apiserver-ha-631834             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m30s
	  kube-system                 kube-controller-manager-ha-631834    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 kube-proxy-7n244                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m25s
	  kube-system                 kube-scheduler-ha-631834             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m30s
	  kube-system                 kube-vip-ha-631834                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m32s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m23s  kube-proxy       
	  Normal  Starting                 6m30s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m30s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m30s  kubelet          Node ha-631834 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m30s  kubelet          Node ha-631834 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m30s  kubelet          Node ha-631834 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m26s  node-controller  Node ha-631834 event: Registered Node ha-631834 in Controller
	  Normal  NodeReady                6m13s  kubelet          Node ha-631834 status is now: NodeReady
	  Normal  RegisteredNode           5m25s  node-controller  Node ha-631834 event: Registered Node ha-631834 in Controller
	  Normal  RegisteredNode           4m12s  node-controller  Node ha-631834 event: Registered Node ha-631834 in Controller
	
	
	Name:               ha-631834-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-631834-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=ha-631834
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_27T00_37_38_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 00:37:35 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-631834-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 00:40:28 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 27 Sep 2024 00:39:37 +0000   Fri, 27 Sep 2024 00:41:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 27 Sep 2024 00:39:37 +0000   Fri, 27 Sep 2024 00:41:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 27 Sep 2024 00:39:37 +0000   Fri, 27 Sep 2024 00:41:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 27 Sep 2024 00:39:37 +0000   Fri, 27 Sep 2024 00:41:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.184
	  Hostname:    ha-631834-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 949992430050476bb475912d3f8b70cc
	  System UUID:                94999243-0050-476b-b475-912d3f8b70cc
	  Boot ID:                    53eb24e2-e661-44e8-b798-be320838fb5c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-bkws6                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 etcd-ha-631834-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m31s
	  kube-system                 kindnet-x7kr9                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m33s
	  kube-system                 kube-apiserver-ha-631834-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m31s
	  kube-system                 kube-controller-manager-ha-631834-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m28s
	  kube-system                 kube-proxy-x2hvh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m33s
	  kube-system                 kube-scheduler-ha-631834-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m31s
	  kube-system                 kube-vip-ha-631834-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m28s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m33s (x8 over 5m33s)  kubelet          Node ha-631834-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m33s (x8 over 5m33s)  kubelet          Node ha-631834-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m33s (x7 over 5m33s)  kubelet          Node ha-631834-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m33s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m31s                  node-controller  Node ha-631834-m02 event: Registered Node ha-631834-m02 in Controller
	  Normal  RegisteredNode           5m25s                  node-controller  Node ha-631834-m02 event: Registered Node ha-631834-m02 in Controller
	  Normal  RegisteredNode           4m12s                  node-controller  Node ha-631834-m02 event: Registered Node ha-631834-m02 in Controller
	  Normal  NodeNotReady             117s                   node-controller  Node ha-631834-m02 status is now: NodeNotReady
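
The describe output above shows ha-631834-m02 with Unknown conditions and node.kubernetes.io/unreachable taints, which is the expected state while the secondary control plane is stopped. The same state can be queried directly; the context name is again an assumption based on the minikube profile name.

    # summarise readiness of every node in the HA cluster (context name assumed)
    kubectl --context ha-631834 get nodes -o wide
    # show only the taints applied to the stopped secondary control plane
    kubectl --context ha-631834 get node ha-631834-m02 -o jsonpath='{.spec.taints}'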
	
	
	Name:               ha-631834-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-631834-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=ha-631834
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_27T00_38_51_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 00:38:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-631834-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 00:43:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 00:39:49 +0000   Fri, 27 Sep 2024 00:38:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 00:39:49 +0000   Fri, 27 Sep 2024 00:38:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 00:39:49 +0000   Fri, 27 Sep 2024 00:38:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 00:39:49 +0000   Fri, 27 Sep 2024 00:39:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.92
	  Hostname:    ha-631834-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a890346e739943359cb952ef92382de4
	  System UUID:                a890346e-7399-4335-9cb9-52ef92382de4
	  Boot ID:                    8ca25526-4cfd-4aaa-ab8a-4e67ba42c0bc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-dhthf                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 etcd-ha-631834-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m19s
	  kube-system                 kindnet-r2qxd                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m21s
	  kube-system                 kube-apiserver-ha-631834-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 kube-controller-manager-ha-631834-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 kube-proxy-22lcj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 kube-scheduler-ha-631834-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 kube-vip-ha-631834-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m16s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m21s (x8 over 4m21s)  kubelet          Node ha-631834-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m21s (x8 over 4m21s)  kubelet          Node ha-631834-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m21s (x7 over 4m21s)  kubelet          Node ha-631834-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m20s                  node-controller  Node ha-631834-m03 event: Registered Node ha-631834-m03 in Controller
	  Normal  RegisteredNode           4m16s                  node-controller  Node ha-631834-m03 event: Registered Node ha-631834-m03 in Controller
	  Normal  RegisteredNode           4m12s                  node-controller  Node ha-631834-m03 event: Registered Node ha-631834-m03 in Controller
	
	
	Name:               ha-631834-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-631834-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=ha-631834
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_27T00_39_55_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 00:39:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-631834-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 00:42:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 00:40:25 +0000   Fri, 27 Sep 2024 00:39:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 00:40:25 +0000   Fri, 27 Sep 2024 00:39:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 00:40:25 +0000   Fri, 27 Sep 2024 00:39:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 00:40:25 +0000   Fri, 27 Sep 2024 00:40:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.79
	  Hostname:    ha-631834-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7d5a4987d2674227bf93c72f5a77697a
	  System UUID:                7d5a4987-d267-4227-bf93-c72f5a77697a
	  Boot ID:                    8a8b1cc4-fbfe-41cb-b018-a0e1cc80311a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-667b4       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m13s
	  kube-system                 kube-proxy-klfbb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m8s                   kube-proxy       
	  Normal  NodeAllocatableEnforced  3m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m13s (x2 over 3m14s)  kubelet          Node ha-631834-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m13s (x2 over 3m14s)  kubelet          Node ha-631834-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m13s (x2 over 3m14s)  kubelet          Node ha-631834-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m12s                  node-controller  Node ha-631834-m04 event: Registered Node ha-631834-m04 in Controller
	  Normal  RegisteredNode           3m11s                  node-controller  Node ha-631834-m04 event: Registered Node ha-631834-m04 in Controller
	  Normal  RegisteredNode           3m10s                  node-controller  Node ha-631834-m04 event: Registered Node ha-631834-m04 in Controller
	  Normal  NodeReady                2m53s                  kubelet          Node ha-631834-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep27 00:36] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050412] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039986] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.794291] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.536823] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.593813] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.987708] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.063056] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056033] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.197880] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.118226] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.294623] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +3.981056] systemd-fstab-generator[745]: Ignoring "noauto" option for root device
	[  +4.053805] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +0.059938] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.871905] systemd-fstab-generator[1301]: Ignoring "noauto" option for root device
	[  +0.091402] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.727187] kauditd_printk_skb: 18 callbacks suppressed
	[ +12.324064] kauditd_printk_skb: 41 callbacks suppressed
	[Sep27 00:37] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [536c1c26f6d72525b81ce4c35ed530528a8cd001f4c530cea2e1d722325e76b3] <==
	{"level":"warn","ts":"2024-09-27T00:43:08.228794Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:43:08.236712Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:43:08.312679Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.184:2380/version","remote-member-id":"bff0a92d56623d2","error":"Get \"https://192.168.39.184:2380/version\": dial tcp 192.168.39.184:2380: i/o timeout"}
	{"level":"warn","ts":"2024-09-27T00:43:08.312770Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"bff0a92d56623d2","error":"Get \"https://192.168.39.184:2380/version\": dial tcp 192.168.39.184:2380: i/o timeout"}
	{"level":"warn","ts":"2024-09-27T00:43:08.328291Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:43:08.399751Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:43:08.408109Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:43:08.412019Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:43:08.422987Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:43:08.429006Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:43:08.430951Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:43:08.437324Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:43:08.440660Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:43:08.443716Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:43:08.449100Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:43:08.478270Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:43:08.492647Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:43:08.500161Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:43:08.503635Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:43:08.506320Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:43:08.518392Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:43:08.525090Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:43:08.528443Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:43:08.531679Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T00:43:08.570461Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 00:43:08 up 7 min,  0 users,  load average: 0.08, 0.23, 0.14
	Linux ha-631834 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [805b55d391308302ebc0884d741fd7ca86ffe2f6feed8bf7ab229f3729f34327] <==
	I0927 00:42:35.601795       1 main.go:322] Node ha-631834-m04 has CIDR [10.244.3.0/24] 
	I0927 00:42:45.594144       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0927 00:42:45.594344       1 main.go:299] handling current node
	I0927 00:42:45.594373       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0927 00:42:45.594393       1 main.go:322] Node ha-631834-m02 has CIDR [10.244.1.0/24] 
	I0927 00:42:45.594565       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0927 00:42:45.594590       1 main.go:322] Node ha-631834-m03 has CIDR [10.244.2.0/24] 
	I0927 00:42:45.594654       1 main.go:295] Handling node with IPs: map[192.168.39.79:{}]
	I0927 00:42:45.594673       1 main.go:322] Node ha-631834-m04 has CIDR [10.244.3.0/24] 
	I0927 00:42:55.603184       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0927 00:42:55.603559       1 main.go:322] Node ha-631834-m02 has CIDR [10.244.1.0/24] 
	I0927 00:42:55.603878       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0927 00:42:55.604117       1 main.go:322] Node ha-631834-m03 has CIDR [10.244.2.0/24] 
	I0927 00:42:55.604402       1 main.go:295] Handling node with IPs: map[192.168.39.79:{}]
	I0927 00:42:55.605203       1 main.go:322] Node ha-631834-m04 has CIDR [10.244.3.0/24] 
	I0927 00:42:55.605426       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0927 00:42:55.605486       1 main.go:299] handling current node
	I0927 00:43:05.602126       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0927 00:43:05.602341       1 main.go:299] handling current node
	I0927 00:43:05.602397       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0927 00:43:05.602427       1 main.go:322] Node ha-631834-m02 has CIDR [10.244.1.0/24] 
	I0927 00:43:05.602660       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0927 00:43:05.602776       1 main.go:322] Node ha-631834-m03 has CIDR [10.244.2.0/24] 
	I0927 00:43:05.602992       1 main.go:295] Handling node with IPs: map[192.168.39.79:{}]
	I0927 00:43:05.603043       1 main.go:322] Node ha-631834-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [5dcaba50a39a2f812258d986d3444002c5a887ee474104a98a69129c21ec40db] <==
	W0927 00:36:37.440538       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.4]
	I0927 00:36:37.441493       1 controller.go:615] quota admission added evaluator for: endpoints
	I0927 00:36:37.445496       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0927 00:36:37.662456       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0927 00:36:38.560626       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0927 00:36:38.578403       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0927 00:36:38.587470       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0927 00:36:43.266579       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0927 00:36:43.419243       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0927 00:39:23.576104       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42282: use of closed network connection
	E0927 00:39:23.771378       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42288: use of closed network connection
	E0927 00:39:23.958682       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42312: use of closed network connection
	E0927 00:39:24.143404       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42328: use of closed network connection
	E0927 00:39:24.321615       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42334: use of closed network connection
	E0927 00:39:24.507069       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42338: use of closed network connection
	E0927 00:39:24.675789       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42344: use of closed network connection
	E0927 00:39:24.862695       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42368: use of closed network connection
	E0927 00:39:25.041111       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42388: use of closed network connection
	E0927 00:39:25.329470       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42408: use of closed network connection
	E0927 00:39:25.500386       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42428: use of closed network connection
	E0927 00:39:25.675043       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42456: use of closed network connection
	E0927 00:39:25.857940       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42472: use of closed network connection
	E0927 00:39:26.048116       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42494: use of closed network connection
	E0927 00:39:26.224537       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42512: use of closed network connection
	W0927 00:40:47.323187       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.4 192.168.39.92]
	
	
	==> kube-controller-manager [aa717868fa66e6c86747ecfb1ac580a98666975a9c6974d3a1037451ff37576e] <==
	I0927 00:39:55.139474       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-631834-m04" podCIDRs=["10.244.3.0/24"]
	I0927 00:39:55.139580       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m04"
	I0927 00:39:55.139638       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m04"
	I0927 00:39:55.151590       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m04"
	I0927 00:39:55.487083       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m04"
	I0927 00:39:55.877769       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m04"
	I0927 00:39:56.804153       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m04"
	I0927 00:39:57.666169       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-631834-m04"
	I0927 00:39:57.666534       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m04"
	I0927 00:39:57.746088       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m04"
	I0927 00:39:58.632655       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m04"
	I0927 00:39:58.726762       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m04"
	I0927 00:40:05.284426       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m04"
	I0927 00:40:15.865636       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-631834-m04"
	I0927 00:40:15.865833       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m04"
	I0927 00:40:15.879964       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m04"
	I0927 00:40:16.781479       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m04"
	I0927 00:40:25.730749       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m04"
	I0927 00:41:11.808076       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-631834-m04"
	I0927 00:41:11.809299       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m02"
	I0927 00:41:11.832517       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m02"
	I0927 00:41:11.890510       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="14.873766ms"
	I0927 00:41:11.890734       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="65.505µs"
	I0927 00:41:12.743419       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m02"
	I0927 00:41:17.028342       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m02"
	
	
	==> kube-proxy [182f24ac501b715adc06f080914c11407429e052bc7a726892761dd0a2d3a8e9] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0927 00:36:44.513192       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0927 00:36:44.529245       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.4"]
	E0927 00:36:44.529395       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0927 00:36:44.637324       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0927 00:36:44.637425       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0927 00:36:44.637464       1 server_linux.go:169] "Using iptables Proxier"
	I0927 00:36:44.640935       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0927 00:36:44.641713       1 server.go:483] "Version info" version="v1.31.1"
	I0927 00:36:44.641798       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 00:36:44.643999       1 config.go:199] "Starting service config controller"
	I0927 00:36:44.644892       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0927 00:36:44.645302       1 config.go:105] "Starting endpoint slice config controller"
	I0927 00:36:44.645338       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0927 00:36:44.648337       1 config.go:328] "Starting node config controller"
	I0927 00:36:44.650849       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0927 00:36:44.748412       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0927 00:36:44.748475       1 shared_informer.go:320] Caches are synced for service config
	I0927 00:36:44.752495       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5c88792788fc238aaae860e14a6c44c40020da3356d29223917fe2fb2e8901ac] <==
	W0927 00:36:35.715895       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0927 00:36:35.716591       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:36:35.715936       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0927 00:36:35.718435       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:36:35.715973       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0927 00:36:35.718562       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:36:35.719580       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0927 00:36:35.719853       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 00:36:36.589565       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0927 00:36:36.589679       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 00:36:36.648438       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0927 00:36:36.648499       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0927 00:36:36.655529       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0927 00:36:36.655821       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:36:36.677521       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0927 00:36:36.677870       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0927 00:36:36.687963       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0927 00:36:36.688163       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 00:36:36.985650       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0927 00:36:36.985711       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0927 00:36:38.790470       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0927 00:39:55.242771       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-7gjcd\": pod kindnet-7gjcd is already assigned to node \"ha-631834-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-7gjcd" node="ha-631834-m04"
	E0927 00:39:55.242960       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 583b6ea7-5b96-43a8-9f06-70c031554c0e(kube-system/kindnet-7gjcd) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-7gjcd"
	E0927 00:39:55.243000       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-7gjcd\": pod kindnet-7gjcd is already assigned to node \"ha-631834-m04\"" pod="kube-system/kindnet-7gjcd"
	I0927 00:39:55.243040       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-7gjcd" node="ha-631834-m04"
	
	
	==> kubelet <==
	Sep 27 00:41:38 ha-631834 kubelet[1309]: E0927 00:41:38.620020    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397698619762113,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:41:38 ha-631834 kubelet[1309]: E0927 00:41:38.620049    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397698619762113,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:41:48 ha-631834 kubelet[1309]: E0927 00:41:48.622830    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397708621937313,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:41:48 ha-631834 kubelet[1309]: E0927 00:41:48.622875    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397708621937313,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:41:58 ha-631834 kubelet[1309]: E0927 00:41:58.624102    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397718623839780,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:41:58 ha-631834 kubelet[1309]: E0927 00:41:58.624145    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397718623839780,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:42:08 ha-631834 kubelet[1309]: E0927 00:42:08.626464    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397728626075698,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:42:08 ha-631834 kubelet[1309]: E0927 00:42:08.626520    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397728626075698,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:42:18 ha-631834 kubelet[1309]: E0927 00:42:18.630268    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397738629150202,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:42:18 ha-631834 kubelet[1309]: E0927 00:42:18.630612    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397738629150202,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:42:28 ha-631834 kubelet[1309]: E0927 00:42:28.632510    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397748632150911,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:42:28 ha-631834 kubelet[1309]: E0927 00:42:28.632817    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397748632150911,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:42:38 ha-631834 kubelet[1309]: E0927 00:42:38.503597    1309 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 27 00:42:38 ha-631834 kubelet[1309]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 27 00:42:38 ha-631834 kubelet[1309]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 27 00:42:38 ha-631834 kubelet[1309]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 27 00:42:38 ha-631834 kubelet[1309]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 27 00:42:38 ha-631834 kubelet[1309]: E0927 00:42:38.634672    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397758634392335,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:42:38 ha-631834 kubelet[1309]: E0927 00:42:38.634711    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397758634392335,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:42:48 ha-631834 kubelet[1309]: E0927 00:42:48.636173    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397768635813162,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:42:48 ha-631834 kubelet[1309]: E0927 00:42:48.636541    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397768635813162,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:42:58 ha-631834 kubelet[1309]: E0927 00:42:58.638644    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397778638333338,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:42:58 ha-631834 kubelet[1309]: E0927 00:42:58.638684    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397778638333338,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:43:08 ha-631834 kubelet[1309]: E0927 00:43:08.640756    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397788640496715,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:43:08 ha-631834 kubelet[1309]: E0927 00:43:08.640968    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397788640496715,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-631834 -n ha-631834
helpers_test.go:261: (dbg) Run:  kubectl --context ha-631834 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (6.43s)

x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (359.58s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-631834 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-631834 -v=7 --alsologtostderr
E0927 00:45:10.486519   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/functional-774677/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-631834 -v=7 --alsologtostderr: exit status 82 (2m1.859216492s)

-- stdout --
	* Stopping node "ha-631834-m04"  ...
	* Stopping node "ha-631834-m03"  ...
	
	

-- /stdout --
** stderr ** 
	I0927 00:43:13.759942   39196 out.go:345] Setting OutFile to fd 1 ...
	I0927 00:43:13.760198   39196 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:43:13.760207   39196 out.go:358] Setting ErrFile to fd 2...
	I0927 00:43:13.760211   39196 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:43:13.760381   39196 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-14935/.minikube/bin
	I0927 00:43:13.760587   39196 out.go:352] Setting JSON to false
	I0927 00:43:13.760671   39196 mustload.go:65] Loading cluster: ha-631834
	I0927 00:43:13.761048   39196 config.go:182] Loaded profile config "ha-631834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 00:43:13.761167   39196 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/config.json ...
	I0927 00:43:13.761342   39196 mustload.go:65] Loading cluster: ha-631834
	I0927 00:43:13.761471   39196 config.go:182] Loaded profile config "ha-631834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 00:43:13.761494   39196 stop.go:39] StopHost: ha-631834-m04
	I0927 00:43:13.761879   39196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:43:13.761921   39196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:43:13.776394   39196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41215
	I0927 00:43:13.776906   39196 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:43:13.777485   39196 main.go:141] libmachine: Using API Version  1
	I0927 00:43:13.777510   39196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:43:13.777862   39196 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:43:13.780905   39196 out.go:177] * Stopping node "ha-631834-m04"  ...
	I0927 00:43:13.782667   39196 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0927 00:43:13.782704   39196 main.go:141] libmachine: (ha-631834-m04) Calling .DriverName
	I0927 00:43:13.782920   39196 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0927 00:43:13.782944   39196 main.go:141] libmachine: (ha-631834-m04) Calling .GetSSHHostname
	I0927 00:43:13.785753   39196 main.go:141] libmachine: (ha-631834-m04) DBG | domain ha-631834-m04 has defined MAC address 52:54:00:ec:35:28 in network mk-ha-631834
	I0927 00:43:13.786191   39196 main.go:141] libmachine: (ha-631834-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:35:28", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:39:41 +0000 UTC Type:0 Mac:52:54:00:ec:35:28 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-631834-m04 Clientid:01:52:54:00:ec:35:28}
	I0927 00:43:13.786221   39196 main.go:141] libmachine: (ha-631834-m04) DBG | domain ha-631834-m04 has defined IP address 192.168.39.79 and MAC address 52:54:00:ec:35:28 in network mk-ha-631834
	I0927 00:43:13.786326   39196 main.go:141] libmachine: (ha-631834-m04) Calling .GetSSHPort
	I0927 00:43:13.786483   39196 main.go:141] libmachine: (ha-631834-m04) Calling .GetSSHKeyPath
	I0927 00:43:13.786621   39196 main.go:141] libmachine: (ha-631834-m04) Calling .GetSSHUsername
	I0927 00:43:13.786748   39196 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m04/id_rsa Username:docker}
	I0927 00:43:13.872281   39196 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0927 00:43:13.925874   39196 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0927 00:43:13.979726   39196 main.go:141] libmachine: Stopping "ha-631834-m04"...
	I0927 00:43:13.979757   39196 main.go:141] libmachine: (ha-631834-m04) Calling .GetState
	I0927 00:43:13.981228   39196 main.go:141] libmachine: (ha-631834-m04) Calling .Stop
	I0927 00:43:13.984875   39196 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 0/120
	I0927 00:43:15.159661   39196 main.go:141] libmachine: (ha-631834-m04) Calling .GetState
	I0927 00:43:15.161062   39196 main.go:141] libmachine: Machine "ha-631834-m04" was stopped.
	I0927 00:43:15.161090   39196 stop.go:75] duration metric: took 1.378426235s to stop
	I0927 00:43:15.161107   39196 stop.go:39] StopHost: ha-631834-m03
	I0927 00:43:15.161396   39196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:43:15.161441   39196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:43:15.175650   39196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34007
	I0927 00:43:15.176110   39196 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:43:15.176614   39196 main.go:141] libmachine: Using API Version  1
	I0927 00:43:15.176641   39196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:43:15.176953   39196 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:43:15.179262   39196 out.go:177] * Stopping node "ha-631834-m03"  ...
	I0927 00:43:15.180479   39196 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0927 00:43:15.180502   39196 main.go:141] libmachine: (ha-631834-m03) Calling .DriverName
	I0927 00:43:15.180711   39196 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0927 00:43:15.180733   39196 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHHostname
	I0927 00:43:15.183747   39196 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:43:15.184318   39196 main.go:141] libmachine: (ha-631834-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:25:39", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:38:15 +0000 UTC Type:0 Mac:52:54:00:4c:25:39 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-631834-m03 Clientid:01:52:54:00:4c:25:39}
	I0927 00:43:15.184359   39196 main.go:141] libmachine: (ha-631834-m03) DBG | domain ha-631834-m03 has defined IP address 192.168.39.92 and MAC address 52:54:00:4c:25:39 in network mk-ha-631834
	I0927 00:43:15.184490   39196 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHPort
	I0927 00:43:15.184680   39196 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHKeyPath
	I0927 00:43:15.184821   39196 main.go:141] libmachine: (ha-631834-m03) Calling .GetSSHUsername
	I0927 00:43:15.184952   39196 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m03/id_rsa Username:docker}
	I0927 00:43:15.276028   39196 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0927 00:43:15.331205   39196 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0927 00:43:15.386536   39196 main.go:141] libmachine: Stopping "ha-631834-m03"...
	I0927 00:43:15.386559   39196 main.go:141] libmachine: (ha-631834-m03) Calling .GetState
	I0927 00:43:15.388094   39196 main.go:141] libmachine: (ha-631834-m03) Calling .Stop
	I0927 00:43:15.391883   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 0/120
	I0927 00:43:16.393167   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 1/120
	I0927 00:43:17.394373   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 2/120
	I0927 00:43:18.395622   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 3/120
	I0927 00:43:19.396801   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 4/120
	I0927 00:43:20.398045   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 5/120
	I0927 00:43:21.399657   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 6/120
	I0927 00:43:22.400883   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 7/120
	I0927 00:43:23.402241   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 8/120
	I0927 00:43:24.403737   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 9/120
	I0927 00:43:25.405458   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 10/120
	I0927 00:43:26.406863   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 11/120
	I0927 00:43:27.408387   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 12/120
	I0927 00:43:28.409996   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 13/120
	I0927 00:43:29.411583   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 14/120
	I0927 00:43:30.413702   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 15/120
	I0927 00:43:31.415402   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 16/120
	I0927 00:43:32.416827   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 17/120
	I0927 00:43:33.418233   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 18/120
	I0927 00:43:34.419826   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 19/120
	I0927 00:43:35.421858   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 20/120
	I0927 00:43:36.423422   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 21/120
	I0927 00:43:37.425849   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 22/120
	I0927 00:43:38.427596   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 23/120
	I0927 00:43:39.430029   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 24/120
	I0927 00:43:40.432084   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 25/120
	I0927 00:43:41.433543   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 26/120
	I0927 00:43:42.435164   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 27/120
	I0927 00:43:43.436752   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 28/120
	I0927 00:43:44.438325   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 29/120
	I0927 00:43:45.440442   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 30/120
	I0927 00:43:46.442286   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 31/120
	I0927 00:43:47.443699   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 32/120
	I0927 00:43:48.445140   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 33/120
	I0927 00:43:49.446471   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 34/120
	I0927 00:43:50.448230   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 35/120
	I0927 00:43:51.449814   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 36/120
	I0927 00:43:52.451248   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 37/120
	I0927 00:43:53.452734   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 38/120
	I0927 00:43:54.454024   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 39/120
	I0927 00:43:55.455777   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 40/120
	I0927 00:43:56.457090   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 41/120
	I0927 00:43:57.458749   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 42/120
	I0927 00:43:58.460027   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 43/120
	I0927 00:43:59.461841   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 44/120
	I0927 00:44:00.463497   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 45/120
	I0927 00:44:01.465954   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 46/120
	I0927 00:44:02.467122   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 47/120
	I0927 00:44:03.468676   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 48/120
	I0927 00:44:04.469843   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 49/120
	I0927 00:44:05.471386   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 50/120
	I0927 00:44:06.472600   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 51/120
	I0927 00:44:07.473896   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 52/120
	I0927 00:44:08.475397   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 53/120
	I0927 00:44:09.476691   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 54/120
	I0927 00:44:10.478459   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 55/120
	I0927 00:44:11.479628   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 56/120
	I0927 00:44:12.481658   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 57/120
	I0927 00:44:13.482871   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 58/120
	I0927 00:44:14.484148   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 59/120
	I0927 00:44:15.485798   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 60/120
	I0927 00:44:16.486962   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 61/120
	I0927 00:44:17.488179   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 62/120
	I0927 00:44:18.489491   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 63/120
	I0927 00:44:19.490707   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 64/120
	I0927 00:44:20.492376   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 65/120
	I0927 00:44:21.493725   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 66/120
	I0927 00:44:22.495157   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 67/120
	I0927 00:44:23.496610   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 68/120
	I0927 00:44:24.497766   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 69/120
	I0927 00:44:25.499225   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 70/120
	I0927 00:44:26.500459   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 71/120
	I0927 00:44:27.501751   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 72/120
	I0927 00:44:28.502915   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 73/120
	I0927 00:44:29.504179   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 74/120
	I0927 00:44:30.505773   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 75/120
	I0927 00:44:31.507014   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 76/120
	I0927 00:44:32.508325   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 77/120
	I0927 00:44:33.509858   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 78/120
	I0927 00:44:34.511054   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 79/120
	I0927 00:44:35.512703   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 80/120
	I0927 00:44:36.513972   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 81/120
	I0927 00:44:37.515183   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 82/120
	I0927 00:44:38.516555   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 83/120
	I0927 00:44:39.517863   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 84/120
	I0927 00:44:40.519589   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 85/120
	I0927 00:44:41.520887   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 86/120
	I0927 00:44:42.522080   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 87/120
	I0927 00:44:43.523444   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 88/120
	I0927 00:44:44.524691   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 89/120
	I0927 00:44:45.526329   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 90/120
	I0927 00:44:46.527680   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 91/120
	I0927 00:44:47.529132   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 92/120
	I0927 00:44:48.530602   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 93/120
	I0927 00:44:49.531927   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 94/120
	I0927 00:44:50.533190   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 95/120
	I0927 00:44:51.534451   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 96/120
	I0927 00:44:52.535799   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 97/120
	I0927 00:44:53.537999   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 98/120
	I0927 00:44:54.539157   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 99/120
	I0927 00:44:55.541023   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 100/120
	I0927 00:44:56.542264   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 101/120
	I0927 00:44:57.543944   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 102/120
	I0927 00:44:58.545194   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 103/120
	I0927 00:44:59.546595   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 104/120
	I0927 00:45:00.548322   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 105/120
	I0927 00:45:01.549710   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 106/120
	I0927 00:45:02.550974   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 107/120
	I0927 00:45:03.552207   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 108/120
	I0927 00:45:04.554269   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 109/120
	I0927 00:45:05.555894   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 110/120
	I0927 00:45:06.557527   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 111/120
	I0927 00:45:07.559406   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 112/120
	I0927 00:45:08.560708   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 113/120
	I0927 00:45:09.562093   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 114/120
	I0927 00:45:10.564238   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 115/120
	I0927 00:45:11.565660   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 116/120
	I0927 00:45:12.567017   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 117/120
	I0927 00:45:13.568298   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 118/120
	I0927 00:45:14.569549   39196 main.go:141] libmachine: (ha-631834-m03) Waiting for machine to stop 119/120
	I0927 00:45:15.570076   39196 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0927 00:45:15.570137   39196 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0927 00:45:15.572046   39196 out.go:201] 
	W0927 00:45:15.573231   39196 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0927 00:45:15.573248   39196 out.go:270] * 
	* 
	W0927 00:45:15.575441   39196 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0927 00:45:15.576606   39196 out.go:201] 

** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-631834 -v=7 --alsologtostderr" : exit status 82
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-631834 --wait=true -v=7 --alsologtostderr
E0927 00:45:38.191367   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/functional-774677/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:48:01.244905   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-631834 --wait=true -v=7 --alsologtostderr: (3m55.098863914s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-631834
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-631834 -n ha-631834
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-631834 logs -n 25: (1.834484165s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-631834 cp ha-631834-m03:/home/docker/cp-test.txt                             | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m02:/home/docker/cp-test_ha-631834-m03_ha-631834-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n                                                                | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n ha-631834-m02 sudo cat                                         | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | /home/docker/cp-test_ha-631834-m03_ha-631834-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-631834 cp ha-631834-m03:/home/docker/cp-test.txt                             | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m04:/home/docker/cp-test_ha-631834-m03_ha-631834-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n                                                                | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n ha-631834-m04 sudo cat                                         | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | /home/docker/cp-test_ha-631834-m03_ha-631834-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-631834 cp testdata/cp-test.txt                                               | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n                                                                | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-631834 cp ha-631834-m04:/home/docker/cp-test.txt                             | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile381097914/001/cp-test_ha-631834-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n                                                                | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-631834 cp ha-631834-m04:/home/docker/cp-test.txt                             | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834:/home/docker/cp-test_ha-631834-m04_ha-631834.txt                      |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n                                                                | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n ha-631834 sudo cat                                             | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | /home/docker/cp-test_ha-631834-m04_ha-631834.txt                                |           |         |         |                     |                     |
	| cp      | ha-631834 cp ha-631834-m04:/home/docker/cp-test.txt                             | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m02:/home/docker/cp-test_ha-631834-m04_ha-631834-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n                                                                | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n ha-631834-m02 sudo cat                                         | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | /home/docker/cp-test_ha-631834-m04_ha-631834-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-631834 cp ha-631834-m04:/home/docker/cp-test.txt                             | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m03:/home/docker/cp-test_ha-631834-m04_ha-631834-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n                                                                | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n ha-631834-m03 sudo cat                                         | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | /home/docker/cp-test_ha-631834-m04_ha-631834-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-631834 node stop m02 -v=7                                                    | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-631834 node start m02 -v=7                                                   | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:43 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-631834 -v=7                                                          | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:43 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | -p ha-631834 -v=7                                                               | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:43 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-631834 --wait=true -v=7                                                   | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:45 UTC | 27 Sep 24 00:49 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-631834                                                               | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:49 UTC |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 00:45:15
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 00:45:15.620382   39669 out.go:345] Setting OutFile to fd 1 ...
	I0927 00:45:15.620522   39669 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:45:15.620532   39669 out.go:358] Setting ErrFile to fd 2...
	I0927 00:45:15.620539   39669 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:45:15.620751   39669 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-14935/.minikube/bin
	I0927 00:45:15.621302   39669 out.go:352] Setting JSON to false
	I0927 00:45:15.622234   39669 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5261,"bootTime":1727392655,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 00:45:15.622326   39669 start.go:139] virtualization: kvm guest
	I0927 00:45:15.624489   39669 out.go:177] * [ha-631834] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0927 00:45:15.625903   39669 out.go:177]   - MINIKUBE_LOCATION=19711
	I0927 00:45:15.625920   39669 notify.go:220] Checking for updates...
	I0927 00:45:15.628160   39669 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 00:45:15.629534   39669 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 00:45:15.630776   39669 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-14935/.minikube
	I0927 00:45:15.632011   39669 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0927 00:45:15.633204   39669 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 00:45:15.634877   39669 config.go:182] Loaded profile config "ha-631834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 00:45:15.634986   39669 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 00:45:15.635673   39669 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:45:15.635736   39669 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:45:15.651157   39669 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46713
	I0927 00:45:15.651683   39669 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:45:15.652236   39669 main.go:141] libmachine: Using API Version  1
	I0927 00:45:15.652268   39669 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:45:15.652622   39669 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:45:15.652799   39669 main.go:141] libmachine: (ha-631834) Calling .DriverName
	I0927 00:45:15.687905   39669 out.go:177] * Using the kvm2 driver based on existing profile
	I0927 00:45:15.689273   39669 start.go:297] selected driver: kvm2
	I0927 00:45:15.689290   39669 start.go:901] validating driver "kvm2" against &{Name:ha-631834 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-631834 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.184 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.92 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.79 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 00:45:15.689480   39669 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 00:45:15.689941   39669 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 00:45:15.690022   39669 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19711-14935/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0927 00:45:15.705881   39669 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0927 00:45:15.706672   39669 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 00:45:15.706712   39669 cni.go:84] Creating CNI manager for ""
	I0927 00:45:15.706778   39669 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0927 00:45:15.706845   39669 start.go:340] cluster config:
	{Name:ha-631834 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-631834 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.184 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.92 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.79 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 00:45:15.707008   39669 iso.go:125] acquiring lock: {Name:mkc202a14fbe20838e31e7efc444c4f65351f9ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 00:45:15.708878   39669 out.go:177] * Starting "ha-631834" primary control-plane node in "ha-631834" cluster
	I0927 00:45:15.710078   39669 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 00:45:15.710108   39669 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0927 00:45:15.710114   39669 cache.go:56] Caching tarball of preloaded images
	I0927 00:45:15.710189   39669 preload.go:172] Found /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0927 00:45:15.710200   39669 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0927 00:45:15.710312   39669 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/config.json ...
	I0927 00:45:15.710505   39669 start.go:360] acquireMachinesLock for ha-631834: {Name:mkef866a3f2eb57063758ef06140cca3a2e40b18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 00:45:15.710544   39669 start.go:364] duration metric: took 22.057µs to acquireMachinesLock for "ha-631834"
	I0927 00:45:15.710556   39669 start.go:96] Skipping create...Using existing machine configuration
	I0927 00:45:15.710563   39669 fix.go:54] fixHost starting: 
	I0927 00:45:15.710830   39669 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:45:15.710860   39669 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:45:15.725125   39669 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40481
	I0927 00:45:15.725617   39669 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:45:15.726108   39669 main.go:141] libmachine: Using API Version  1
	I0927 00:45:15.726129   39669 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:45:15.726478   39669 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:45:15.726649   39669 main.go:141] libmachine: (ha-631834) Calling .DriverName
	I0927 00:45:15.726797   39669 main.go:141] libmachine: (ha-631834) Calling .GetState
	I0927 00:45:15.728316   39669 fix.go:112] recreateIfNeeded on ha-631834: state=Running err=<nil>
	W0927 00:45:15.728333   39669 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 00:45:15.730201   39669 out.go:177] * Updating the running kvm2 "ha-631834" VM ...
	I0927 00:45:15.731366   39669 machine.go:93] provisionDockerMachine start ...
	I0927 00:45:15.731387   39669 main.go:141] libmachine: (ha-631834) Calling .DriverName
	I0927 00:45:15.731577   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:45:15.733917   39669 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:45:15.734339   39669 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:45:15.734365   39669 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:45:15.734493   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:45:15.734637   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:45:15.734779   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:45:15.734893   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:45:15.735030   39669 main.go:141] libmachine: Using SSH client type: native
	I0927 00:45:15.735252   39669 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0927 00:45:15.735265   39669 main.go:141] libmachine: About to run SSH command:
	hostname
	I0927 00:45:15.865827   39669 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-631834
	
	I0927 00:45:15.865873   39669 main.go:141] libmachine: (ha-631834) Calling .GetMachineName
	I0927 00:45:15.866109   39669 buildroot.go:166] provisioning hostname "ha-631834"
	I0927 00:45:15.866134   39669 main.go:141] libmachine: (ha-631834) Calling .GetMachineName
	I0927 00:45:15.866284   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:45:15.868858   39669 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:45:15.869257   39669 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:45:15.869289   39669 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:45:15.869365   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:45:15.869512   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:45:15.869658   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:45:15.869882   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:45:15.870031   39669 main.go:141] libmachine: Using SSH client type: native
	I0927 00:45:15.870196   39669 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0927 00:45:15.870206   39669 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-631834 && echo "ha-631834" | sudo tee /etc/hostname
	I0927 00:45:15.999438   39669 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-631834
	
	I0927 00:45:15.999464   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:45:16.002256   39669 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:45:16.002596   39669 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:45:16.002622   39669 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:45:16.002789   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:45:16.002976   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:45:16.003132   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:45:16.003264   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:45:16.003419   39669 main.go:141] libmachine: Using SSH client type: native
	I0927 00:45:16.003618   39669 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0927 00:45:16.003634   39669 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-631834' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-631834/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-631834' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 00:45:16.124602   39669 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 00:45:16.124633   39669 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19711-14935/.minikube CaCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19711-14935/.minikube}
	I0927 00:45:16.124674   39669 buildroot.go:174] setting up certificates
	I0927 00:45:16.124689   39669 provision.go:84] configureAuth start
	I0927 00:45:16.124703   39669 main.go:141] libmachine: (ha-631834) Calling .GetMachineName
	I0927 00:45:16.124958   39669 main.go:141] libmachine: (ha-631834) Calling .GetIP
	I0927 00:45:16.127674   39669 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:45:16.128053   39669 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:45:16.128071   39669 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:45:16.128251   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:45:16.130467   39669 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:45:16.130824   39669 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:45:16.130850   39669 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:45:16.130993   39669 provision.go:143] copyHostCerts
	I0927 00:45:16.131022   39669 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem
	I0927 00:45:16.131077   39669 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem, removing ...
	I0927 00:45:16.131089   39669 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem
	I0927 00:45:16.131177   39669 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem (1078 bytes)
	I0927 00:45:16.131282   39669 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem
	I0927 00:45:16.131321   39669 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem, removing ...
	I0927 00:45:16.131337   39669 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem
	I0927 00:45:16.131379   39669 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem (1123 bytes)
	I0927 00:45:16.131483   39669 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem
	I0927 00:45:16.131506   39669 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem, removing ...
	I0927 00:45:16.131511   39669 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem
	I0927 00:45:16.131546   39669 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem (1675 bytes)
	I0927 00:45:16.131611   39669 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem org=jenkins.ha-631834 san=[127.0.0.1 192.168.39.4 ha-631834 localhost minikube]
	I0927 00:45:16.246173   39669 provision.go:177] copyRemoteCerts
	I0927 00:45:16.246258   39669 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 00:45:16.246285   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:45:16.248804   39669 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:45:16.249141   39669 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:45:16.249168   39669 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:45:16.249338   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:45:16.249518   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:45:16.249717   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:45:16.249845   39669 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834/id_rsa Username:docker}
	I0927 00:45:16.338669   39669 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0927 00:45:16.338752   39669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0927 00:45:16.366057   39669 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0927 00:45:16.366143   39669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0927 00:45:16.392473   39669 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0927 00:45:16.392544   39669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0927 00:45:16.418486   39669 provision.go:87] duration metric: took 293.782736ms to configureAuth
	I0927 00:45:16.418514   39669 buildroot.go:189] setting minikube options for container-runtime
	I0927 00:45:16.418809   39669 config.go:182] Loaded profile config "ha-631834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 00:45:16.418894   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:45:16.421316   39669 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:45:16.421670   39669 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:45:16.421696   39669 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:45:16.421870   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:45:16.422053   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:45:16.422187   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:45:16.422322   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:45:16.422459   39669 main.go:141] libmachine: Using SSH client type: native
	I0927 00:45:16.422660   39669 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0927 00:45:16.422682   39669 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 00:46:47.255178   39669 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 00:46:47.255208   39669 machine.go:96] duration metric: took 1m31.5238267s to provisionDockerMachine
	I0927 00:46:47.255221   39669 start.go:293] postStartSetup for "ha-631834" (driver="kvm2")
	I0927 00:46:47.255234   39669 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 00:46:47.255253   39669 main.go:141] libmachine: (ha-631834) Calling .DriverName
	I0927 00:46:47.255565   39669 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 00:46:47.255599   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:46:47.258683   39669 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:46:47.259119   39669 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:46:47.259146   39669 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:46:47.259275   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:46:47.259451   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:46:47.259630   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:46:47.259763   39669 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834/id_rsa Username:docker}
	I0927 00:46:47.346533   39669 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 00:46:47.350930   39669 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 00:46:47.350952   39669 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/addons for local assets ...
	I0927 00:46:47.351011   39669 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/files for local assets ...
	I0927 00:46:47.351096   39669 filesync.go:149] local asset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> 221382.pem in /etc/ssl/certs
	I0927 00:46:47.351108   39669 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> /etc/ssl/certs/221382.pem
	I0927 00:46:47.351226   39669 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 00:46:47.360896   39669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /etc/ssl/certs/221382.pem (1708 bytes)
	I0927 00:46:47.385556   39669 start.go:296] duration metric: took 130.322943ms for postStartSetup
	I0927 00:46:47.385594   39669 main.go:141] libmachine: (ha-631834) Calling .DriverName
	I0927 00:46:47.385863   39669 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0927 00:46:47.385888   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:46:47.388244   39669 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:46:47.388615   39669 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:46:47.388638   39669 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:46:47.388772   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:46:47.388955   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:46:47.389103   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:46:47.389210   39669 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834/id_rsa Username:docker}
	W0927 00:46:47.473870   39669 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0927 00:46:47.473901   39669 fix.go:56] duration metric: took 1m31.763337076s for fixHost
	I0927 00:46:47.473927   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:46:47.476481   39669 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:46:47.476835   39669 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:46:47.476877   39669 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:46:47.477009   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:46:47.477187   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:46:47.477331   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:46:47.477459   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:46:47.477588   39669 main.go:141] libmachine: Using SSH client type: native
	I0927 00:46:47.477801   39669 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0927 00:46:47.477814   39669 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 00:46:47.592268   39669 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727398007.556425815
	
	I0927 00:46:47.592290   39669 fix.go:216] guest clock: 1727398007.556425815
	I0927 00:46:47.592297   39669 fix.go:229] Guest: 2024-09-27 00:46:47.556425815 +0000 UTC Remote: 2024-09-27 00:46:47.473910129 +0000 UTC m=+91.887913645 (delta=82.515686ms)
	I0927 00:46:47.592315   39669 fix.go:200] guest clock delta is within tolerance: 82.515686ms
	I0927 00:46:47.592319   39669 start.go:83] releasing machines lock for "ha-631834", held for 1m31.881767828s
	I0927 00:46:47.592336   39669 main.go:141] libmachine: (ha-631834) Calling .DriverName
	I0927 00:46:47.592579   39669 main.go:141] libmachine: (ha-631834) Calling .GetIP
	I0927 00:46:47.595053   39669 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:46:47.595526   39669 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:46:47.595559   39669 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:46:47.595724   39669 main.go:141] libmachine: (ha-631834) Calling .DriverName
	I0927 00:46:47.596182   39669 main.go:141] libmachine: (ha-631834) Calling .DriverName
	I0927 00:46:47.596335   39669 main.go:141] libmachine: (ha-631834) Calling .DriverName
	I0927 00:46:47.596460   39669 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 00:46:47.596505   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:46:47.596528   39669 ssh_runner.go:195] Run: cat /version.json
	I0927 00:46:47.596545   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:46:47.598887   39669 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:46:47.599331   39669 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:46:47.599356   39669 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:46:47.599374   39669 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:46:47.599469   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:46:47.599627   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:46:47.599771   39669 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:46:47.599771   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:46:47.599790   39669 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:46:47.599920   39669 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834/id_rsa Username:docker}
	I0927 00:46:47.599943   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:46:47.600051   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:46:47.600171   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:46:47.600248   39669 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834/id_rsa Username:docker}
	I0927 00:46:47.688689   39669 ssh_runner.go:195] Run: systemctl --version
	I0927 00:46:47.714398   39669 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 00:46:47.879371   39669 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 00:46:47.886036   39669 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 00:46:47.886106   39669 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 00:46:47.897179   39669 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0927 00:46:47.897200   39669 start.go:495] detecting cgroup driver to use...
	I0927 00:46:47.897251   39669 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 00:46:47.915667   39669 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 00:46:47.932254   39669 docker.go:217] disabling cri-docker service (if available) ...
	I0927 00:46:47.932303   39669 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 00:46:47.949419   39669 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 00:46:47.965392   39669 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 00:46:48.131365   39669 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 00:46:48.287077   39669 docker.go:233] disabling docker service ...
	I0927 00:46:48.287148   39669 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 00:46:48.308103   39669 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 00:46:48.322916   39669 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 00:46:48.493607   39669 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 00:46:48.649560   39669 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 00:46:48.663603   39669 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 00:46:48.682388   39669 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0927 00:46:48.682441   39669 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:46:48.693147   39669 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 00:46:48.693209   39669 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:46:48.704362   39669 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:46:48.715430   39669 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:46:48.726552   39669 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 00:46:48.737897   39669 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:46:48.749082   39669 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:46:48.761612   39669 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:46:48.772464   39669 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 00:46:48.782645   39669 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 00:46:48.792034   39669 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:46:48.934287   39669 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0927 00:46:49.970207   39669 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.035884557s)
	I0927 00:46:49.970237   39669 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 00:46:49.970288   39669 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 00:46:49.975278   39669 start.go:563] Will wait 60s for crictl version
	I0927 00:46:49.975346   39669 ssh_runner.go:195] Run: which crictl
	I0927 00:46:49.979282   39669 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 00:46:50.016619   39669 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0927 00:46:50.016699   39669 ssh_runner.go:195] Run: crio --version
	I0927 00:46:50.045534   39669 ssh_runner.go:195] Run: crio --version
	I0927 00:46:50.077277   39669 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0927 00:46:50.078595   39669 main.go:141] libmachine: (ha-631834) Calling .GetIP
	I0927 00:46:50.081296   39669 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:46:50.081618   39669 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:46:50.081646   39669 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:46:50.081876   39669 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0927 00:46:50.086621   39669 kubeadm.go:883] updating cluster {Name:ha-631834 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-631834 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.184 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.92 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.79 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 00:46:50.086742   39669 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 00:46:50.086792   39669 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 00:46:50.131171   39669 crio.go:514] all images are preloaded for cri-o runtime.
	I0927 00:46:50.131190   39669 crio.go:433] Images already preloaded, skipping extraction
	I0927 00:46:50.131243   39669 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 00:46:50.165747   39669 crio.go:514] all images are preloaded for cri-o runtime.
	I0927 00:46:50.165769   39669 cache_images.go:84] Images are preloaded, skipping loading
	I0927 00:46:50.165780   39669 kubeadm.go:934] updating node { 192.168.39.4 8443 v1.31.1 crio true true} ...
	I0927 00:46:50.165882   39669 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-631834 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-631834 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 00:46:50.165954   39669 ssh_runner.go:195] Run: crio config
	I0927 00:46:50.213216   39669 cni.go:84] Creating CNI manager for ""
	I0927 00:46:50.213240   39669 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0927 00:46:50.213249   39669 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 00:46:50.213300   39669 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.4 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-631834 NodeName:ha-631834 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0927 00:46:50.213486   39669 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.4
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-631834"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
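	This generated config is what later gets copied to /var/tmp/minikube/kubeadm.yaml.new (see the scp lines below). The deliberately loose eviction and conntrack settings can be spot-checked on the node, and recent kubeadm releases can additionally lint the file with kubeadm config validate; illustrative checks:

	    sudo grep -A4 evictionHard /var/tmp/minikube/kubeadm.yaml.new    # 0% thresholds disable disk-pressure eviction
	    sudo grep -A5 conntrack /var/tmp/minikube/kubeadm.yaml.new       # maxPerCore: 0 leaves the nf_conntrack sysctls untouched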
	
	I0927 00:46:50.213508   39669 kube-vip.go:115] generating kube-vip config ...
	I0927 00:46:50.213557   39669 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0927 00:46:50.225266   39669 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0927 00:46:50.225354   39669 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
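	kube-vip runs as a static pod on every control plane; with cp_enable and lb_enable set it advertises the HA VIP 192.168.39.254 on eth0 via ARP, load-balances port 8443, and elects a leader through the plndr-cp-lock lease (the modprobe above pre-loads the ip_vs modules this needs). A sketch of how to see which node currently holds the VIP, assuming shell access on a control-plane VM and a working kubeconfig:

	    lsmod | grep ip_vs                                 # IPVS modules loaded by the modprobe step
	    ip addr show eth0 | grep 192.168.39.254            # the VIP appears only on the current leader
	    kubectl -n kube-system get lease plndr-cp-lock -o jsonpath='{.spec.holderIdentity}{"\n"}'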
	I0927 00:46:50.225405   39669 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 00:46:50.235071   39669 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 00:46:50.235137   39669 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0927 00:46:50.244236   39669 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0927 00:46:50.260449   39669 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 00:46:50.276909   39669 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0927 00:46:50.293310   39669 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0927 00:46:50.310086   39669 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
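	The grep checks that control-plane.minikube.internal, the controlPlaneEndpoint used in the ClusterConfiguration above, already maps to the HA VIP 192.168.39.254 in the node's /etc/hosts. A manual equivalent:

	    getent hosts control-plane.minikube.internal    # expected to print 192.168.39.254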
	I0927 00:46:50.315169   39669 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:46:50.457736   39669 ssh_runner.go:195] Run: sudo systemctl start kubelet
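	After the daemon-reload, starting kubelet is what actually launches the static pods written to /etc/kubernetes/manifests, including the kube-vip.yaml above. If this step stalls, a first look usually goes through systemd (illustrative only):

	    sudo systemctl is-active kubelet
	    sudo journalctl -u kubelet --no-pager -n 20    # most recent kubelet log lines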
	I0927 00:46:50.474066   39669 certs.go:68] Setting up /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834 for IP: 192.168.39.4
	I0927 00:46:50.474109   39669 certs.go:194] generating shared ca certs ...
	I0927 00:46:50.474129   39669 certs.go:226] acquiring lock for ca certs: {Name:mkdfc5b7e93f77f5ae72cc653545624244421aa5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:46:50.474269   39669 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key
	I0927 00:46:50.474305   39669 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key
	I0927 00:46:50.474314   39669 certs.go:256] generating profile certs ...
	I0927 00:46:50.474382   39669 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/client.key
	I0927 00:46:50.474409   39669 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key.938c64d2
	I0927 00:46:50.474423   39669 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt.938c64d2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.4 192.168.39.184 192.168.39.92 192.168.39.254]
	I0927 00:46:50.646860   39669 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt.938c64d2 ...
	I0927 00:46:50.646893   39669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt.938c64d2: {Name:mk1bb4e1a7b279c05f6cee4665ac52af09113e94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:46:50.647055   39669 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key.938c64d2 ...
	I0927 00:46:50.647067   39669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key.938c64d2: {Name:mk314247be74517e74521d2d0e949da0d20854a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:46:50.647155   39669 certs.go:381] copying /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt.938c64d2 -> /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt
	I0927 00:46:50.647340   39669 certs.go:385] copying /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key.938c64d2 -> /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key
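	The regenerated apiserver certificate is signed for every control-plane IP plus the VIP (the IP list in the crypto.go line above), which is what lets clients reach any of the three API servers through 192.168.39.254. Once the cert has been copied to the node (see the scp lines below), its SANs can be inspected with openssl, for example:

	    sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A2 'Subject Alternative Name'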
	I0927 00:46:50.647476   39669 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.key
	I0927 00:46:50.647490   39669 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0927 00:46:50.647503   39669 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0927 00:46:50.647518   39669 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0927 00:46:50.647531   39669 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0927 00:46:50.647543   39669 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0927 00:46:50.647555   39669 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0927 00:46:50.647567   39669 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0927 00:46:50.647578   39669 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0927 00:46:50.647621   39669 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem (1338 bytes)
	W0927 00:46:50.647649   39669 certs.go:480] ignoring /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138_empty.pem, impossibly tiny 0 bytes
	I0927 00:46:50.647657   39669 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem (1671 bytes)
	I0927 00:46:50.647679   39669 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem (1078 bytes)
	I0927 00:46:50.647700   39669 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem (1123 bytes)
	I0927 00:46:50.647722   39669 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem (1675 bytes)
	I0927 00:46:50.647757   39669 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem (1708 bytes)
	I0927 00:46:50.647782   39669 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem -> /usr/share/ca-certificates/22138.pem
	I0927 00:46:50.647795   39669 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> /usr/share/ca-certificates/221382.pem
	I0927 00:46:50.647807   39669 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:46:50.648325   39669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 00:46:50.675015   39669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0927 00:46:50.699508   39669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 00:46:50.724729   39669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0927 00:46:50.750908   39669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0927 00:46:50.803364   39669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0927 00:46:50.827195   39669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 00:46:50.850972   39669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0927 00:46:50.875129   39669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem --> /usr/share/ca-certificates/22138.pem (1338 bytes)
	I0927 00:46:50.899086   39669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /usr/share/ca-certificates/221382.pem (1708 bytes)
	I0927 00:46:50.922297   39669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 00:46:50.945924   39669 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 00:46:50.962021   39669 ssh_runner.go:195] Run: openssl version
	I0927 00:46:50.968262   39669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22138.pem && ln -fs /usr/share/ca-certificates/22138.pem /etc/ssl/certs/22138.pem"
	I0927 00:46:50.979689   39669 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22138.pem
	I0927 00:46:50.984215   39669 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 00:32 /usr/share/ca-certificates/22138.pem
	I0927 00:46:50.984277   39669 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22138.pem
	I0927 00:46:50.990633   39669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22138.pem /etc/ssl/certs/51391683.0"
	I0927 00:46:51.000418   39669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221382.pem && ln -fs /usr/share/ca-certificates/221382.pem /etc/ssl/certs/221382.pem"
	I0927 00:46:51.012518   39669 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221382.pem
	I0927 00:46:51.017306   39669 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 00:32 /usr/share/ca-certificates/221382.pem
	I0927 00:46:51.017366   39669 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221382.pem
	I0927 00:46:51.023417   39669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221382.pem /etc/ssl/certs/3ec20f2e.0"
	I0927 00:46:51.033164   39669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 00:46:51.044417   39669 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:46:51.049229   39669 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:16 /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:46:51.049284   39669 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:46:51.055042   39669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
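	The symlink names used here (51391683.0, 3ec20f2e.0, b5213941.0) are the OpenSSL subject-name hashes of the respective certificates, the same values the openssl x509 -hash runs above compute; this is how OpenSSL-based clients on the node locate the minikube CA. For example:

	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints b5213941
	    ls -l /etc/ssl/certs/b5213941.0                                            # -> /etc/ssl/certs/minikubeCA.pem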
	I0927 00:46:51.065109   39669 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 00:46:51.069685   39669 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0927 00:46:51.075469   39669 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0927 00:46:51.081374   39669 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0927 00:46:51.086742   39669 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0927 00:46:51.092682   39669 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0927 00:46:51.098560   39669 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
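	Each of these runs uses -checkend 86400, which makes openssl exit non-zero if the certificate expires within the next 86400 seconds (24 hours), so the sequence doubles as an expiry guard before StartCluster. Reproduced by hand:

	    sudo openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	      && echo 'valid for at least 24h' || echo 'expires within 24h'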
	I0927 00:46:51.103846   39669 kubeadm.go:392] StartCluster: {Name:ha-631834 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-631834 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.184 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.92 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.79 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:
false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mo
untMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 00:46:51.103960   39669 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0927 00:46:51.104019   39669 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 00:46:51.146164   39669 cri.go:89] found id: "e9e067a1fed15cfef10e131070af0e9b5d4f3b5e6bd6f50e2add6dfacf649c6b"
	I0927 00:46:51.146190   39669 cri.go:89] found id: "6afb57bcc4bfcdda739c48111b2456f7f6cc69bd08d6fcfb3350cd4359734fad"
	I0927 00:46:51.146195   39669 cri.go:89] found id: "09d6ef76d31a0a45df70f995dda62d413f610d2ededa8af74d94bb2e5282f290"
	I0927 00:46:51.146200   39669 cri.go:89] found id: "48bf9fa0669d9175727529363a4c49e51ac351fad94e73446f0f5dfe9ede418f"
	I0927 00:46:51.146204   39669 cri.go:89] found id: "f0d4e929a59caa5d6cdfb939587ec81dce00105e7b9350778204b299cf597427"
	I0927 00:46:51.146209   39669 cri.go:89] found id: "3c06ebd9099a79e7ccf81acb3dcdfa061f142b4657de196fa50e568e5b299930"
	I0927 00:46:51.146213   39669 cri.go:89] found id: "805b55d391308302ebc0884d741fd7ca86ffe2f6feed8bf7ab229f3729f34327"
	I0927 00:46:51.146217   39669 cri.go:89] found id: "182f24ac501b715adc06f080914c11407429e052bc7a726892761dd0a2d3a8e9"
	I0927 00:46:51.146220   39669 cri.go:89] found id: "555c7e8f6d5181676711d15bda6aa11fd8d84d9fff0f6e98280c72d5296aefad"
	I0927 00:46:51.146227   39669 cri.go:89] found id: "536c1c26f6d72525b81ce4c35ed530528a8cd001f4c530cea2e1d722325e76b3"
	I0927 00:46:51.146231   39669 cri.go:89] found id: "5c88792788fc238aaae860e14a6c44c40020da3356d29223917fe2fb2e8901ac"
	I0927 00:46:51.146234   39669 cri.go:89] found id: "aa717868fa66e6c86747ecfb1ac580a98666975a9c6974d3a1037451ff37576e"
	I0927 00:46:51.146236   39669 cri.go:89] found id: "5dcaba50a39a2f812258d986d3444002c5a887ee474104a98a69129c21ec40db"
	I0927 00:46:51.146239   39669 cri.go:89] found id: ""
	I0927 00:46:51.146277   39669 ssh_runner.go:195] Run: sudo runc list -f json
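	runc list queries the OCI runtime one level below the crictl call above; its JSON output can be reduced to container IDs and states for comparison with the cri.go list (a sketch, assuming jq is installed on the node):

	    sudo runc list -f json | jq -r '.[] | "\(.id) \(.status)"'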
	
	
	==> CRI-O <==
	Sep 27 00:49:11 ha-631834 crio[3693]: time="2024-09-27 00:49:11.459755417Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b1dc4d6e-3aa2-49f1-af7f-b8e16cf643f0 name=/runtime.v1.RuntimeService/Version
	Sep 27 00:49:11 ha-631834 crio[3693]: time="2024-09-27 00:49:11.460951347Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0dcff3ef-0dd6-4860-8108-4a799f9bea53 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:49:11 ha-631834 crio[3693]: time="2024-09-27 00:49:11.461509960Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727398151461484703,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0dcff3ef-0dd6-4860-8108-4a799f9bea53 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:49:11 ha-631834 crio[3693]: time="2024-09-27 00:49:11.462032432Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=da5b8cdb-0092-4e3c-9a2f-af2b263a1f1c name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:49:11 ha-631834 crio[3693]: time="2024-09-27 00:49:11.462088243Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=da5b8cdb-0092-4e3c-9a2f-af2b263a1f1c name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:49:11 ha-631834 crio[3693]: time="2024-09-27 00:49:11.462549447Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8988c8b2e89d4cae95f059fe90bd6419c77bda9b7da567d71120d5b37d44b904,PodSandboxId:a88387509d8c47d8e1cf51f7c2c85475030c31e45457ea6774067aa5358eb8d8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727398063496006064,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbafe551-2645-4016-83f6-1133824d926d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af2833aa86bec997a9eac660980344b8caf026dee1b491f539a9024dc35b3dd5,PodSandboxId:a365021f4c4409bc7ef02241b1e8353cacc226176a8374acf1566bd10a57b2a5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727398049910096759,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hczmj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 55e4dd58-9193-49ba-a2e8-1c6835898fb1,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73c2e59cd28da30c784255b37b22005602829501c488d381587497738b1a190d,PodSandboxId:3f5eaa7b790b56c09c6bde23dd28d501b5c9b167eb904198c68292514134fac4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727398049209723311,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71a28d11a5db44bbf2777b262efa1514,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14c982482268a0741c4ea4b43b359ddf56e9c7a8963d1d5b697eccb9977cce45,PodSandboxId:1e81330291c0345d01677bc0e6f129d1c95393e00adbe8a7670e5e5776255bad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727398048084828510,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afee14d1206143c4d719c111467c379b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdd819bab4c02d8f590578a99c49dc031ad0e16fdd269749d709465e158511ed,PodSandboxId:47f2ed579b1da0a34f85a2ce3790a54eb441e35afd874466f304415c3642bf22,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727398030162040236,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2c19ca79cb21fa0ff63b2f19f35644a,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b875ed8e00bedb5eb1902895c4b2572101bd8ed13c0334beee29c833bdb420f,PodSandboxId:851d241b7a3fa4b5d3ed7ef3daf1effcab2ef39c36598b48bc6d0cb59bb5d135,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727398016720565212,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-l6ncl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3861149b-7c67-4d48-9d24-8fa08aefda61,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:b8db6d253c02d0e9ccdb6f17e99687133896a05f908abbbb072860ad547cb0e6,PodSandboxId:dd1921da801ddbb1557b9e203c535f9fac5d58ef79d8eea5b663bd4542e7d76a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727398016721287875,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kg8kf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee98faac-e03c-427f-9a78-2cf06d2f85cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TC
P\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:993366a0cc03df59289c28caf8ac0f7a3eaf5ca3ee7f79410d82c5c962efc0b1,PodSandboxId:4fc98d18b24b94a2a3e434010b1aab0a65fe4769deaf52d2d7abbb40be6322ac,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727398016503886203,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7n244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9fac118-1b31-4cf3-bc21-a4536e45a511,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d81bbea7c9e39b41a55665bdaab4478d402c76bb5d2308fe0d1e63301b1dcd2e,PodSandboxId:a88387509d8c47d8e1cf51f7c2c85475030c31e45457ea6774067aa5358eb8d8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727398016297449851,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbafe551-2645-4016-83f6-1133824d926d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69083186c23c45c853d932a68dc6a9fb513bf9b26f0169046d51c75b57a58b96,PodSandboxId:2163ce3d56b93317faffe4240dd147a31820077f2a34e6bcda084759b0068fb9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727398016484380313,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-479dv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee318b64-2274-4106-93ed-9f62151107f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b7ffd9dfb6283a77a910b62e4c801f24fc7c0059c7d1b3db21ae86fdaf9b585,PodSandboxId:2a75a0cdf184e9400231dc662d856f40efaa229fdab3a876dc729499f539e15a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727398016431138280,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-631834,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 10057dece9752ed428ddf4bfd465bb3d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3608e4904bcf67c5669cc8dfae0c10b769d49c63cad46043995a67c94c29d108,PodSandboxId:3f5eaa7b790b56c09c6bde23dd28d501b5c9b167eb904198c68292514134fac4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727398016511793966,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-631834,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 71a28d11a5db44bbf2777b262efa1514,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e553da3278170117765827feaa6ada5203f508283bebb0adf9105b677a147fc,PodSandboxId:8b21be7811c3b0fe2ce57ec24aeaaa5eedfdc234f89c09b3c8f0343f20e238f9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727398016325051081,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a32cc8b63ea212e
d38709daf6762cc1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c930f7f8b324fb82c55bdec2706385f6ba3dc086cb93f92b31f33bed9ae08db,PodSandboxId:1e81330291c0345d01677bc0e6f129d1c95393e00adbe8a7670e5e5776255bad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727398016273461650,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afee14d1206143c4d719c111467c379b,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74dc20e31bc6d7c20e5d68ee7fa69cfe0328a93ccef047ea1ef82155869ad406,PodSandboxId:ebc71356fe8860c5eadadc4bfc35fe223c81b382b7fa4f7400dfdd4e30cca8e9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727397561974441361,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hczmj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 55e4dd58-9193-49ba-a2e8-1c6835898fb1,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c06ebd9099a79e7ccf81acb3dcdfa061f142b4657de196fa50e568e5b299930,PodSandboxId:8f236d02ca028f9009a4efcc28e0562a8b0e8ec154921e53c93e5a527823c39a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727397416531871339,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kg8kf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee98faac-e03c-427f-9a78-2cf06d2f85cf,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0d4e929a59caa5d6cdfb939587ec81dce00105e7b9350778204b299cf597427,PodSandboxId:2cb3143c36c8e5612e26df2355c120393a34014b84051ee13e5f0f641240ed61,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727397416548905292,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-479dv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee318b64-2274-4106-93ed-9f62151107f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:805b55d391308302ebc0884d741fd7ca86ffe2f6feed8bf7ab229f3729f34327,PodSandboxId:7e2d35a1098a1e498cdf730b14a6d4f456431c09085148024bcec56931467462,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727397404353535359,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-l6ncl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3861149b-7c67-4d48-9d24-8fa08aefda61,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:182f24ac501b715adc06f080914c11407429e052bc7a726892761dd0a2d3a8e9,PodSandboxId:c0f5b32248925e239a327ed4b6dc2a3da7f10accded478a3ce22050a8fe332d8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727397404131630732,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7n244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9fac118-1b31-4cf3-bc21-a4536e45a511,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c88792788fc238aaae860e14a6c44c40020da3356d29223917fe2fb2e8901ac,PodSandboxId:74609d9fcf5f5f8d3b57d4290bf525ef816e716d1438ea25df07d7a697e2bb1a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727397392427504324,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10057dece9752ed428ddf4bfd465bb3d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:536c1c26f6d72525b81ce4c35ed530528a8cd001f4c530cea2e1d722325e76b3,PodSandboxId:de8c10edafaa7ba5a57a5150b492fa19b6a95a38b8f3da7e2385b723a1d4f907,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1727397392442731508,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a32cc8b63ea212ed38709daf6762cc1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=da5b8cdb-0092-4e3c-9a2f-af2b263a1f1c name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:49:11 ha-631834 crio[3693]: time="2024-09-27 00:49:11.504473463Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=c15c9c7d-5f58-412e-b621-7ca0d7e05156 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 27 00:49:11 ha-631834 crio[3693]: time="2024-09-27 00:49:11.504850857Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:a365021f4c4409bc7ef02241b1e8353cacc226176a8374acf1566bd10a57b2a5,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-hczmj,Uid:55e4dd58-9193-49ba-a2e8-1c6835898fb1,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727398049739985127,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-hczmj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 55e4dd58-9193-49ba-a2e8-1c6835898fb1,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-27T00:39:18.015402395Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:47f2ed579b1da0a34f85a2ce3790a54eb441e35afd874466f304415c3642bf22,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-631834,Uid:e2c19ca79cb21fa0ff63b2f19f35644a,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1727398030063082117,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2c19ca79cb21fa0ff63b2f19f35644a,},Annotations:map[string]string{kubernetes.io/config.hash: e2c19ca79cb21fa0ff63b2f19f35644a,kubernetes.io/config.seen: 2024-09-27T00:46:50.275913244Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:dd1921da801ddbb1557b9e203c535f9fac5d58ef79d8eea5b663bd4542e7d76a,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-kg8kf,Uid:ee98faac-e03c-427f-9a78-2cf06d2f85cf,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727398016012680912,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-kg8kf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee98faac-e03c-427f-9a78-2cf06d2f85cf,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09
-27T00:36:55.959296032Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2163ce3d56b93317faffe4240dd147a31820077f2a34e6bcda084759b0068fb9,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-479dv,Uid:ee318b64-2274-4106-93ed-9f62151107f1,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727398015980355235,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-479dv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee318b64-2274-4106-93ed-9f62151107f1,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-27T00:36:55.971385863Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3f5eaa7b790b56c09c6bde23dd28d501b5c9b167eb904198c68292514134fac4,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-631834,Uid:71a28d11a5db44bbf2777b262efa1514,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727398015949677645,Labels:map[str
ing]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71a28d11a5db44bbf2777b262efa1514,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 71a28d11a5db44bbf2777b262efa1514,kubernetes.io/config.seen: 2024-09-27T00:36:38.456181833Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1e81330291c0345d01677bc0e6f129d1c95393e00adbe8a7670e5e5776255bad,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-631834,Uid:afee14d1206143c4d719c111467c379b,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727398015940079969,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afee14d1206143c4d719c111467c379b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/
kube-apiserver.advertise-address.endpoint: 192.168.39.4:8443,kubernetes.io/config.hash: afee14d1206143c4d719c111467c379b,kubernetes.io/config.seen: 2024-09-27T00:36:38.456180608Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4fc98d18b24b94a2a3e434010b1aab0a65fe4769deaf52d2d7abbb40be6322ac,Metadata:&PodSandboxMetadata{Name:kube-proxy-7n244,Uid:d9fac118-1b31-4cf3-bc21-a4536e45a511,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727398015935541968,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-7n244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9fac118-1b31-4cf3-bc21-a4536e45a511,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-27T00:36:43.473610313Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:851d241b7a3fa4b5d3ed7ef3daf1effcab2ef39c36598b48bc6d0cb59bb5d135,Metadata:&PodSandboxMetadat
a{Name:kindnet-l6ncl,Uid:3861149b-7c67-4d48-9d24-8fa08aefda61,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727398015921090056,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-l6ncl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3861149b-7c67-4d48-9d24-8fa08aefda61,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-27T00:36:43.462190063Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2a75a0cdf184e9400231dc662d856f40efaa229fdab3a876dc729499f539e15a,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-631834,Uid:10057dece9752ed428ddf4bfd465bb3d,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727398015885364440,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-631834,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: 10057dece9752ed428ddf4bfd465bb3d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 10057dece9752ed428ddf4bfd465bb3d,kubernetes.io/config.seen: 2024-09-27T00:36:38.456182891Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8b21be7811c3b0fe2ce57ec24aeaaa5eedfdc234f89c09b3c8f0343f20e238f9,Metadata:&PodSandboxMetadata{Name:etcd-ha-631834,Uid:2a32cc8b63ea212ed38709daf6762cc1,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727398015868856501,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a32cc8b63ea212ed38709daf6762cc1,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.4:2379,kubernetes.io/config.hash: 2a32cc8b63ea212ed38709daf6762cc1,kubernetes.io/config.seen: 2024-09-27T00:36:38.456177029Z,kubernetes.io/config.source: file,
},RuntimeHandler:,},&PodSandbox{Id:a88387509d8c47d8e1cf51f7c2c85475030c31e45457ea6774067aa5358eb8d8,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:dbafe551-2645-4016-83f6-1133824d926d,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727398015865283813,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbafe551-2645-4016-83f6-1133824d926d,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePul
lPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-27T00:36:55.969309352Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ebc71356fe8860c5eadadc4bfc35fe223c81b382b7fa4f7400dfdd4e30cca8e9,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-hczmj,Uid:55e4dd58-9193-49ba-a2e8-1c6835898fb1,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1727397558330820881,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-hczmj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 55e4dd58-9193-49ba-a2e8-1c6835898fb1,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-27T00:39:18.015402395Z,kubernetes.io/config.source: ap
i,},RuntimeHandler:,},&PodSandbox{Id:2cb3143c36c8e5612e26df2355c120393a34014b84051ee13e5f0f641240ed61,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-479dv,Uid:ee318b64-2274-4106-93ed-9f62151107f1,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1727397416284003471,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-479dv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee318b64-2274-4106-93ed-9f62151107f1,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-27T00:36:55.971385863Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8f236d02ca028f9009a4efcc28e0562a8b0e8ec154921e53c93e5a527823c39a,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-kg8kf,Uid:ee98faac-e03c-427f-9a78-2cf06d2f85cf,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1727397416265889136,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubern
etes.pod.name: coredns-7c65d6cfc9-kg8kf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee98faac-e03c-427f-9a78-2cf06d2f85cf,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-27T00:36:55.959296032Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7e2d35a1098a1e498cdf730b14a6d4f456431c09085148024bcec56931467462,Metadata:&PodSandboxMetadata{Name:kindnet-l6ncl,Uid:3861149b-7c67-4d48-9d24-8fa08aefda61,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1727397403804322011,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-l6ncl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3861149b-7c67-4d48-9d24-8fa08aefda61,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-27T00:36:43.462190063Z,kubernetes.io/config.source: api,},RuntimeH
andler:,},&PodSandbox{Id:c0f5b32248925e239a327ed4b6dc2a3da7f10accded478a3ce22050a8fe332d8,Metadata:&PodSandboxMetadata{Name:kube-proxy-7n244,Uid:d9fac118-1b31-4cf3-bc21-a4536e45a511,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1727397403803732849,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-7n244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9fac118-1b31-4cf3-bc21-a4536e45a511,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-27T00:36:43.473610313Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:de8c10edafaa7ba5a57a5150b492fa19b6a95a38b8f3da7e2385b723a1d4f907,Metadata:&PodSandboxMetadata{Name:etcd-ha-631834,Uid:2a32cc8b63ea212ed38709daf6762cc1,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1727397392159704302,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD
,io.kubernetes.pod.name: etcd-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a32cc8b63ea212ed38709daf6762cc1,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.4:2379,kubernetes.io/config.hash: 2a32cc8b63ea212ed38709daf6762cc1,kubernetes.io/config.seen: 2024-09-27T00:36:31.631709370Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:74609d9fcf5f5f8d3b57d4290bf525ef816e716d1438ea25df07d7a697e2bb1a,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-631834,Uid:10057dece9752ed428ddf4bfd465bb3d,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1727397392123638188,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10057dece9752ed428ddf4bfd465bb3d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 10057dece9752e
d428ddf4bfd465bb3d,kubernetes.io/config.seen: 2024-09-27T00:36:31.631712772Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=c15c9c7d-5f58-412e-b621-7ca0d7e05156 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 27 00:49:11 ha-631834 crio[3693]: time="2024-09-27 00:49:11.505733983Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1f8a1a79-31c7-4c33-847d-5da88be6bec4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:49:11 ha-631834 crio[3693]: time="2024-09-27 00:49:11.505794318Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1f8a1a79-31c7-4c33-847d-5da88be6bec4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:49:11 ha-631834 crio[3693]: time="2024-09-27 00:49:11.508513984Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8988c8b2e89d4cae95f059fe90bd6419c77bda9b7da567d71120d5b37d44b904,PodSandboxId:a88387509d8c47d8e1cf51f7c2c85475030c31e45457ea6774067aa5358eb8d8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727398063496006064,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbafe551-2645-4016-83f6-1133824d926d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af2833aa86bec997a9eac660980344b8caf026dee1b491f539a9024dc35b3dd5,PodSandboxId:a365021f4c4409bc7ef02241b1e8353cacc226176a8374acf1566bd10a57b2a5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727398049910096759,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hczmj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 55e4dd58-9193-49ba-a2e8-1c6835898fb1,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73c2e59cd28da30c784255b37b22005602829501c488d381587497738b1a190d,PodSandboxId:3f5eaa7b790b56c09c6bde23dd28d501b5c9b167eb904198c68292514134fac4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727398049209723311,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71a28d11a5db44bbf2777b262efa1514,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14c982482268a0741c4ea4b43b359ddf56e9c7a8963d1d5b697eccb9977cce45,PodSandboxId:1e81330291c0345d01677bc0e6f129d1c95393e00adbe8a7670e5e5776255bad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727398048084828510,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afee14d1206143c4d719c111467c379b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdd819bab4c02d8f590578a99c49dc031ad0e16fdd269749d709465e158511ed,PodSandboxId:47f2ed579b1da0a34f85a2ce3790a54eb441e35afd874466f304415c3642bf22,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727398030162040236,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2c19ca79cb21fa0ff63b2f19f35644a,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b875ed8e00bedb5eb1902895c4b2572101bd8ed13c0334beee29c833bdb420f,PodSandboxId:851d241b7a3fa4b5d3ed7ef3daf1effcab2ef39c36598b48bc6d0cb59bb5d135,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727398016720565212,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-l6ncl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3861149b-7c67-4d48-9d24-8fa08aefda61,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:b8db6d253c02d0e9ccdb6f17e99687133896a05f908abbbb072860ad547cb0e6,PodSandboxId:dd1921da801ddbb1557b9e203c535f9fac5d58ef79d8eea5b663bd4542e7d76a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727398016721287875,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kg8kf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee98faac-e03c-427f-9a78-2cf06d2f85cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TC
P\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:993366a0cc03df59289c28caf8ac0f7a3eaf5ca3ee7f79410d82c5c962efc0b1,PodSandboxId:4fc98d18b24b94a2a3e434010b1aab0a65fe4769deaf52d2d7abbb40be6322ac,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727398016503886203,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7n244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9fac118-1b31-4cf3-bc21-a4536e45a511,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d81bbea7c9e39b41a55665bdaab4478d402c76bb5d2308fe0d1e63301b1dcd2e,PodSandboxId:a88387509d8c47d8e1cf51f7c2c85475030c31e45457ea6774067aa5358eb8d8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727398016297449851,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbafe551-2645-4016-83f6-1133824d926d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69083186c23c45c853d932a68dc6a9fb513bf9b26f0169046d51c75b57a58b96,PodSandboxId:2163ce3d56b93317faffe4240dd147a31820077f2a34e6bcda084759b0068fb9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727398016484380313,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-479dv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee318b64-2274-4106-93ed-9f62151107f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b7ffd9dfb6283a77a910b62e4c801f24fc7c0059c7d1b3db21ae86fdaf9b585,PodSandboxId:2a75a0cdf184e9400231dc662d856f40efaa229fdab3a876dc729499f539e15a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727398016431138280,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-631834,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 10057dece9752ed428ddf4bfd465bb3d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3608e4904bcf67c5669cc8dfae0c10b769d49c63cad46043995a67c94c29d108,PodSandboxId:3f5eaa7b790b56c09c6bde23dd28d501b5c9b167eb904198c68292514134fac4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727398016511793966,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-631834,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 71a28d11a5db44bbf2777b262efa1514,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e553da3278170117765827feaa6ada5203f508283bebb0adf9105b677a147fc,PodSandboxId:8b21be7811c3b0fe2ce57ec24aeaaa5eedfdc234f89c09b3c8f0343f20e238f9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727398016325051081,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a32cc8b63ea212e
d38709daf6762cc1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c930f7f8b324fb82c55bdec2706385f6ba3dc086cb93f92b31f33bed9ae08db,PodSandboxId:1e81330291c0345d01677bc0e6f129d1c95393e00adbe8a7670e5e5776255bad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727398016273461650,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afee14d1206143c4d719c111467c379b,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74dc20e31bc6d7c20e5d68ee7fa69cfe0328a93ccef047ea1ef82155869ad406,PodSandboxId:ebc71356fe8860c5eadadc4bfc35fe223c81b382b7fa4f7400dfdd4e30cca8e9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727397561974441361,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hczmj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 55e4dd58-9193-49ba-a2e8-1c6835898fb1,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c06ebd9099a79e7ccf81acb3dcdfa061f142b4657de196fa50e568e5b299930,PodSandboxId:8f236d02ca028f9009a4efcc28e0562a8b0e8ec154921e53c93e5a527823c39a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727397416531871339,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kg8kf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee98faac-e03c-427f-9a78-2cf06d2f85cf,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0d4e929a59caa5d6cdfb939587ec81dce00105e7b9350778204b299cf597427,PodSandboxId:2cb3143c36c8e5612e26df2355c120393a34014b84051ee13e5f0f641240ed61,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727397416548905292,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-479dv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee318b64-2274-4106-93ed-9f62151107f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:805b55d391308302ebc0884d741fd7ca86ffe2f6feed8bf7ab229f3729f34327,PodSandboxId:7e2d35a1098a1e498cdf730b14a6d4f456431c09085148024bcec56931467462,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727397404353535359,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-l6ncl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3861149b-7c67-4d48-9d24-8fa08aefda61,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:182f24ac501b715adc06f080914c11407429e052bc7a726892761dd0a2d3a8e9,PodSandboxId:c0f5b32248925e239a327ed4b6dc2a3da7f10accded478a3ce22050a8fe332d8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727397404131630732,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7n244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9fac118-1b31-4cf3-bc21-a4536e45a511,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c88792788fc238aaae860e14a6c44c40020da3356d29223917fe2fb2e8901ac,PodSandboxId:74609d9fcf5f5f8d3b57d4290bf525ef816e716d1438ea25df07d7a697e2bb1a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727397392427504324,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10057dece9752ed428ddf4bfd465bb3d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:536c1c26f6d72525b81ce4c35ed530528a8cd001f4c530cea2e1d722325e76b3,PodSandboxId:de8c10edafaa7ba5a57a5150b492fa19b6a95a38b8f3da7e2385b723a1d4f907,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1727397392442731508,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a32cc8b63ea212ed38709daf6762cc1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1f8a1a79-31c7-4c33-847d-5da88be6bec4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:49:11 ha-631834 crio[3693]: time="2024-09-27 00:49:11.517491635Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=aff40118-6034-4c3a-9f30-b82dad143f29 name=/runtime.v1.RuntimeService/Version
	Sep 27 00:49:11 ha-631834 crio[3693]: time="2024-09-27 00:49:11.517587028Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=aff40118-6034-4c3a-9f30-b82dad143f29 name=/runtime.v1.RuntimeService/Version
	Sep 27 00:49:11 ha-631834 crio[3693]: time="2024-09-27 00:49:11.523072212Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4a4abcad-85e4-46f2-8148-3ffbfd7189fc name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:49:11 ha-631834 crio[3693]: time="2024-09-27 00:49:11.523550653Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727398151523529310,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4a4abcad-85e4-46f2-8148-3ffbfd7189fc name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:49:11 ha-631834 crio[3693]: time="2024-09-27 00:49:11.524616225Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=71222c8a-97ad-40c8-aba4-4ce41ed94001 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:49:11 ha-631834 crio[3693]: time="2024-09-27 00:49:11.524845144Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=71222c8a-97ad-40c8-aba4-4ce41ed94001 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:49:11 ha-631834 crio[3693]: time="2024-09-27 00:49:11.525869625Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8988c8b2e89d4cae95f059fe90bd6419c77bda9b7da567d71120d5b37d44b904,PodSandboxId:a88387509d8c47d8e1cf51f7c2c85475030c31e45457ea6774067aa5358eb8d8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727398063496006064,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbafe551-2645-4016-83f6-1133824d926d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af2833aa86bec997a9eac660980344b8caf026dee1b491f539a9024dc35b3dd5,PodSandboxId:a365021f4c4409bc7ef02241b1e8353cacc226176a8374acf1566bd10a57b2a5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727398049910096759,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hczmj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 55e4dd58-9193-49ba-a2e8-1c6835898fb1,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73c2e59cd28da30c784255b37b22005602829501c488d381587497738b1a190d,PodSandboxId:3f5eaa7b790b56c09c6bde23dd28d501b5c9b167eb904198c68292514134fac4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727398049209723311,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71a28d11a5db44bbf2777b262efa1514,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14c982482268a0741c4ea4b43b359ddf56e9c7a8963d1d5b697eccb9977cce45,PodSandboxId:1e81330291c0345d01677bc0e6f129d1c95393e00adbe8a7670e5e5776255bad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727398048084828510,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afee14d1206143c4d719c111467c379b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdd819bab4c02d8f590578a99c49dc031ad0e16fdd269749d709465e158511ed,PodSandboxId:47f2ed579b1da0a34f85a2ce3790a54eb441e35afd874466f304415c3642bf22,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727398030162040236,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2c19ca79cb21fa0ff63b2f19f35644a,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b875ed8e00bedb5eb1902895c4b2572101bd8ed13c0334beee29c833bdb420f,PodSandboxId:851d241b7a3fa4b5d3ed7ef3daf1effcab2ef39c36598b48bc6d0cb59bb5d135,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727398016720565212,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-l6ncl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3861149b-7c67-4d48-9d24-8fa08aefda61,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:b8db6d253c02d0e9ccdb6f17e99687133896a05f908abbbb072860ad547cb0e6,PodSandboxId:dd1921da801ddbb1557b9e203c535f9fac5d58ef79d8eea5b663bd4542e7d76a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727398016721287875,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kg8kf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee98faac-e03c-427f-9a78-2cf06d2f85cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TC
P\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:993366a0cc03df59289c28caf8ac0f7a3eaf5ca3ee7f79410d82c5c962efc0b1,PodSandboxId:4fc98d18b24b94a2a3e434010b1aab0a65fe4769deaf52d2d7abbb40be6322ac,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727398016503886203,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7n244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9fac118-1b31-4cf3-bc21-a4536e45a511,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d81bbea7c9e39b41a55665bdaab4478d402c76bb5d2308fe0d1e63301b1dcd2e,PodSandboxId:a88387509d8c47d8e1cf51f7c2c85475030c31e45457ea6774067aa5358eb8d8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727398016297449851,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbafe551-2645-4016-83f6-1133824d926d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69083186c23c45c853d932a68dc6a9fb513bf9b26f0169046d51c75b57a58b96,PodSandboxId:2163ce3d56b93317faffe4240dd147a31820077f2a34e6bcda084759b0068fb9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727398016484380313,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-479dv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee318b64-2274-4106-93ed-9f62151107f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b7ffd9dfb6283a77a910b62e4c801f24fc7c0059c7d1b3db21ae86fdaf9b585,PodSandboxId:2a75a0cdf184e9400231dc662d856f40efaa229fdab3a876dc729499f539e15a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727398016431138280,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-631834,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 10057dece9752ed428ddf4bfd465bb3d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3608e4904bcf67c5669cc8dfae0c10b769d49c63cad46043995a67c94c29d108,PodSandboxId:3f5eaa7b790b56c09c6bde23dd28d501b5c9b167eb904198c68292514134fac4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727398016511793966,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-631834,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 71a28d11a5db44bbf2777b262efa1514,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e553da3278170117765827feaa6ada5203f508283bebb0adf9105b677a147fc,PodSandboxId:8b21be7811c3b0fe2ce57ec24aeaaa5eedfdc234f89c09b3c8f0343f20e238f9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727398016325051081,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a32cc8b63ea212e
d38709daf6762cc1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c930f7f8b324fb82c55bdec2706385f6ba3dc086cb93f92b31f33bed9ae08db,PodSandboxId:1e81330291c0345d01677bc0e6f129d1c95393e00adbe8a7670e5e5776255bad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727398016273461650,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afee14d1206143c4d719c111467c379b,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74dc20e31bc6d7c20e5d68ee7fa69cfe0328a93ccef047ea1ef82155869ad406,PodSandboxId:ebc71356fe8860c5eadadc4bfc35fe223c81b382b7fa4f7400dfdd4e30cca8e9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727397561974441361,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hczmj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 55e4dd58-9193-49ba-a2e8-1c6835898fb1,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c06ebd9099a79e7ccf81acb3dcdfa061f142b4657de196fa50e568e5b299930,PodSandboxId:8f236d02ca028f9009a4efcc28e0562a8b0e8ec154921e53c93e5a527823c39a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727397416531871339,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kg8kf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee98faac-e03c-427f-9a78-2cf06d2f85cf,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0d4e929a59caa5d6cdfb939587ec81dce00105e7b9350778204b299cf597427,PodSandboxId:2cb3143c36c8e5612e26df2355c120393a34014b84051ee13e5f0f641240ed61,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727397416548905292,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-479dv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee318b64-2274-4106-93ed-9f62151107f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:805b55d391308302ebc0884d741fd7ca86ffe2f6feed8bf7ab229f3729f34327,PodSandboxId:7e2d35a1098a1e498cdf730b14a6d4f456431c09085148024bcec56931467462,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727397404353535359,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-l6ncl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3861149b-7c67-4d48-9d24-8fa08aefda61,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:182f24ac501b715adc06f080914c11407429e052bc7a726892761dd0a2d3a8e9,PodSandboxId:c0f5b32248925e239a327ed4b6dc2a3da7f10accded478a3ce22050a8fe332d8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727397404131630732,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7n244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9fac118-1b31-4cf3-bc21-a4536e45a511,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c88792788fc238aaae860e14a6c44c40020da3356d29223917fe2fb2e8901ac,PodSandboxId:74609d9fcf5f5f8d3b57d4290bf525ef816e716d1438ea25df07d7a697e2bb1a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727397392427504324,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10057dece9752ed428ddf4bfd465bb3d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:536c1c26f6d72525b81ce4c35ed530528a8cd001f4c530cea2e1d722325e76b3,PodSandboxId:de8c10edafaa7ba5a57a5150b492fa19b6a95a38b8f3da7e2385b723a1d4f907,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1727397392442731508,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a32cc8b63ea212ed38709daf6762cc1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=71222c8a-97ad-40c8-aba4-4ce41ed94001 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:49:11 ha-631834 crio[3693]: time="2024-09-27 00:49:11.575968960Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b511a0f6-3b1b-412c-b708-ecf401b9b2cc name=/runtime.v1.RuntimeService/Version
	Sep 27 00:49:11 ha-631834 crio[3693]: time="2024-09-27 00:49:11.576043804Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b511a0f6-3b1b-412c-b708-ecf401b9b2cc name=/runtime.v1.RuntimeService/Version
	Sep 27 00:49:11 ha-631834 crio[3693]: time="2024-09-27 00:49:11.577442215Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cb13eb2b-99a7-488c-a6ef-a2a9fd2824dc name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:49:11 ha-631834 crio[3693]: time="2024-09-27 00:49:11.577848957Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727398151577827527,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cb13eb2b-99a7-488c-a6ef-a2a9fd2824dc name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:49:11 ha-631834 crio[3693]: time="2024-09-27 00:49:11.578514311Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5fdd7c97-d83c-4dcd-b2db-92b4692e5427 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:49:11 ha-631834 crio[3693]: time="2024-09-27 00:49:11.578572074Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5fdd7c97-d83c-4dcd-b2db-92b4692e5427 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:49:11 ha-631834 crio[3693]: time="2024-09-27 00:49:11.578956084Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8988c8b2e89d4cae95f059fe90bd6419c77bda9b7da567d71120d5b37d44b904,PodSandboxId:a88387509d8c47d8e1cf51f7c2c85475030c31e45457ea6774067aa5358eb8d8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727398063496006064,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbafe551-2645-4016-83f6-1133824d926d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af2833aa86bec997a9eac660980344b8caf026dee1b491f539a9024dc35b3dd5,PodSandboxId:a365021f4c4409bc7ef02241b1e8353cacc226176a8374acf1566bd10a57b2a5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727398049910096759,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hczmj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 55e4dd58-9193-49ba-a2e8-1c6835898fb1,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73c2e59cd28da30c784255b37b22005602829501c488d381587497738b1a190d,PodSandboxId:3f5eaa7b790b56c09c6bde23dd28d501b5c9b167eb904198c68292514134fac4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727398049209723311,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71a28d11a5db44bbf2777b262efa1514,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14c982482268a0741c4ea4b43b359ddf56e9c7a8963d1d5b697eccb9977cce45,PodSandboxId:1e81330291c0345d01677bc0e6f129d1c95393e00adbe8a7670e5e5776255bad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727398048084828510,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afee14d1206143c4d719c111467c379b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdd819bab4c02d8f590578a99c49dc031ad0e16fdd269749d709465e158511ed,PodSandboxId:47f2ed579b1da0a34f85a2ce3790a54eb441e35afd874466f304415c3642bf22,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727398030162040236,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2c19ca79cb21fa0ff63b2f19f35644a,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b875ed8e00bedb5eb1902895c4b2572101bd8ed13c0334beee29c833bdb420f,PodSandboxId:851d241b7a3fa4b5d3ed7ef3daf1effcab2ef39c36598b48bc6d0cb59bb5d135,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727398016720565212,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-l6ncl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3861149b-7c67-4d48-9d24-8fa08aefda61,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:b8db6d253c02d0e9ccdb6f17e99687133896a05f908abbbb072860ad547cb0e6,PodSandboxId:dd1921da801ddbb1557b9e203c535f9fac5d58ef79d8eea5b663bd4542e7d76a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727398016721287875,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kg8kf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee98faac-e03c-427f-9a78-2cf06d2f85cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TC
P\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:993366a0cc03df59289c28caf8ac0f7a3eaf5ca3ee7f79410d82c5c962efc0b1,PodSandboxId:4fc98d18b24b94a2a3e434010b1aab0a65fe4769deaf52d2d7abbb40be6322ac,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727398016503886203,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7n244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9fac118-1b31-4cf3-bc21-a4536e45a511,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d81bbea7c9e39b41a55665bdaab4478d402c76bb5d2308fe0d1e63301b1dcd2e,PodSandboxId:a88387509d8c47d8e1cf51f7c2c85475030c31e45457ea6774067aa5358eb8d8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727398016297449851,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbafe551-2645-4016-83f6-1133824d926d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69083186c23c45c853d932a68dc6a9fb513bf9b26f0169046d51c75b57a58b96,PodSandboxId:2163ce3d56b93317faffe4240dd147a31820077f2a34e6bcda084759b0068fb9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727398016484380313,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-479dv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee318b64-2274-4106-93ed-9f62151107f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b7ffd9dfb6283a77a910b62e4c801f24fc7c0059c7d1b3db21ae86fdaf9b585,PodSandboxId:2a75a0cdf184e9400231dc662d856f40efaa229fdab3a876dc729499f539e15a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727398016431138280,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-631834,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 10057dece9752ed428ddf4bfd465bb3d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3608e4904bcf67c5669cc8dfae0c10b769d49c63cad46043995a67c94c29d108,PodSandboxId:3f5eaa7b790b56c09c6bde23dd28d501b5c9b167eb904198c68292514134fac4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727398016511793966,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-631834,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 71a28d11a5db44bbf2777b262efa1514,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e553da3278170117765827feaa6ada5203f508283bebb0adf9105b677a147fc,PodSandboxId:8b21be7811c3b0fe2ce57ec24aeaaa5eedfdc234f89c09b3c8f0343f20e238f9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727398016325051081,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a32cc8b63ea212e
d38709daf6762cc1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c930f7f8b324fb82c55bdec2706385f6ba3dc086cb93f92b31f33bed9ae08db,PodSandboxId:1e81330291c0345d01677bc0e6f129d1c95393e00adbe8a7670e5e5776255bad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727398016273461650,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afee14d1206143c4d719c111467c379b,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74dc20e31bc6d7c20e5d68ee7fa69cfe0328a93ccef047ea1ef82155869ad406,PodSandboxId:ebc71356fe8860c5eadadc4bfc35fe223c81b382b7fa4f7400dfdd4e30cca8e9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727397561974441361,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hczmj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 55e4dd58-9193-49ba-a2e8-1c6835898fb1,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c06ebd9099a79e7ccf81acb3dcdfa061f142b4657de196fa50e568e5b299930,PodSandboxId:8f236d02ca028f9009a4efcc28e0562a8b0e8ec154921e53c93e5a527823c39a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727397416531871339,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kg8kf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee98faac-e03c-427f-9a78-2cf06d2f85cf,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0d4e929a59caa5d6cdfb939587ec81dce00105e7b9350778204b299cf597427,PodSandboxId:2cb3143c36c8e5612e26df2355c120393a34014b84051ee13e5f0f641240ed61,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727397416548905292,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-479dv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee318b64-2274-4106-93ed-9f62151107f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:805b55d391308302ebc0884d741fd7ca86ffe2f6feed8bf7ab229f3729f34327,PodSandboxId:7e2d35a1098a1e498cdf730b14a6d4f456431c09085148024bcec56931467462,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727397404353535359,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-l6ncl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3861149b-7c67-4d48-9d24-8fa08aefda61,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:182f24ac501b715adc06f080914c11407429e052bc7a726892761dd0a2d3a8e9,PodSandboxId:c0f5b32248925e239a327ed4b6dc2a3da7f10accded478a3ce22050a8fe332d8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727397404131630732,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7n244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9fac118-1b31-4cf3-bc21-a4536e45a511,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c88792788fc238aaae860e14a6c44c40020da3356d29223917fe2fb2e8901ac,PodSandboxId:74609d9fcf5f5f8d3b57d4290bf525ef816e716d1438ea25df07d7a697e2bb1a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727397392427504324,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10057dece9752ed428ddf4bfd465bb3d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:536c1c26f6d72525b81ce4c35ed530528a8cd001f4c530cea2e1d722325e76b3,PodSandboxId:de8c10edafaa7ba5a57a5150b492fa19b6a95a38b8f3da7e2385b723a1d4f907,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1727397392442731508,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a32cc8b63ea212ed38709daf6762cc1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5fdd7c97-d83c-4dcd-b2db-92b4692e5427 name=/runtime.v1.RuntimeService/ListContainers
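	
	The debug entry above is CRI-O's reply to the CRI ListContainers RPC (note the name=/runtime.v1.RuntimeService/ListContainers tag). As a rough sketch only, assuming the k8s.io/cri-api and google.golang.org/grpc modules are available, the same query can be issued in Go against the crio socket named in the node's cri-socket annotation:
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		// Dial the CRI-O socket listed in the node's cri-socket annotation.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()
	
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()
	
		// Same RPC that produced the ListContainersResponse above; an empty
		// filter returns running and exited containers alike.
		resp, err := runtimeapi.NewRuntimeServiceClient(conn).ListContainers(ctx,
			&runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s  %-18s  %s/%d\n", c.Id[:13], c.State, c.Metadata.Name, c.Metadata.Attempt)
		}
	}
	
	The container status table that follows presents the same inventory in condensed form (matching container IDs, states, and restart attempts).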
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	8988c8b2e89d4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   a88387509d8c4       storage-provisioner
	af2833aa86bec       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   a365021f4c440       busybox-7dff88458-hczmj
	73c2e59cd28da       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      About a minute ago   Running             kube-controller-manager   2                   3f5eaa7b790b5       kube-controller-manager-ha-631834
	14c982482268a       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      About a minute ago   Running             kube-apiserver            3                   1e81330291c03       kube-apiserver-ha-631834
	bdd819bab4c02       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   47f2ed579b1da       kube-vip-ha-631834
	b8db6d253c02d       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      2 minutes ago        Running             coredns                   1                   dd1921da801dd       coredns-7c65d6cfc9-kg8kf
	9b875ed8e00be       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      2 minutes ago        Running             kindnet-cni               1                   851d241b7a3fa       kindnet-l6ncl
	3608e4904bcf6       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      2 minutes ago        Exited              kube-controller-manager   1                   3f5eaa7b790b5       kube-controller-manager-ha-631834
	993366a0cc03d       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      2 minutes ago        Running             kube-proxy                1                   4fc98d18b24b9       kube-proxy-7n244
	69083186c23c4       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      2 minutes ago        Running             coredns                   1                   2163ce3d56b93       coredns-7c65d6cfc9-479dv
	8b7ffd9dfb628       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      2 minutes ago        Running             kube-scheduler            1                   2a75a0cdf184e       kube-scheduler-ha-631834
	1e553da327817       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      2 minutes ago        Running             etcd                      1                   8b21be7811c3b       etcd-ha-631834
	d81bbea7c9e39       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       3                   a88387509d8c4       storage-provisioner
	4c930f7f8b324       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      2 minutes ago        Exited              kube-apiserver            2                   1e81330291c03       kube-apiserver-ha-631834
	74dc20e31bc6d       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago        Exited              busybox                   0                   ebc71356fe886       busybox-7dff88458-hczmj
	f0d4e929a59ca       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      12 minutes ago       Exited              coredns                   0                   2cb3143c36c8e       coredns-7c65d6cfc9-479dv
	3c06ebd9099a7       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      12 minutes ago       Exited              coredns                   0                   8f236d02ca028       coredns-7c65d6cfc9-kg8kf
	805b55d391308       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      12 minutes ago       Exited              kindnet-cni               0                   7e2d35a1098a1       kindnet-l6ncl
	182f24ac501b7       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      12 minutes ago       Exited              kube-proxy                0                   c0f5b32248925       kube-proxy-7n244
	536c1c26f6d72       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      12 minutes ago       Exited              etcd                      0                   de8c10edafaa7       etcd-ha-631834
	5c88792788fc2       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      12 minutes ago       Exited              kube-scheduler            0                   74609d9fcf5f5       kube-scheduler-ha-631834
	
	
	==> coredns [3c06ebd9099a79e7ccf81acb3dcdfa061f142b4657de196fa50e568e5b299930] <==
	[INFO] 10.244.0.4:46433 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001871874s
	[INFO] 10.244.0.4:34697 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000054557s
	[INFO] 10.244.1.2:54898 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00014886s
	[INFO] 10.244.2.2:34064 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000136896s
	[INFO] 10.244.0.4:38416 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000149012s
	[INFO] 10.244.0.4:40833 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00014405s
	[INFO] 10.244.0.4:44560 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000077158s
	[INFO] 10.244.0.4:46143 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000171018s
	[INFO] 10.244.1.2:56595 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000249758s
	[INFO] 10.244.1.2:34731 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000198874s
	[INFO] 10.244.1.2:47614 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000132758s
	[INFO] 10.244.1.2:36248 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00015406s
	[INFO] 10.244.2.2:34744 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136863s
	[INFO] 10.244.2.2:34972 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000094616s
	[INFO] 10.244.2.2:52746 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000078955s
	[INFO] 10.244.0.4:39419 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113274s
	[INFO] 10.244.0.4:59554 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000106105s
	[INFO] 10.244.0.4:39476 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000054775s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1734&timeout=9m5s&timeoutSeconds=545&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1733&timeout=5m16s&timeoutSeconds=316&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=1734": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=1734": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1771&timeout=7m36s&timeoutSeconds=456&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [69083186c23c45c853d932a68dc6a9fb513bf9b26f0169046d51c75b57a58b96] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1826072790]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Sep-2024 00:47:04.959) (total time: 10001ms):
	Trace[1826072790]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:47:14.960)
	Trace[1826072790]: [10.001502591s] [10.001502591s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:48598->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:48598->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [b8db6d253c02d0e9ccdb6f17e99687133896a05f908abbbb072860ad547cb0e6] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:59836->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1924818372]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Sep-2024 00:47:08.357) (total time: 10538ms):
	Trace[1924818372]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:59836->10.96.0.1:443: read: connection reset by peer 10538ms (00:47:18.895)
	Trace[1924818372]: [10.538796997s] [10.538796997s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:59836->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:59862->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:59862->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:59856->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1196904685]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Sep-2024 00:47:11.589) (total time: 10049ms):
	Trace[1196904685]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:59856->10.96.0.1:443: read: connection reset by peer 10049ms (00:47:21.638)
	Trace[1196904685]: [10.049956574s] [10.049956574s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:59856->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [f0d4e929a59caa5d6cdfb939587ec81dce00105e7b9350778204b299cf597427] <==
	[INFO] 10.244.1.2:49238 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002742907s
	[INFO] 10.244.1.2:42211 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125195s
	[INFO] 10.244.2.2:33655 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000213093s
	[INFO] 10.244.2.2:58995 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00171984s
	[INFO] 10.244.2.2:39964 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000149879s
	[INFO] 10.244.2.2:60456 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000227691s
	[INFO] 10.244.0.4:44954 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000086981s
	[INFO] 10.244.0.4:47547 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000166142s
	[INFO] 10.244.0.4:51196 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000214916s
	[INFO] 10.244.0.4:52871 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001284904s
	[INFO] 10.244.0.4:55577 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000216348s
	[INFO] 10.244.0.4:39280 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00003939s
	[INFO] 10.244.1.2:55855 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133643s
	[INFO] 10.244.1.2:60581 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000156682s
	[INFO] 10.244.1.2:47815 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000931s
	[INFO] 10.244.2.2:51419 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000149958s
	[INFO] 10.244.2.2:54004 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000114296s
	[INFO] 10.244.2.2:50685 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000087762s
	[INFO] 10.244.2.2:42257 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000189679s
	[INFO] 10.244.0.4:51433 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00015471s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1734&timeout=9m9s&timeoutSeconds=549&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1777&timeout=9m46s&timeoutSeconds=586&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1777&timeout=7m35s&timeoutSeconds=455&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
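	
	The repeated reflector failures in the coredns logs above are the kubernetes plugin trying to list and watch Namespaces, Services, and EndpointSlices through the service VIP 10.96.0.1:443, which is consistent with the kube-apiserver restarts shown in the container listing. As an illustration only (assuming in-cluster credentials from a pod in this cluster and the standard client-go module), a minimal Go sketch of the same initial EndpointSlice list the reflector performs:
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)
	
	func main() {
		// In-cluster config resolves to the same service VIP (https://10.96.0.1:443)
		// that the coredns kubernetes plugin cannot reach in the logs above.
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
	
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()
	
		// Mirror the reflector's initial list: EndpointSlices across all
		// namespaces with limit=500, as in the failing URLs above.
		slices, err := client.DiscoveryV1().EndpointSlices(metav1.NamespaceAll).
			List(ctx, metav1.ListOptions{Limit: 500})
		if err != nil {
			fmt.Println("list failed:", err)
			return
		}
		fmt.Printf("listed %d endpointslices\n", len(slices.Items))
	}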
	
	
	==> describe nodes <==
	Name:               ha-631834
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-631834
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=ha-631834
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_27T00_36_39_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 00:36:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-631834
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 00:49:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 00:47:41 +0000   Fri, 27 Sep 2024 00:36:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 00:47:41 +0000   Fri, 27 Sep 2024 00:36:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 00:47:41 +0000   Fri, 27 Sep 2024 00:36:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 00:47:41 +0000   Fri, 27 Sep 2024 00:36:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.4
	  Hostname:    ha-631834
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c835097a3f3f47119274822a90643a61
	  System UUID:                c835097a-3f3f-4711-9274-822a90643a61
	  Boot ID:                    773a1f71-cccf-4b35-8274-d80167988c3a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-hczmj              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m55s
	  kube-system                 coredns-7c65d6cfc9-479dv             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 coredns-7c65d6cfc9-kg8kf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 etcd-ha-631834                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-l6ncl                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-631834             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-631834    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-7n244                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-631834             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-631834                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 88s                    kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  12m                    kubelet          Node ha-631834 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     12m                    kubelet          Node ha-631834 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    12m                    kubelet          Node ha-631834 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 12m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           12m                    node-controller  Node ha-631834 event: Registered Node ha-631834 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-631834 status is now: NodeReady
	  Normal   RegisteredNode           11m                    node-controller  Node ha-631834 event: Registered Node ha-631834 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-631834 event: Registered Node ha-631834 in Controller
	  Warning  ContainerGCFailed        2m34s (x2 over 3m34s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             2m20s (x3 over 3m9s)   kubelet          Node ha-631834 status is now: NodeNotReady
	  Normal   RegisteredNode           95s                    node-controller  Node ha-631834 event: Registered Node ha-631834 in Controller
	  Normal   RegisteredNode           92s                    node-controller  Node ha-631834 event: Registered Node ha-631834 in Controller
	  Normal   RegisteredNode           38s                    node-controller  Node ha-631834 event: Registered Node ha-631834 in Controller
	
	
	Name:               ha-631834-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-631834-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=ha-631834
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_27T00_37_38_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 00:37:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-631834-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 00:49:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 00:48:21 +0000   Fri, 27 Sep 2024 00:47:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 00:48:21 +0000   Fri, 27 Sep 2024 00:47:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 00:48:21 +0000   Fri, 27 Sep 2024 00:47:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 00:48:21 +0000   Fri, 27 Sep 2024 00:47:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.184
	  Hostname:    ha-631834-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 949992430050476bb475912d3f8b70cc
	  System UUID:                94999243-0050-476b-b475-912d3f8b70cc
	  Boot ID:                    aab361d9-0788-4a7f-b62d-36b5931840d6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-bkws6                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m55s
	  kube-system                 etcd-ha-631834-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-x7kr9                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-631834-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-631834-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-x2hvh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-631834-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-631834-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 89s                kube-proxy       
	  Normal  Starting                 11m                kube-proxy       
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-631834-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-631834-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-631834-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           11m                node-controller  Node ha-631834-m02 event: Registered Node ha-631834-m02 in Controller
	  Normal  RegisteredNode           11m                node-controller  Node ha-631834-m02 event: Registered Node ha-631834-m02 in Controller
	  Normal  RegisteredNode           10m                node-controller  Node ha-631834-m02 event: Registered Node ha-631834-m02 in Controller
	  Normal  NodeNotReady             8m1s               node-controller  Node ha-631834-m02 status is now: NodeNotReady
	  Normal  Starting                 2m                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m (x8 over 2m)    kubelet          Node ha-631834-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m (x8 over 2m)    kubelet          Node ha-631834-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m (x7 over 2m)    kubelet          Node ha-631834-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           95s                node-controller  Node ha-631834-m02 event: Registered Node ha-631834-m02 in Controller
	  Normal  RegisteredNode           92s                node-controller  Node ha-631834-m02 event: Registered Node ha-631834-m02 in Controller
	  Normal  RegisteredNode           38s                node-controller  Node ha-631834-m02 event: Registered Node ha-631834-m02 in Controller
	
	
	Name:               ha-631834-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-631834-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=ha-631834
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_27T00_38_51_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 00:38:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-631834-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 00:49:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 00:48:49 +0000   Fri, 27 Sep 2024 00:48:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 00:48:49 +0000   Fri, 27 Sep 2024 00:48:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 00:48:49 +0000   Fri, 27 Sep 2024 00:48:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 00:48:49 +0000   Fri, 27 Sep 2024 00:48:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.92
	  Hostname:    ha-631834-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a890346e739943359cb952ef92382de4
	  System UUID:                a890346e-7399-4335-9cb9-52ef92382de4
	  Boot ID:                    698a5b27-3987-4dce-9e16-a3fd92dabd9b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-dhthf                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m55s
	  kube-system                 etcd-ha-631834-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-r2qxd                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-ha-631834-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-631834-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-22lcj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-631834-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-631834-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 30s                kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node ha-631834-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node ha-631834-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node ha-631834-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-631834-m03 event: Registered Node ha-631834-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-631834-m03 event: Registered Node ha-631834-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-631834-m03 event: Registered Node ha-631834-m03 in Controller
	  Normal   RegisteredNode           95s                node-controller  Node ha-631834-m03 event: Registered Node ha-631834-m03 in Controller
	  Normal   RegisteredNode           92s                node-controller  Node ha-631834-m03 event: Registered Node ha-631834-m03 in Controller
	  Normal   NodeNotReady             55s                node-controller  Node ha-631834-m03 status is now: NodeNotReady
	  Normal   Starting                 53s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  53s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 53s (x2 over 53s)  kubelet          Node ha-631834-m03 has been rebooted, boot id: 698a5b27-3987-4dce-9e16-a3fd92dabd9b
	  Normal   NodeHasSufficientMemory  53s (x3 over 53s)  kubelet          Node ha-631834-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    53s (x3 over 53s)  kubelet          Node ha-631834-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     53s (x3 over 53s)  kubelet          Node ha-631834-m03 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             53s                kubelet          Node ha-631834-m03 status is now: NodeNotReady
	  Normal   NodeReady                53s                kubelet          Node ha-631834-m03 status is now: NodeReady
	  Normal   RegisteredNode           38s                node-controller  Node ha-631834-m03 event: Registered Node ha-631834-m03 in Controller
	
	
	Name:               ha-631834-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-631834-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=ha-631834
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_27T00_39_55_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 00:39:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-631834-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 00:49:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 00:49:04 +0000   Fri, 27 Sep 2024 00:49:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 00:49:04 +0000   Fri, 27 Sep 2024 00:49:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 00:49:04 +0000   Fri, 27 Sep 2024 00:49:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 00:49:04 +0000   Fri, 27 Sep 2024 00:49:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.79
	  Hostname:    ha-631834-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7d5a4987d2674227bf93c72f5a77697a
	  System UUID:                7d5a4987-d267-4227-bf93-c72f5a77697a
	  Boot ID:                    b010a523-bced-4265-aec1-6afa6f563dda
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-667b4       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m17s
	  kube-system                 kube-proxy-klfbb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4s                     kube-proxy       
	  Normal   Starting                 9m11s                  kube-proxy       
	  Normal   NodeAllocatableEnforced  9m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  9m17s (x2 over 9m18s)  kubelet          Node ha-631834-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m17s (x2 over 9m18s)  kubelet          Node ha-631834-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m17s (x2 over 9m18s)  kubelet          Node ha-631834-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           9m16s                  node-controller  Node ha-631834-m04 event: Registered Node ha-631834-m04 in Controller
	  Normal   RegisteredNode           9m15s                  node-controller  Node ha-631834-m04 event: Registered Node ha-631834-m04 in Controller
	  Normal   RegisteredNode           9m14s                  node-controller  Node ha-631834-m04 event: Registered Node ha-631834-m04 in Controller
	  Normal   NodeReady                8m57s                  kubelet          Node ha-631834-m04 status is now: NodeReady
	  Normal   RegisteredNode           95s                    node-controller  Node ha-631834-m04 event: Registered Node ha-631834-m04 in Controller
	  Normal   RegisteredNode           92s                    node-controller  Node ha-631834-m04 event: Registered Node ha-631834-m04 in Controller
	  Normal   NodeNotReady             55s                    node-controller  Node ha-631834-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           38s                    node-controller  Node ha-631834-m04 event: Registered Node ha-631834-m04 in Controller
	  Normal   Starting                 9s                     kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8s                     kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8s (x2 over 8s)        kubelet          Node ha-631834-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x2 over 8s)        kubelet          Node ha-631834-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x2 over 8s)        kubelet          Node ha-631834-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 8s                     kubelet          Node ha-631834-m04 has been rebooted, boot id: b010a523-bced-4265-aec1-6afa6f563dda
	  Normal   NodeReady                8s                     kubelet          Node ha-631834-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.987708] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.063056] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056033] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.197880] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.118226] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.294623] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +3.981056] systemd-fstab-generator[745]: Ignoring "noauto" option for root device
	[  +4.053805] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +0.059938] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.871905] systemd-fstab-generator[1301]: Ignoring "noauto" option for root device
	[  +0.091402] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.727187] kauditd_printk_skb: 18 callbacks suppressed
	[ +12.324064] kauditd_printk_skb: 41 callbacks suppressed
	[Sep27 00:37] kauditd_printk_skb: 24 callbacks suppressed
	[Sep27 00:43] kauditd_printk_skb: 1 callbacks suppressed
	[Sep27 00:46] systemd-fstab-generator[3618]: Ignoring "noauto" option for root device
	[  +0.155896] systemd-fstab-generator[3630]: Ignoring "noauto" option for root device
	[  +0.190146] systemd-fstab-generator[3644]: Ignoring "noauto" option for root device
	[  +0.172400] systemd-fstab-generator[3656]: Ignoring "noauto" option for root device
	[  +0.284492] systemd-fstab-generator[3684]: Ignoring "noauto" option for root device
	[  +1.527421] systemd-fstab-generator[3781]: Ignoring "noauto" option for root device
	[  +5.564005] kauditd_printk_skb: 122 callbacks suppressed
	[Sep27 00:47] kauditd_printk_skb: 85 callbacks suppressed
	[ +34.581023] kauditd_printk_skb: 10 callbacks suppressed
	
	
	==> etcd [1e553da3278170117765827feaa6ada5203f508283bebb0adf9105b677a147fc] <==
	{"level":"warn","ts":"2024-09-27T00:48:18.889922Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.92:2380/version","remote-member-id":"ed4a1d228ea3c582","error":"Get \"https://192.168.39.92:2380/version\": dial tcp 192.168.39.92:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T00:48:18.889971Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"ed4a1d228ea3c582","error":"Get \"https://192.168.39.92:2380/version\": dial tcp 192.168.39.92:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T00:48:22.548091Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"ed4a1d228ea3c582","rtt":"0s","error":"dial tcp 192.168.39.92:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T00:48:22.548138Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"ed4a1d228ea3c582","rtt":"0s","error":"dial tcp 192.168.39.92:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T00:48:22.891931Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.92:2380/version","remote-member-id":"ed4a1d228ea3c582","error":"Get \"https://192.168.39.92:2380/version\": dial tcp 192.168.39.92:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T00:48:22.892074Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"ed4a1d228ea3c582","error":"Get \"https://192.168.39.92:2380/version\": dial tcp 192.168.39.92:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T00:48:26.893662Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.92:2380/version","remote-member-id":"ed4a1d228ea3c582","error":"Get \"https://192.168.39.92:2380/version\": dial tcp 192.168.39.92:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T00:48:26.893730Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"ed4a1d228ea3c582","error":"Get \"https://192.168.39.92:2380/version\": dial tcp 192.168.39.92:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T00:48:27.548798Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"ed4a1d228ea3c582","rtt":"0s","error":"dial tcp 192.168.39.92:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T00:48:27.548867Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"ed4a1d228ea3c582","rtt":"0s","error":"dial tcp 192.168.39.92:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T00:48:27.876476Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"ed4a1d228ea3c582","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"1.021251ms"}
	{"level":"warn","ts":"2024-09-27T00:48:27.876551Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"bff0a92d56623d2","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"1.101192ms"}
	{"level":"info","ts":"2024-09-27T00:48:27.876854Z","caller":"traceutil/trace.go:171","msg":"trace[1571141736] transaction","detail":"{read_only:false; response_revision:2268; number_of_response:1; }","duration":"197.38025ms","start":"2024-09-27T00:48:27.679447Z","end":"2024-09-27T00:48:27.876827Z","steps":["trace[1571141736] 'process raft request'  (duration: 197.291226ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T00:48:27.877931Z","caller":"traceutil/trace.go:171","msg":"trace[1906345350] linearizableReadLoop","detail":"{readStateIndex:2634; appliedIndex:2635; }","duration":"156.394966ms","start":"2024-09-27T00:48:27.721517Z","end":"2024-09-27T00:48:27.877912Z","steps":["trace[1906345350] 'read index received'  (duration: 156.391879ms)","trace[1906345350] 'applied index is now lower than readState.Index'  (duration: 2.268µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-27T00:48:27.878296Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"156.731195ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-27T00:48:27.878728Z","caller":"traceutil/trace.go:171","msg":"trace[1744125165] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2268; }","duration":"157.182859ms","start":"2024-09-27T00:48:27.721513Z","end":"2024-09-27T00:48:27.878696Z","steps":["trace[1744125165] 'agreement among raft nodes before linearized reading'  (duration: 156.666761ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T00:48:28.182632Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"ed4a1d228ea3c582"}
	{"level":"info","ts":"2024-09-27T00:48:28.218134Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"7ab0973fa604e492","to":"ed4a1d228ea3c582","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-27T00:48:28.218396Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"7ab0973fa604e492","remote-peer-id":"ed4a1d228ea3c582"}
	{"level":"info","ts":"2024-09-27T00:48:28.220873Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"7ab0973fa604e492","remote-peer-id":"ed4a1d228ea3c582"}
	{"level":"info","ts":"2024-09-27T00:48:28.236595Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"7ab0973fa604e492","to":"ed4a1d228ea3c582","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-27T00:48:28.236701Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"7ab0973fa604e492","remote-peer-id":"ed4a1d228ea3c582"}
	{"level":"info","ts":"2024-09-27T00:48:28.241068Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"7ab0973fa604e492","remote-peer-id":"ed4a1d228ea3c582"}
	{"level":"warn","ts":"2024-09-27T00:48:29.704199Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.334767ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-ha-631834-m03\" ","response":"range_response_count:1 size:6062"}
	{"level":"info","ts":"2024-09-27T00:48:29.704475Z","caller":"traceutil/trace.go:171","msg":"trace[911626929] range","detail":"{range_begin:/registry/pods/kube-system/etcd-ha-631834-m03; range_end:; response_count:1; response_revision:2277; }","duration":"112.568216ms","start":"2024-09-27T00:48:29.591843Z","end":"2024-09-27T00:48:29.704412Z","steps":["trace[911626929] 'range keys from in-memory index tree'  (duration: 111.127663ms)"],"step_count":1}
	
	
	==> etcd [536c1c26f6d72525b81ce4c35ed530528a8cd001f4c530cea2e1d722325e76b3] <==
	{"level":"warn","ts":"2024-09-27T00:45:16.594070Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-27T00:45:15.748167Z","time spent":"845.898478ms","remote":"127.0.0.1:48708","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":0,"response size":0,"request content":"key:\"/registry/replicasets/\" range_end:\"/registry/replicasets0\" limit:10000 "}
	2024/09/27 00:45:16 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-27T00:45:16.745410Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":16470387526003157705,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-27T00:45:16.849695Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-27T00:45:16.849752Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-27T00:45:16.849824Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"7ab0973fa604e492","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-27T00:45:16.850082Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"bff0a92d56623d2"}
	{"level":"info","ts":"2024-09-27T00:45:16.850151Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"bff0a92d56623d2"}
	{"level":"info","ts":"2024-09-27T00:45:16.850193Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"bff0a92d56623d2"}
	{"level":"info","ts":"2024-09-27T00:45:16.850482Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2"}
	{"level":"info","ts":"2024-09-27T00:45:16.850570Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2"}
	{"level":"info","ts":"2024-09-27T00:45:16.850648Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2"}
	{"level":"info","ts":"2024-09-27T00:45:16.850682Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"bff0a92d56623d2"}
	{"level":"info","ts":"2024-09-27T00:45:16.850690Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"ed4a1d228ea3c582"}
	{"level":"info","ts":"2024-09-27T00:45:16.850703Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"ed4a1d228ea3c582"}
	{"level":"info","ts":"2024-09-27T00:45:16.850738Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"ed4a1d228ea3c582"}
	{"level":"info","ts":"2024-09-27T00:45:16.850797Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"7ab0973fa604e492","remote-peer-id":"ed4a1d228ea3c582"}
	{"level":"info","ts":"2024-09-27T00:45:16.850823Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"7ab0973fa604e492","remote-peer-id":"ed4a1d228ea3c582"}
	{"level":"info","ts":"2024-09-27T00:45:16.850874Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"7ab0973fa604e492","remote-peer-id":"ed4a1d228ea3c582"}
	{"level":"info","ts":"2024-09-27T00:45:16.850906Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"ed4a1d228ea3c582"}
	{"level":"info","ts":"2024-09-27T00:45:16.854752Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.4:2380"}
	{"level":"warn","ts":"2024-09-27T00:45:16.854779Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"9.105363712s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-09-27T00:45:16.854869Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.4:2380"}
	{"level":"info","ts":"2024-09-27T00:45:16.854899Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-631834","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.4:2380"],"advertise-client-urls":["https://192.168.39.4:2379"]}
	{"level":"info","ts":"2024-09-27T00:45:16.854883Z","caller":"traceutil/trace.go:171","msg":"trace[1881335262] range","detail":"{range_begin:; range_end:; }","duration":"9.105483759s","start":"2024-09-27T00:45:07.749389Z","end":"2024-09-27T00:45:16.854873Z","steps":["trace[1881335262] 'agreement among raft nodes before linearized reading'  (duration: 9.105361847s)"],"step_count":1}
	
	
	==> kernel <==
	 00:49:12 up 13 min,  0 users,  load average: 1.05, 0.66, 0.36
	Linux ha-631834 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [805b55d391308302ebc0884d741fd7ca86ffe2f6feed8bf7ab229f3729f34327] <==
	I0927 00:44:45.594701       1 main.go:322] Node ha-631834-m04 has CIDR [10.244.3.0/24] 
	I0927 00:44:55.593321       1 main.go:295] Handling node with IPs: map[192.168.39.79:{}]
	I0927 00:44:55.593388       1 main.go:322] Node ha-631834-m04 has CIDR [10.244.3.0/24] 
	I0927 00:44:55.593533       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0927 00:44:55.593592       1 main.go:299] handling current node
	I0927 00:44:55.593627       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0927 00:44:55.593632       1 main.go:322] Node ha-631834-m02 has CIDR [10.244.1.0/24] 
	I0927 00:44:55.593679       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0927 00:44:55.593719       1 main.go:322] Node ha-631834-m03 has CIDR [10.244.2.0/24] 
	I0927 00:45:05.598322       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0927 00:45:05.598549       1 main.go:299] handling current node
	I0927 00:45:05.598612       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0927 00:45:05.598636       1 main.go:322] Node ha-631834-m02 has CIDR [10.244.1.0/24] 
	I0927 00:45:05.598881       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0927 00:45:05.598928       1 main.go:322] Node ha-631834-m03 has CIDR [10.244.2.0/24] 
	I0927 00:45:05.598999       1 main.go:295] Handling node with IPs: map[192.168.39.79:{}]
	I0927 00:45:05.599018       1 main.go:322] Node ha-631834-m04 has CIDR [10.244.3.0/24] 
	I0927 00:45:15.593334       1 main.go:295] Handling node with IPs: map[192.168.39.79:{}]
	I0927 00:45:15.593398       1 main.go:322] Node ha-631834-m04 has CIDR [10.244.3.0/24] 
	I0927 00:45:15.593609       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0927 00:45:15.593635       1 main.go:299] handling current node
	I0927 00:45:15.593666       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0927 00:45:15.593675       1 main.go:322] Node ha-631834-m02 has CIDR [10.244.1.0/24] 
	I0927 00:45:15.593761       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0927 00:45:15.593796       1 main.go:322] Node ha-631834-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [9b875ed8e00bedb5eb1902895c4b2572101bd8ed13c0334beee29c833bdb420f] <==
	I0927 00:48:37.822142       1 main.go:322] Node ha-631834-m04 has CIDR [10.244.3.0/24] 
	I0927 00:48:47.818267       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0927 00:48:47.818408       1 main.go:299] handling current node
	I0927 00:48:47.818440       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0927 00:48:47.818459       1 main.go:322] Node ha-631834-m02 has CIDR [10.244.1.0/24] 
	I0927 00:48:47.818698       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0927 00:48:47.818730       1 main.go:322] Node ha-631834-m03 has CIDR [10.244.2.0/24] 
	I0927 00:48:47.818787       1 main.go:295] Handling node with IPs: map[192.168.39.79:{}]
	I0927 00:48:47.818805       1 main.go:322] Node ha-631834-m04 has CIDR [10.244.3.0/24] 
	I0927 00:48:57.818947       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0927 00:48:57.819129       1 main.go:322] Node ha-631834-m03 has CIDR [10.244.2.0/24] 
	I0927 00:48:57.819376       1 main.go:295] Handling node with IPs: map[192.168.39.79:{}]
	I0927 00:48:57.819418       1 main.go:322] Node ha-631834-m04 has CIDR [10.244.3.0/24] 
	I0927 00:48:57.819524       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0927 00:48:57.819558       1 main.go:299] handling current node
	I0927 00:48:57.819580       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0927 00:48:57.819587       1 main.go:322] Node ha-631834-m02 has CIDR [10.244.1.0/24] 
	I0927 00:49:07.818090       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0927 00:49:07.818145       1 main.go:299] handling current node
	I0927 00:49:07.818170       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0927 00:49:07.818175       1 main.go:322] Node ha-631834-m02 has CIDR [10.244.1.0/24] 
	I0927 00:49:07.818336       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0927 00:49:07.818361       1 main.go:322] Node ha-631834-m03 has CIDR [10.244.2.0/24] 
	I0927 00:49:07.818414       1 main.go:295] Handling node with IPs: map[192.168.39.79:{}]
	I0927 00:49:07.818429       1 main.go:322] Node ha-631834-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [14c982482268a0741c4ea4b43b359ddf56e9c7a8963d1d5b697eccb9977cce45] <==
	I0927 00:47:33.616432       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0927 00:47:33.696031       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0927 00:47:33.696180       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0927 00:47:33.696372       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0927 00:47:33.696659       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0927 00:47:33.696859       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0927 00:47:33.700861       1 shared_informer.go:320] Caches are synced for configmaps
	I0927 00:47:33.701094       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0927 00:47:33.706816       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0927 00:47:33.709091       1 aggregator.go:171] initial CRD sync complete...
	I0927 00:47:33.709185       1 autoregister_controller.go:144] Starting autoregister controller
	I0927 00:47:33.709268       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0927 00:47:33.709293       1 cache.go:39] Caches are synced for autoregister controller
	I0927 00:47:33.719876       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0927 00:47:33.719939       1 policy_source.go:224] refreshing policies
	I0927 00:47:33.731587       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0927 00:47:33.735274       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0927 00:47:33.738703       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	W0927 00:47:33.761822       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.92]
	I0927 00:47:33.763962       1 controller.go:615] quota admission added evaluator for: endpoints
	I0927 00:47:33.787820       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0927 00:47:33.796688       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0927 00:47:34.603144       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0927 00:47:35.211353       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.4 192.168.39.92]
	W0927 00:47:55.213151       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.184 192.168.39.4]
	
	
	==> kube-apiserver [4c930f7f8b324fb82c55bdec2706385f6ba3dc086cb93f92b31f33bed9ae08db] <==
	I0927 00:46:56.929656       1 options.go:228] external host was not specified, using 192.168.39.4
	I0927 00:46:56.940665       1 server.go:142] Version: v1.31.1
	I0927 00:46:56.940729       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 00:46:57.848567       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0927 00:46:57.863050       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0927 00:46:57.889001       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0927 00:46:57.889114       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0927 00:46:57.889601       1 instance.go:232] Using reconciler: lease
	W0927 00:47:17.845785       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0927 00:47:17.845784       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0927 00:47:17.890890       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [3608e4904bcf67c5669cc8dfae0c10b769d49c63cad46043995a67c94c29d108] <==
	I0927 00:46:58.074874       1 serving.go:386] Generated self-signed cert in-memory
	I0927 00:46:58.656713       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0927 00:46:58.656805       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 00:46:58.660041       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0927 00:46:58.660677       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0927 00:46:58.662185       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0927 00:46:58.664184       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0927 00:47:18.896902       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.4:8443/healthz\": dial tcp 192.168.39.4:8443: connect: connection refused"
	
	
	==> kube-controller-manager [73c2e59cd28da30c784255b37b22005602829501c488d381587497738b1a190d] <==
	I0927 00:47:50.615318       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m02"
	I0927 00:47:56.311917       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="17.816753ms"
	I0927 00:47:56.312152       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="101.645µs"
	I0927 00:48:17.263823       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m03"
	I0927 00:48:17.264501       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-631834-m04"
	I0927 00:48:17.273172       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m04"
	I0927 00:48:17.291744       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m03"
	I0927 00:48:17.305004       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m04"
	I0927 00:48:17.366413       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="23.077733ms"
	I0927 00:48:17.368982       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="311.369µs"
	I0927 00:48:19.265892       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m03"
	I0927 00:48:19.281692       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m03"
	I0927 00:48:20.194034       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m04"
	I0927 00:48:20.211869       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="54.01µs"
	I0927 00:48:21.406435       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m02"
	I0927 00:48:22.528135       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m04"
	I0927 00:48:34.624129       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m04"
	I0927 00:48:34.741959       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m04"
	I0927 00:48:40.521566       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="28.528887ms"
	I0927 00:48:40.521707       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="64.36µs"
	I0927 00:48:49.658431       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m03"
	I0927 00:49:04.154798       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-631834-m04"
	I0927 00:49:04.154960       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m04"
	I0927 00:49:04.179541       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m04"
	I0927 00:49:04.627074       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m04"
	
	
	==> kube-proxy [182f24ac501b715adc06f080914c11407429e052bc7a726892761dd0a2d3a8e9] <==
	E0927 00:44:07.590651       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1704\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0927 00:44:07.590763       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1702": dial tcp 192.168.39.254:8443: connect: no route to host
	E0927 00:44:07.590802       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1702\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0927 00:44:07.590931       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-631834&resourceVersion=1704": dial tcp 192.168.39.254:8443: connect: no route to host
	E0927 00:44:07.591037       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-631834&resourceVersion=1704\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0927 00:44:11.880607       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1704": dial tcp 192.168.39.254:8443: connect: no route to host
	E0927 00:44:11.880695       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1704\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0927 00:44:11.880611       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-631834&resourceVersion=1704": dial tcp 192.168.39.254:8443: connect: no route to host
	E0927 00:44:11.880744       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-631834&resourceVersion=1704\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0927 00:44:14.952143       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1702": dial tcp 192.168.39.254:8443: connect: no route to host
	E0927 00:44:14.952369       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1702\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0927 00:44:21.098125       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1704": dial tcp 192.168.39.254:8443: connect: no route to host
	E0927 00:44:21.098303       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1704\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0927 00:44:24.168094       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1702": dial tcp 192.168.39.254:8443: connect: no route to host
	E0927 00:44:24.168257       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1702\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0927 00:44:24.168459       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-631834&resourceVersion=1704": dial tcp 192.168.39.254:8443: connect: no route to host
	E0927 00:44:24.168529       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-631834&resourceVersion=1704\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0927 00:44:36.456076       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1704": dial tcp 192.168.39.254:8443: connect: no route to host
	E0927 00:44:36.456319       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1704\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0927 00:44:48.743495       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-631834&resourceVersion=1704": dial tcp 192.168.39.254:8443: connect: no route to host
	E0927 00:44:48.743561       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-631834&resourceVersion=1704\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0927 00:44:48.743769       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1702": dial tcp 192.168.39.254:8443: connect: no route to host
	E0927 00:44:48.743878       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1702\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0927 00:45:16.391042       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1704": dial tcp 192.168.39.254:8443: connect: no route to host
	E0927 00:45:16.391301       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1704\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-proxy [993366a0cc03df59289c28caf8ac0f7a3eaf5ca3ee7f79410d82c5c962efc0b1] <==
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0927 00:47:00.839625       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-631834\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0927 00:47:03.912420       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-631834\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0927 00:47:06.983089       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-631834\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0927 00:47:13.127471       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-631834\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0927 00:47:25.416585       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-631834\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0927 00:47:43.730155       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.4"]
	E0927 00:47:43.730361       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0927 00:47:43.765138       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0927 00:47:43.765197       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0927 00:47:43.765300       1 server_linux.go:169] "Using iptables Proxier"
	I0927 00:47:43.768156       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0927 00:47:43.768656       1 server.go:483] "Version info" version="v1.31.1"
	I0927 00:47:43.768691       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 00:47:43.773684       1 config.go:199] "Starting service config controller"
	I0927 00:47:43.773751       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0927 00:47:43.773788       1 config.go:105] "Starting endpoint slice config controller"
	I0927 00:47:43.773810       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0927 00:47:43.777064       1 config.go:328] "Starting node config controller"
	I0927 00:47:43.777102       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0927 00:47:43.874316       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0927 00:47:43.874454       1 shared_informer.go:320] Caches are synced for service config
	I0927 00:47:43.877382       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5c88792788fc238aaae860e14a6c44c40020da3356d29223917fe2fb2e8901ac] <==
	W0927 00:36:36.985650       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0927 00:36:36.985711       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0927 00:36:38.790470       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0927 00:39:55.242771       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-7gjcd\": pod kindnet-7gjcd is already assigned to node \"ha-631834-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-7gjcd" node="ha-631834-m04"
	E0927 00:39:55.242960       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 583b6ea7-5b96-43a8-9f06-70c031554c0e(kube-system/kindnet-7gjcd) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-7gjcd"
	E0927 00:39:55.243000       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-7gjcd\": pod kindnet-7gjcd is already assigned to node \"ha-631834-m04\"" pod="kube-system/kindnet-7gjcd"
	I0927 00:39:55.243040       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-7gjcd" node="ha-631834-m04"
	E0927 00:44:54.157002       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0927 00:45:05.084462       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0927 00:45:06.510103       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0927 00:45:06.666109       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0927 00:45:07.517149       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0927 00:45:09.164828       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0927 00:45:09.778625       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0927 00:45:11.112556       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0927 00:45:11.787985       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0927 00:45:11.793489       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0927 00:45:11.813914       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0927 00:45:13.244013       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0927 00:45:13.794795       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0927 00:45:13.970182       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0927 00:45:14.284364       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	W0927 00:45:15.457909       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0927 00:45:15.457990       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0927 00:45:16.536979       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [8b7ffd9dfb6283a77a910b62e4c801f24fc7c0059c7d1b3db21ae86fdaf9b585] <==
	W0927 00:47:27.414492       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.4:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.4:8443: connect: connection refused
	E0927 00:47:27.414670       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.4:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.4:8443: connect: connection refused" logger="UnhandledError"
	W0927 00:47:27.514485       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.4:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.4:8443: connect: connection refused
	E0927 00:47:27.514649       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.4:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.4:8443: connect: connection refused" logger="UnhandledError"
	W0927 00:47:27.527725       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.4:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.4:8443: connect: connection refused
	E0927 00:47:27.527826       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.4:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.4:8443: connect: connection refused" logger="UnhandledError"
	W0927 00:47:27.790191       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.4:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.4:8443: connect: connection refused
	E0927 00:47:27.790395       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.4:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.4:8443: connect: connection refused" logger="UnhandledError"
	W0927 00:47:28.074665       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.4:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.4:8443: connect: connection refused
	E0927 00:47:28.074749       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.4:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.4:8443: connect: connection refused" logger="UnhandledError"
	W0927 00:47:28.245438       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.4:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.4:8443: connect: connection refused
	E0927 00:47:28.245500       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.4:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.4:8443: connect: connection refused" logger="UnhandledError"
	W0927 00:47:28.255196       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.4:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.4:8443: connect: connection refused
	E0927 00:47:28.255324       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.4:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.4:8443: connect: connection refused" logger="UnhandledError"
	W0927 00:47:33.628044       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0927 00:47:33.628511       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:47:33.629202       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0927 00:47:33.629395       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 00:47:33.629719       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0927 00:47:33.629819       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:47:33.630251       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0927 00:47:33.630501       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:47:33.630623       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0927 00:47:33.630726       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0927 00:47:38.406780       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 27 00:47:52 ha-631834 kubelet[1309]: I0927 00:47:52.070943    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-7dff88458-hczmj" podStartSLOduration=511.737634151 podStartE2EDuration="8m35.070914858s" podCreationTimestamp="2024-09-27 00:39:17 +0000 UTC" firstStartedPulling="2024-09-27 00:39:18.625635393 +0000 UTC m=+160.267612398" lastFinishedPulling="2024-09-27 00:39:21.958916089 +0000 UTC m=+163.600893105" observedRunningTime="2024-09-27 00:39:22.205620819 +0000 UTC m=+163.847597846" watchObservedRunningTime="2024-09-27 00:47:52.070914858 +0000 UTC m=+673.712891876"
	Sep 27 00:47:58 ha-631834 kubelet[1309]: E0927 00:47:58.702676    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727398078702021171,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:47:58 ha-631834 kubelet[1309]: E0927 00:47:58.702984    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727398078702021171,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:48:08 ha-631834 kubelet[1309]: E0927 00:48:08.704929    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727398088704477930,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:48:08 ha-631834 kubelet[1309]: E0927 00:48:08.705251    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727398088704477930,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:48:18 ha-631834 kubelet[1309]: E0927 00:48:18.707582    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727398098707123205,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:48:18 ha-631834 kubelet[1309]: E0927 00:48:18.707620    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727398098707123205,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:48:28 ha-631834 kubelet[1309]: E0927 00:48:28.710718    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727398108709727872,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:48:28 ha-631834 kubelet[1309]: E0927 00:48:28.711127    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727398108709727872,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:48:35 ha-631834 kubelet[1309]: I0927 00:48:35.485509    1309 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-vip-ha-631834" podUID="58aa0bcf-1f78-4ee9-8a7b-18afaf6a634c"
	Sep 27 00:48:35 ha-631834 kubelet[1309]: I0927 00:48:35.501999    1309 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-631834"
	Sep 27 00:48:38 ha-631834 kubelet[1309]: E0927 00:48:38.509410    1309 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 27 00:48:38 ha-631834 kubelet[1309]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 27 00:48:38 ha-631834 kubelet[1309]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 27 00:48:38 ha-631834 kubelet[1309]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 27 00:48:38 ha-631834 kubelet[1309]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 27 00:48:38 ha-631834 kubelet[1309]: I0927 00:48:38.511034    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-631834" podStartSLOduration=3.511002587 podStartE2EDuration="3.511002587s" podCreationTimestamp="2024-09-27 00:48:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-27 00:48:38.510308571 +0000 UTC m=+720.152285616" watchObservedRunningTime="2024-09-27 00:48:38.511002587 +0000 UTC m=+720.152979611"
	Sep 27 00:48:38 ha-631834 kubelet[1309]: E0927 00:48:38.713717    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727398118713452435,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:48:38 ha-631834 kubelet[1309]: E0927 00:48:38.713787    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727398118713452435,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:48:48 ha-631834 kubelet[1309]: E0927 00:48:48.721665    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727398128718017182,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:48:48 ha-631834 kubelet[1309]: E0927 00:48:48.722775    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727398128718017182,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:48:58 ha-631834 kubelet[1309]: E0927 00:48:58.726169    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727398138725546755,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:48:58 ha-631834 kubelet[1309]: E0927 00:48:58.726275    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727398138725546755,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:49:08 ha-631834 kubelet[1309]: E0927 00:49:08.731366    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727398148730887804,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:49:08 ha-631834 kubelet[1309]: E0927 00:49:08.731630    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727398148730887804,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0927 00:49:11.095800   40938 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19711-14935/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-631834 -n ha-631834
helpers_test.go:261: (dbg) Run:  kubectl --context ha-631834 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (359.58s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (141.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 stop -v=7 --alsologtostderr
E0927 00:50:10.486581   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/functional-774677/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-631834 stop -v=7 --alsologtostderr: exit status 82 (2m0.447902124s)

                                                
                                                
-- stdout --
	* Stopping node "ha-631834-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0927 00:49:31.143419   41380 out.go:345] Setting OutFile to fd 1 ...
	I0927 00:49:31.143673   41380 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:49:31.143682   41380 out.go:358] Setting ErrFile to fd 2...
	I0927 00:49:31.143686   41380 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:49:31.143837   41380 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-14935/.minikube/bin
	I0927 00:49:31.144065   41380 out.go:352] Setting JSON to false
	I0927 00:49:31.144140   41380 mustload.go:65] Loading cluster: ha-631834
	I0927 00:49:31.144522   41380 config.go:182] Loaded profile config "ha-631834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 00:49:31.144603   41380 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/config.json ...
	I0927 00:49:31.144778   41380 mustload.go:65] Loading cluster: ha-631834
	I0927 00:49:31.144902   41380 config.go:182] Loaded profile config "ha-631834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 00:49:31.144930   41380 stop.go:39] StopHost: ha-631834-m04
	I0927 00:49:31.145303   41380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:49:31.145349   41380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:49:31.160423   41380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39375
	I0927 00:49:31.160908   41380 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:49:31.161475   41380 main.go:141] libmachine: Using API Version  1
	I0927 00:49:31.161497   41380 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:49:31.161781   41380 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:49:31.164117   41380 out.go:177] * Stopping node "ha-631834-m04"  ...
	I0927 00:49:31.165312   41380 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0927 00:49:31.165340   41380 main.go:141] libmachine: (ha-631834-m04) Calling .DriverName
	I0927 00:49:31.165589   41380 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0927 00:49:31.165623   41380 main.go:141] libmachine: (ha-631834-m04) Calling .GetSSHHostname
	I0927 00:49:31.168443   41380 main.go:141] libmachine: (ha-631834-m04) DBG | domain ha-631834-m04 has defined MAC address 52:54:00:ec:35:28 in network mk-ha-631834
	I0927 00:49:31.168787   41380 main.go:141] libmachine: (ha-631834-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:35:28", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:48:59 +0000 UTC Type:0 Mac:52:54:00:ec:35:28 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-631834-m04 Clientid:01:52:54:00:ec:35:28}
	I0927 00:49:31.168826   41380 main.go:141] libmachine: (ha-631834-m04) DBG | domain ha-631834-m04 has defined IP address 192.168.39.79 and MAC address 52:54:00:ec:35:28 in network mk-ha-631834
	I0927 00:49:31.168942   41380 main.go:141] libmachine: (ha-631834-m04) Calling .GetSSHPort
	I0927 00:49:31.169109   41380 main.go:141] libmachine: (ha-631834-m04) Calling .GetSSHKeyPath
	I0927 00:49:31.169270   41380 main.go:141] libmachine: (ha-631834-m04) Calling .GetSSHUsername
	I0927 00:49:31.169411   41380 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834-m04/id_rsa Username:docker}
	I0927 00:49:31.249611   41380 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0927 00:49:31.302859   41380 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0927 00:49:31.355615   41380 main.go:141] libmachine: Stopping "ha-631834-m04"...
	I0927 00:49:31.355640   41380 main.go:141] libmachine: (ha-631834-m04) Calling .GetState
	I0927 00:49:31.357061   41380 main.go:141] libmachine: (ha-631834-m04) Calling .Stop
	I0927 00:49:31.360252   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 0/120
	I0927 00:49:32.362832   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 1/120
	I0927 00:49:33.364142   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 2/120
	I0927 00:49:34.365503   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 3/120
	I0927 00:49:35.366765   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 4/120
	I0927 00:49:36.368752   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 5/120
	I0927 00:49:37.370774   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 6/120
	I0927 00:49:38.372148   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 7/120
	I0927 00:49:39.373347   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 8/120
	I0927 00:49:40.374764   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 9/120
	I0927 00:49:41.377033   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 10/120
	I0927 00:49:42.378444   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 11/120
	I0927 00:49:43.379655   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 12/120
	I0927 00:49:44.380921   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 13/120
	I0927 00:49:45.382174   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 14/120
	I0927 00:49:46.384066   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 15/120
	I0927 00:49:47.385796   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 16/120
	I0927 00:49:48.387140   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 17/120
	I0927 00:49:49.388495   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 18/120
	I0927 00:49:50.389890   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 19/120
	I0927 00:49:51.391467   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 20/120
	I0927 00:49:52.392771   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 21/120
	I0927 00:49:53.394027   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 22/120
	I0927 00:49:54.395379   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 23/120
	I0927 00:49:55.396714   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 24/120
	I0927 00:49:56.398510   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 25/120
	I0927 00:49:57.399795   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 26/120
	I0927 00:49:58.401491   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 27/120
	I0927 00:49:59.403638   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 28/120
	I0927 00:50:00.405054   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 29/120
	I0927 00:50:01.407186   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 30/120
	I0927 00:50:02.408590   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 31/120
	I0927 00:50:03.410043   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 32/120
	I0927 00:50:04.411808   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 33/120
	I0927 00:50:05.413630   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 34/120
	I0927 00:50:06.415522   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 35/120
	I0927 00:50:07.416801   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 36/120
	I0927 00:50:08.418699   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 37/120
	I0927 00:50:09.420990   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 38/120
	I0927 00:50:10.422155   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 39/120
	I0927 00:50:11.424052   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 40/120
	I0927 00:50:12.425625   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 41/120
	I0927 00:50:13.426758   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 42/120
	I0927 00:50:14.427967   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 43/120
	I0927 00:50:15.429709   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 44/120
	I0927 00:50:16.431390   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 45/120
	I0927 00:50:17.432759   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 46/120
	I0927 00:50:18.434605   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 47/120
	I0927 00:50:19.435922   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 48/120
	I0927 00:50:20.437766   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 49/120
	I0927 00:50:21.439415   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 50/120
	I0927 00:50:22.440652   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 51/120
	I0927 00:50:23.441880   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 52/120
	I0927 00:50:24.443089   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 53/120
	I0927 00:50:25.444375   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 54/120
	I0927 00:50:26.446192   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 55/120
	I0927 00:50:27.447547   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 56/120
	I0927 00:50:28.449861   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 57/120
	I0927 00:50:29.450992   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 58/120
	I0927 00:50:30.452234   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 59/120
	I0927 00:50:31.453973   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 60/120
	I0927 00:50:32.455179   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 61/120
	I0927 00:50:33.456576   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 62/120
	I0927 00:50:34.457787   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 63/120
	I0927 00:50:35.458919   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 64/120
	I0927 00:50:36.460536   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 65/120
	I0927 00:50:37.461692   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 66/120
	I0927 00:50:38.463137   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 67/120
	I0927 00:50:39.464475   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 68/120
	I0927 00:50:40.465687   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 69/120
	I0927 00:50:41.467514   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 70/120
	I0927 00:50:42.469820   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 71/120
	I0927 00:50:43.471524   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 72/120
	I0927 00:50:44.472686   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 73/120
	I0927 00:50:45.473964   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 74/120
	I0927 00:50:46.475847   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 75/120
	I0927 00:50:47.477750   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 76/120
	I0927 00:50:48.479143   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 77/120
	I0927 00:50:49.480377   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 78/120
	I0927 00:50:50.481645   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 79/120
	I0927 00:50:51.483496   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 80/120
	I0927 00:50:52.484691   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 81/120
	I0927 00:50:53.485987   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 82/120
	I0927 00:50:54.487322   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 83/120
	I0927 00:50:55.488422   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 84/120
	I0927 00:50:56.490249   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 85/120
	I0927 00:50:57.491461   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 86/120
	I0927 00:50:58.493634   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 87/120
	I0927 00:50:59.494991   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 88/120
	I0927 00:51:00.496254   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 89/120
	I0927 00:51:01.498348   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 90/120
	I0927 00:51:02.499650   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 91/120
	I0927 00:51:03.501773   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 92/120
	I0927 00:51:04.503094   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 93/120
	I0927 00:51:05.504433   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 94/120
	I0927 00:51:06.506638   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 95/120
	I0927 00:51:07.507852   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 96/120
	I0927 00:51:08.509593   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 97/120
	I0927 00:51:09.510765   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 98/120
	I0927 00:51:10.511909   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 99/120
	I0927 00:51:11.513591   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 100/120
	I0927 00:51:12.514867   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 101/120
	I0927 00:51:13.516102   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 102/120
	I0927 00:51:14.517535   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 103/120
	I0927 00:51:15.518915   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 104/120
	I0927 00:51:16.520580   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 105/120
	I0927 00:51:17.521745   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 106/120
	I0927 00:51:18.523020   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 107/120
	I0927 00:51:19.524343   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 108/120
	I0927 00:51:20.525709   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 109/120
	I0927 00:51:21.527536   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 110/120
	I0927 00:51:22.528838   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 111/120
	I0927 00:51:23.530727   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 112/120
	I0927 00:51:24.532043   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 113/120
	I0927 00:51:25.533981   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 114/120
	I0927 00:51:26.535709   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 115/120
	I0927 00:51:27.537579   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 116/120
	I0927 00:51:28.538901   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 117/120
	I0927 00:51:29.540665   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 118/120
	I0927 00:51:30.542718   41380 main.go:141] libmachine: (ha-631834-m04) Waiting for machine to stop 119/120
	I0927 00:51:31.543265   41380 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0927 00:51:31.543348   41380 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0927 00:51:31.545155   41380 out.go:201] 
	W0927 00:51:31.546563   41380 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0927 00:51:31.546582   41380 out.go:270] * 
	* 
	W0927 00:51:31.548862   41380 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0927 00:51:31.550266   41380 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-631834 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Done: out/minikube-linux-amd64 -p ha-631834 status -v=7 --alsologtostderr: (19.015610817s)
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-631834 status -v=7 --alsologtostderr": 
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-631834 status -v=7 --alsologtostderr": 
ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-linux-amd64 -p ha-631834 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-631834 -n ha-631834
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-631834 logs -n 25: (1.633572886s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-631834 ssh -n ha-631834-m02 sudo cat                                         | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | /home/docker/cp-test_ha-631834-m03_ha-631834-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-631834 cp ha-631834-m03:/home/docker/cp-test.txt                             | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m04:/home/docker/cp-test_ha-631834-m03_ha-631834-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n                                                                | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n ha-631834-m04 sudo cat                                         | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | /home/docker/cp-test_ha-631834-m03_ha-631834-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-631834 cp testdata/cp-test.txt                                               | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n                                                                | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-631834 cp ha-631834-m04:/home/docker/cp-test.txt                             | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile381097914/001/cp-test_ha-631834-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n                                                                | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-631834 cp ha-631834-m04:/home/docker/cp-test.txt                             | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834:/home/docker/cp-test_ha-631834-m04_ha-631834.txt                      |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n                                                                | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n ha-631834 sudo cat                                             | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | /home/docker/cp-test_ha-631834-m04_ha-631834.txt                                |           |         |         |                     |                     |
	| cp      | ha-631834 cp ha-631834-m04:/home/docker/cp-test.txt                             | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m02:/home/docker/cp-test_ha-631834-m04_ha-631834-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n                                                                | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n ha-631834-m02 sudo cat                                         | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | /home/docker/cp-test_ha-631834-m04_ha-631834-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-631834 cp ha-631834-m04:/home/docker/cp-test.txt                             | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m03:/home/docker/cp-test_ha-631834-m04_ha-631834-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n                                                                | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | ha-631834-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-631834 ssh -n ha-631834-m03 sudo cat                                         | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC | 27 Sep 24 00:40 UTC |
	|         | /home/docker/cp-test_ha-631834-m04_ha-631834-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-631834 node stop m02 -v=7                                                    | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:40 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-631834 node start m02 -v=7                                                   | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:43 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-631834 -v=7                                                          | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:43 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | -p ha-631834 -v=7                                                               | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:43 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-631834 --wait=true -v=7                                                   | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:45 UTC | 27 Sep 24 00:49 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-631834                                                               | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:49 UTC |                     |
	| node    | ha-631834 node delete m03 -v=7                                                  | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:49 UTC | 27 Sep 24 00:49 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | ha-631834 stop -v=7                                                             | ha-631834 | jenkins | v1.34.0 | 27 Sep 24 00:49 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 00:45:15
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 00:45:15.620382   39669 out.go:345] Setting OutFile to fd 1 ...
	I0927 00:45:15.620522   39669 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:45:15.620532   39669 out.go:358] Setting ErrFile to fd 2...
	I0927 00:45:15.620539   39669 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:45:15.620751   39669 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-14935/.minikube/bin
	I0927 00:45:15.621302   39669 out.go:352] Setting JSON to false
	I0927 00:45:15.622234   39669 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5261,"bootTime":1727392655,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 00:45:15.622326   39669 start.go:139] virtualization: kvm guest
	I0927 00:45:15.624489   39669 out.go:177] * [ha-631834] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0927 00:45:15.625903   39669 out.go:177]   - MINIKUBE_LOCATION=19711
	I0927 00:45:15.625920   39669 notify.go:220] Checking for updates...
	I0927 00:45:15.628160   39669 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 00:45:15.629534   39669 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 00:45:15.630776   39669 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-14935/.minikube
	I0927 00:45:15.632011   39669 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0927 00:45:15.633204   39669 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 00:45:15.634877   39669 config.go:182] Loaded profile config "ha-631834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 00:45:15.634986   39669 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 00:45:15.635673   39669 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:45:15.635736   39669 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:45:15.651157   39669 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46713
	I0927 00:45:15.651683   39669 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:45:15.652236   39669 main.go:141] libmachine: Using API Version  1
	I0927 00:45:15.652268   39669 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:45:15.652622   39669 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:45:15.652799   39669 main.go:141] libmachine: (ha-631834) Calling .DriverName
	I0927 00:45:15.687905   39669 out.go:177] * Using the kvm2 driver based on existing profile
	I0927 00:45:15.689273   39669 start.go:297] selected driver: kvm2
	I0927 00:45:15.689290   39669 start.go:901] validating driver "kvm2" against &{Name:ha-631834 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-631834 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.184 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.92 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.79 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 00:45:15.689480   39669 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 00:45:15.689941   39669 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 00:45:15.690022   39669 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19711-14935/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0927 00:45:15.705881   39669 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0927 00:45:15.706672   39669 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 00:45:15.706712   39669 cni.go:84] Creating CNI manager for ""
	I0927 00:45:15.706778   39669 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0927 00:45:15.706845   39669 start.go:340] cluster config:
	{Name:ha-631834 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-631834 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.184 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.92 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.79 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 00:45:15.707008   39669 iso.go:125] acquiring lock: {Name:mkc202a14fbe20838e31e7efc444c4f65351f9ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 00:45:15.708878   39669 out.go:177] * Starting "ha-631834" primary control-plane node in "ha-631834" cluster
	I0927 00:45:15.710078   39669 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 00:45:15.710108   39669 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0927 00:45:15.710114   39669 cache.go:56] Caching tarball of preloaded images
	I0927 00:45:15.710189   39669 preload.go:172] Found /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0927 00:45:15.710200   39669 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0927 00:45:15.710312   39669 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/config.json ...
	I0927 00:45:15.710505   39669 start.go:360] acquireMachinesLock for ha-631834: {Name:mkef866a3f2eb57063758ef06140cca3a2e40b18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 00:45:15.710544   39669 start.go:364] duration metric: took 22.057µs to acquireMachinesLock for "ha-631834"
	I0927 00:45:15.710556   39669 start.go:96] Skipping create...Using existing machine configuration
	I0927 00:45:15.710563   39669 fix.go:54] fixHost starting: 
	I0927 00:45:15.710830   39669 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:45:15.710860   39669 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:45:15.725125   39669 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40481
	I0927 00:45:15.725617   39669 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:45:15.726108   39669 main.go:141] libmachine: Using API Version  1
	I0927 00:45:15.726129   39669 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:45:15.726478   39669 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:45:15.726649   39669 main.go:141] libmachine: (ha-631834) Calling .DriverName
	I0927 00:45:15.726797   39669 main.go:141] libmachine: (ha-631834) Calling .GetState
	I0927 00:45:15.728316   39669 fix.go:112] recreateIfNeeded on ha-631834: state=Running err=<nil>
	W0927 00:45:15.728333   39669 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 00:45:15.730201   39669 out.go:177] * Updating the running kvm2 "ha-631834" VM ...
	I0927 00:45:15.731366   39669 machine.go:93] provisionDockerMachine start ...
	I0927 00:45:15.731387   39669 main.go:141] libmachine: (ha-631834) Calling .DriverName
	I0927 00:45:15.731577   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:45:15.733917   39669 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:45:15.734339   39669 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:45:15.734365   39669 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:45:15.734493   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:45:15.734637   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:45:15.734779   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:45:15.734893   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:45:15.735030   39669 main.go:141] libmachine: Using SSH client type: native
	I0927 00:45:15.735252   39669 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0927 00:45:15.735265   39669 main.go:141] libmachine: About to run SSH command:
	hostname
	I0927 00:45:15.865827   39669 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-631834
	
	I0927 00:45:15.865873   39669 main.go:141] libmachine: (ha-631834) Calling .GetMachineName
	I0927 00:45:15.866109   39669 buildroot.go:166] provisioning hostname "ha-631834"
	I0927 00:45:15.866134   39669 main.go:141] libmachine: (ha-631834) Calling .GetMachineName
	I0927 00:45:15.866284   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:45:15.868858   39669 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:45:15.869257   39669 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:45:15.869289   39669 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:45:15.869365   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:45:15.869512   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:45:15.869658   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:45:15.869882   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:45:15.870031   39669 main.go:141] libmachine: Using SSH client type: native
	I0927 00:45:15.870196   39669 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0927 00:45:15.870206   39669 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-631834 && echo "ha-631834" | sudo tee /etc/hostname
	I0927 00:45:15.999438   39669 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-631834
	
	I0927 00:45:15.999464   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:45:16.002256   39669 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:45:16.002596   39669 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:45:16.002622   39669 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:45:16.002789   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:45:16.002976   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:45:16.003132   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:45:16.003264   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:45:16.003419   39669 main.go:141] libmachine: Using SSH client type: native
	I0927 00:45:16.003618   39669 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0927 00:45:16.003634   39669 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-631834' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-631834/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-631834' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 00:45:16.124602   39669 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 00:45:16.124633   39669 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19711-14935/.minikube CaCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19711-14935/.minikube}
	I0927 00:45:16.124674   39669 buildroot.go:174] setting up certificates
	I0927 00:45:16.124689   39669 provision.go:84] configureAuth start
	I0927 00:45:16.124703   39669 main.go:141] libmachine: (ha-631834) Calling .GetMachineName
	I0927 00:45:16.124958   39669 main.go:141] libmachine: (ha-631834) Calling .GetIP
	I0927 00:45:16.127674   39669 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:45:16.128053   39669 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:45:16.128071   39669 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:45:16.128251   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:45:16.130467   39669 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:45:16.130824   39669 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:45:16.130850   39669 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:45:16.130993   39669 provision.go:143] copyHostCerts
	I0927 00:45:16.131022   39669 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem
	I0927 00:45:16.131077   39669 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem, removing ...
	I0927 00:45:16.131089   39669 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem
	I0927 00:45:16.131177   39669 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem (1078 bytes)
	I0927 00:45:16.131282   39669 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem
	I0927 00:45:16.131321   39669 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem, removing ...
	I0927 00:45:16.131337   39669 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem
	I0927 00:45:16.131379   39669 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem (1123 bytes)
	I0927 00:45:16.131483   39669 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem
	I0927 00:45:16.131506   39669 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem, removing ...
	I0927 00:45:16.131511   39669 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem
	I0927 00:45:16.131546   39669 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem (1675 bytes)
	I0927 00:45:16.131611   39669 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem org=jenkins.ha-631834 san=[127.0.0.1 192.168.39.4 ha-631834 localhost minikube]
	I0927 00:45:16.246173   39669 provision.go:177] copyRemoteCerts
	I0927 00:45:16.246258   39669 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 00:45:16.246285   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:45:16.248804   39669 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:45:16.249141   39669 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:45:16.249168   39669 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:45:16.249338   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:45:16.249518   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:45:16.249717   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:45:16.249845   39669 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834/id_rsa Username:docker}
	I0927 00:45:16.338669   39669 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0927 00:45:16.338752   39669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0927 00:45:16.366057   39669 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0927 00:45:16.366143   39669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0927 00:45:16.392473   39669 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0927 00:45:16.392544   39669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0927 00:45:16.418486   39669 provision.go:87] duration metric: took 293.782736ms to configureAuth
	I0927 00:45:16.418514   39669 buildroot.go:189] setting minikube options for container-runtime
	I0927 00:45:16.418809   39669 config.go:182] Loaded profile config "ha-631834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 00:45:16.418894   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:45:16.421316   39669 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:45:16.421670   39669 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:45:16.421696   39669 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:45:16.421870   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:45:16.422053   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:45:16.422187   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:45:16.422322   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:45:16.422459   39669 main.go:141] libmachine: Using SSH client type: native
	I0927 00:45:16.422660   39669 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0927 00:45:16.422682   39669 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 00:46:47.255178   39669 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 00:46:47.255208   39669 machine.go:96] duration metric: took 1m31.5238267s to provisionDockerMachine
	I0927 00:46:47.255221   39669 start.go:293] postStartSetup for "ha-631834" (driver="kvm2")
	I0927 00:46:47.255234   39669 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 00:46:47.255253   39669 main.go:141] libmachine: (ha-631834) Calling .DriverName
	I0927 00:46:47.255565   39669 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 00:46:47.255599   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:46:47.258683   39669 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:46:47.259119   39669 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:46:47.259146   39669 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:46:47.259275   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:46:47.259451   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:46:47.259630   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:46:47.259763   39669 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834/id_rsa Username:docker}
	I0927 00:46:47.346533   39669 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 00:46:47.350930   39669 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 00:46:47.350952   39669 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/addons for local assets ...
	I0927 00:46:47.351011   39669 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/files for local assets ...
	I0927 00:46:47.351096   39669 filesync.go:149] local asset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> 221382.pem in /etc/ssl/certs
	I0927 00:46:47.351108   39669 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> /etc/ssl/certs/221382.pem
	I0927 00:46:47.351226   39669 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 00:46:47.360896   39669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /etc/ssl/certs/221382.pem (1708 bytes)
	I0927 00:46:47.385556   39669 start.go:296] duration metric: took 130.322943ms for postStartSetup
	I0927 00:46:47.385594   39669 main.go:141] libmachine: (ha-631834) Calling .DriverName
	I0927 00:46:47.385863   39669 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0927 00:46:47.385888   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:46:47.388244   39669 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:46:47.388615   39669 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:46:47.388638   39669 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:46:47.388772   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:46:47.388955   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:46:47.389103   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:46:47.389210   39669 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834/id_rsa Username:docker}
	W0927 00:46:47.473870   39669 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0927 00:46:47.473901   39669 fix.go:56] duration metric: took 1m31.763337076s for fixHost
	I0927 00:46:47.473927   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:46:47.476481   39669 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:46:47.476835   39669 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:46:47.476877   39669 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:46:47.477009   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:46:47.477187   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:46:47.477331   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:46:47.477459   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:46:47.477588   39669 main.go:141] libmachine: Using SSH client type: native
	I0927 00:46:47.477801   39669 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0927 00:46:47.477814   39669 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 00:46:47.592268   39669 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727398007.556425815
	
	I0927 00:46:47.592290   39669 fix.go:216] guest clock: 1727398007.556425815
	I0927 00:46:47.592297   39669 fix.go:229] Guest: 2024-09-27 00:46:47.556425815 +0000 UTC Remote: 2024-09-27 00:46:47.473910129 +0000 UTC m=+91.887913645 (delta=82.515686ms)
	I0927 00:46:47.592315   39669 fix.go:200] guest clock delta is within tolerance: 82.515686ms
	I0927 00:46:47.592319   39669 start.go:83] releasing machines lock for "ha-631834", held for 1m31.881767828s
	I0927 00:46:47.592336   39669 main.go:141] libmachine: (ha-631834) Calling .DriverName
	I0927 00:46:47.592579   39669 main.go:141] libmachine: (ha-631834) Calling .GetIP
	I0927 00:46:47.595053   39669 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:46:47.595526   39669 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:46:47.595559   39669 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:46:47.595724   39669 main.go:141] libmachine: (ha-631834) Calling .DriverName
	I0927 00:46:47.596182   39669 main.go:141] libmachine: (ha-631834) Calling .DriverName
	I0927 00:46:47.596335   39669 main.go:141] libmachine: (ha-631834) Calling .DriverName
	I0927 00:46:47.596460   39669 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 00:46:47.596505   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:46:47.596528   39669 ssh_runner.go:195] Run: cat /version.json
	I0927 00:46:47.596545   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHHostname
	I0927 00:46:47.598887   39669 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:46:47.599331   39669 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:46:47.599356   39669 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:46:47.599374   39669 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:46:47.599469   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:46:47.599627   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:46:47.599771   39669 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:46:47.599771   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:46:47.599790   39669 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:46:47.599920   39669 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834/id_rsa Username:docker}
	I0927 00:46:47.599943   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHPort
	I0927 00:46:47.600051   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHKeyPath
	I0927 00:46:47.600171   39669 main.go:141] libmachine: (ha-631834) Calling .GetSSHUsername
	I0927 00:46:47.600248   39669 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/ha-631834/id_rsa Username:docker}
	I0927 00:46:47.688689   39669 ssh_runner.go:195] Run: systemctl --version
	I0927 00:46:47.714398   39669 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 00:46:47.879371   39669 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 00:46:47.886036   39669 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 00:46:47.886106   39669 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 00:46:47.897179   39669 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0927 00:46:47.897200   39669 start.go:495] detecting cgroup driver to use...
	I0927 00:46:47.897251   39669 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 00:46:47.915667   39669 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 00:46:47.932254   39669 docker.go:217] disabling cri-docker service (if available) ...
	I0927 00:46:47.932303   39669 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 00:46:47.949419   39669 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 00:46:47.965392   39669 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 00:46:48.131365   39669 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 00:46:48.287077   39669 docker.go:233] disabling docker service ...
	I0927 00:46:48.287148   39669 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 00:46:48.308103   39669 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 00:46:48.322916   39669 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 00:46:48.493607   39669 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 00:46:48.649560   39669 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 00:46:48.663603   39669 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 00:46:48.682388   39669 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0927 00:46:48.682441   39669 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:46:48.693147   39669 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 00:46:48.693209   39669 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:46:48.704362   39669 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:46:48.715430   39669 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:46:48.726552   39669 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 00:46:48.737897   39669 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:46:48.749082   39669 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:46:48.761612   39669 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:46:48.772464   39669 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 00:46:48.782645   39669 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 00:46:48.792034   39669 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:46:48.934287   39669 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0927 00:46:49.970207   39669 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.035884557s)
	I0927 00:46:49.970237   39669 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 00:46:49.970288   39669 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 00:46:49.975278   39669 start.go:563] Will wait 60s for crictl version
	I0927 00:46:49.975346   39669 ssh_runner.go:195] Run: which crictl
	I0927 00:46:49.979282   39669 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 00:46:50.016619   39669 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0927 00:46:50.016699   39669 ssh_runner.go:195] Run: crio --version
	I0927 00:46:50.045534   39669 ssh_runner.go:195] Run: crio --version
	I0927 00:46:50.077277   39669 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0927 00:46:50.078595   39669 main.go:141] libmachine: (ha-631834) Calling .GetIP
	I0927 00:46:50.081296   39669 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:46:50.081618   39669 main.go:141] libmachine: (ha-631834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:09:a5", ip: ""} in network mk-ha-631834: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:15 +0000 UTC Type:0 Mac:52:54:00:bc:09:a5 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-631834 Clientid:01:52:54:00:bc:09:a5}
	I0927 00:46:50.081646   39669 main.go:141] libmachine: (ha-631834) DBG | domain ha-631834 has defined IP address 192.168.39.4 and MAC address 52:54:00:bc:09:a5 in network mk-ha-631834
	I0927 00:46:50.081876   39669 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0927 00:46:50.086621   39669 kubeadm.go:883] updating cluster {Name:ha-631834 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-631834 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.184 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.92 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.79 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 00:46:50.086742   39669 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 00:46:50.086792   39669 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 00:46:50.131171   39669 crio.go:514] all images are preloaded for cri-o runtime.
	I0927 00:46:50.131190   39669 crio.go:433] Images already preloaded, skipping extraction
	I0927 00:46:50.131243   39669 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 00:46:50.165747   39669 crio.go:514] all images are preloaded for cri-o runtime.
	I0927 00:46:50.165769   39669 cache_images.go:84] Images are preloaded, skipping loading
	I0927 00:46:50.165780   39669 kubeadm.go:934] updating node { 192.168.39.4 8443 v1.31.1 crio true true} ...
	I0927 00:46:50.165882   39669 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-631834 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-631834 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 00:46:50.165954   39669 ssh_runner.go:195] Run: crio config
	I0927 00:46:50.213216   39669 cni.go:84] Creating CNI manager for ""
	I0927 00:46:50.213240   39669 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0927 00:46:50.213249   39669 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 00:46:50.213300   39669 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.4 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-631834 NodeName:ha-631834 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0927 00:46:50.213486   39669 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.4
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-631834"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0927 00:46:50.213508   39669 kube-vip.go:115] generating kube-vip config ...
	I0927 00:46:50.213557   39669 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0927 00:46:50.225266   39669 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0927 00:46:50.225354   39669 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
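	The generated static pod above runs kube-vip with leader election (vip_leaseduration 5, vip_renewdeadline 3, vip_retryperiod 1) and control-plane load-balancing enabled on the VIP 192.168.39.254:8443. Purely as an illustration and not part of the test run, a rough probe of that VIP could look like the sketch below; the InsecureSkipVerify setting is an assumption to keep the sketch self-contained and is not how the test verifies the endpoint:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net"
		"time"
	)

	func main() {
		// VIP address and port taken from the kube-vip config above.
		addr := net.JoinHostPort("192.168.39.254", "8443")
		conn, err := tls.DialWithDialer(&net.Dialer{Timeout: 5 * time.Second}, "tcp", addr,
			&tls.Config{InsecureSkipVerify: true}) // illustration only; skips CA verification
		if err != nil {
			fmt.Println("VIP not reachable:", err)
			return
		}
		defer conn.Close()
		fmt.Printf("API server VIP answered TLS handshake (version 0x%x)\n", conn.ConnectionState().Version)
	}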
	I0927 00:46:50.225405   39669 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 00:46:50.235071   39669 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 00:46:50.235137   39669 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0927 00:46:50.244236   39669 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0927 00:46:50.260449   39669 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 00:46:50.276909   39669 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0927 00:46:50.293310   39669 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0927 00:46:50.310086   39669 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0927 00:46:50.315169   39669 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:46:50.457736   39669 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 00:46:50.474066   39669 certs.go:68] Setting up /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834 for IP: 192.168.39.4
	I0927 00:46:50.474109   39669 certs.go:194] generating shared ca certs ...
	I0927 00:46:50.474129   39669 certs.go:226] acquiring lock for ca certs: {Name:mkdfc5b7e93f77f5ae72cc653545624244421aa5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:46:50.474269   39669 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key
	I0927 00:46:50.474305   39669 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key
	I0927 00:46:50.474314   39669 certs.go:256] generating profile certs ...
	I0927 00:46:50.474382   39669 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/client.key
	I0927 00:46:50.474409   39669 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key.938c64d2
	I0927 00:46:50.474423   39669 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt.938c64d2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.4 192.168.39.184 192.168.39.92 192.168.39.254]
	I0927 00:46:50.646860   39669 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt.938c64d2 ...
	I0927 00:46:50.646893   39669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt.938c64d2: {Name:mk1bb4e1a7b279c05f6cee4665ac52af09113e94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:46:50.647055   39669 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key.938c64d2 ...
	I0927 00:46:50.647067   39669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key.938c64d2: {Name:mk314247be74517e74521d2d0e949da0d20854a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:46:50.647155   39669 certs.go:381] copying /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt.938c64d2 -> /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt
	I0927 00:46:50.647340   39669 certs.go:385] copying /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key.938c64d2 -> /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key
	I0927 00:46:50.647476   39669 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.key
	I0927 00:46:50.647490   39669 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0927 00:46:50.647503   39669 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0927 00:46:50.647518   39669 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0927 00:46:50.647531   39669 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0927 00:46:50.647543   39669 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0927 00:46:50.647555   39669 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0927 00:46:50.647567   39669 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0927 00:46:50.647578   39669 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0927 00:46:50.647621   39669 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem (1338 bytes)
	W0927 00:46:50.647649   39669 certs.go:480] ignoring /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138_empty.pem, impossibly tiny 0 bytes
	I0927 00:46:50.647657   39669 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem (1671 bytes)
	I0927 00:46:50.647679   39669 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem (1078 bytes)
	I0927 00:46:50.647700   39669 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem (1123 bytes)
	I0927 00:46:50.647722   39669 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem (1675 bytes)
	I0927 00:46:50.647757   39669 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem (1708 bytes)
	I0927 00:46:50.647782   39669 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem -> /usr/share/ca-certificates/22138.pem
	I0927 00:46:50.647795   39669 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> /usr/share/ca-certificates/221382.pem
	I0927 00:46:50.647807   39669 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:46:50.648325   39669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 00:46:50.675015   39669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0927 00:46:50.699508   39669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 00:46:50.724729   39669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0927 00:46:50.750908   39669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0927 00:46:50.803364   39669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0927 00:46:50.827195   39669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 00:46:50.850972   39669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/ha-631834/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0927 00:46:50.875129   39669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem --> /usr/share/ca-certificates/22138.pem (1338 bytes)
	I0927 00:46:50.899086   39669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /usr/share/ca-certificates/221382.pem (1708 bytes)
	I0927 00:46:50.922297   39669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 00:46:50.945924   39669 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 00:46:50.962021   39669 ssh_runner.go:195] Run: openssl version
	I0927 00:46:50.968262   39669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22138.pem && ln -fs /usr/share/ca-certificates/22138.pem /etc/ssl/certs/22138.pem"
	I0927 00:46:50.979689   39669 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22138.pem
	I0927 00:46:50.984215   39669 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 00:32 /usr/share/ca-certificates/22138.pem
	I0927 00:46:50.984277   39669 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22138.pem
	I0927 00:46:50.990633   39669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22138.pem /etc/ssl/certs/51391683.0"
	I0927 00:46:51.000418   39669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221382.pem && ln -fs /usr/share/ca-certificates/221382.pem /etc/ssl/certs/221382.pem"
	I0927 00:46:51.012518   39669 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221382.pem
	I0927 00:46:51.017306   39669 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 00:32 /usr/share/ca-certificates/221382.pem
	I0927 00:46:51.017366   39669 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221382.pem
	I0927 00:46:51.023417   39669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221382.pem /etc/ssl/certs/3ec20f2e.0"
	I0927 00:46:51.033164   39669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 00:46:51.044417   39669 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:46:51.049229   39669 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:16 /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:46:51.049284   39669 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:46:51.055042   39669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 00:46:51.065109   39669 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 00:46:51.069685   39669 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0927 00:46:51.075469   39669 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0927 00:46:51.081374   39669 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0927 00:46:51.086742   39669 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0927 00:46:51.092682   39669 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0927 00:46:51.098560   39669 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
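	Each of the `openssl x509 ... -checkend 86400` runs above asserts that the certificate will still be valid 24 hours from now. A minimal Go equivalent of that check, with the certificate path copied from one of the runs above, might look like:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func main() {
		// Path taken from one of the checks in the log above.
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		// Equivalent of `openssl x509 -checkend 86400`: still valid 24h from now?
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate expires within 24h")
		} else {
			fmt.Println("certificate valid for at least another 24h")
		}
	}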
	I0927 00:46:51.103846   39669 kubeadm.go:392] StartCluster: {Name:ha-631834 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-631834 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.184 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.92 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.79 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:
false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mo
untMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 00:46:51.103960   39669 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0927 00:46:51.104019   39669 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 00:46:51.146164   39669 cri.go:89] found id: "e9e067a1fed15cfef10e131070af0e9b5d4f3b5e6bd6f50e2add6dfacf649c6b"
	I0927 00:46:51.146190   39669 cri.go:89] found id: "6afb57bcc4bfcdda739c48111b2456f7f6cc69bd08d6fcfb3350cd4359734fad"
	I0927 00:46:51.146195   39669 cri.go:89] found id: "09d6ef76d31a0a45df70f995dda62d413f610d2ededa8af74d94bb2e5282f290"
	I0927 00:46:51.146200   39669 cri.go:89] found id: "48bf9fa0669d9175727529363a4c49e51ac351fad94e73446f0f5dfe9ede418f"
	I0927 00:46:51.146204   39669 cri.go:89] found id: "f0d4e929a59caa5d6cdfb939587ec81dce00105e7b9350778204b299cf597427"
	I0927 00:46:51.146209   39669 cri.go:89] found id: "3c06ebd9099a79e7ccf81acb3dcdfa061f142b4657de196fa50e568e5b299930"
	I0927 00:46:51.146213   39669 cri.go:89] found id: "805b55d391308302ebc0884d741fd7ca86ffe2f6feed8bf7ab229f3729f34327"
	I0927 00:46:51.146217   39669 cri.go:89] found id: "182f24ac501b715adc06f080914c11407429e052bc7a726892761dd0a2d3a8e9"
	I0927 00:46:51.146220   39669 cri.go:89] found id: "555c7e8f6d5181676711d15bda6aa11fd8d84d9fff0f6e98280c72d5296aefad"
	I0927 00:46:51.146227   39669 cri.go:89] found id: "536c1c26f6d72525b81ce4c35ed530528a8cd001f4c530cea2e1d722325e76b3"
	I0927 00:46:51.146231   39669 cri.go:89] found id: "5c88792788fc238aaae860e14a6c44c40020da3356d29223917fe2fb2e8901ac"
	I0927 00:46:51.146234   39669 cri.go:89] found id: "aa717868fa66e6c86747ecfb1ac580a98666975a9c6974d3a1037451ff37576e"
	I0927 00:46:51.146236   39669 cri.go:89] found id: "5dcaba50a39a2f812258d986d3444002c5a887ee474104a98a69129c21ec40db"
	I0927 00:46:51.146239   39669 cri.go:89] found id: ""
	I0927 00:46:51.146277   39669 ssh_runner.go:195] Run: sudo runc list -f json
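	The container IDs listed above were collected from the `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` invocation a few lines earlier. Run locally rather than through ssh_runner, and assuming crictl and sudo are on PATH, an equivalent listing is sketched below:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same command the log shows being run over SSH, executed locally here.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		for _, id := range strings.Fields(string(out)) {
			fmt.Println("found id:", id)
		}
	}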
	
	
	==> CRI-O <==
	Sep 27 00:51:51 ha-631834 crio[3693]: time="2024-09-27 00:51:51.243715467Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2d3ca363-c71e-476a-8417-8cd34f24e02a name=/runtime.v1.RuntimeService/Version
	Sep 27 00:51:51 ha-631834 crio[3693]: time="2024-09-27 00:51:51.244924636Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d95af472-04d8-4c4f-bb26-c11e8cdd8130 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:51:51 ha-631834 crio[3693]: time="2024-09-27 00:51:51.245416308Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727398311245394893,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d95af472-04d8-4c4f-bb26-c11e8cdd8130 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:51:51 ha-631834 crio[3693]: time="2024-09-27 00:51:51.245904045Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f4d816eb-2272-4366-a27b-291b73f85922 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:51:51 ha-631834 crio[3693]: time="2024-09-27 00:51:51.245959007Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f4d816eb-2272-4366-a27b-291b73f85922 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:51:51 ha-631834 crio[3693]: time="2024-09-27 00:51:51.246401858Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8988c8b2e89d4cae95f059fe90bd6419c77bda9b7da567d71120d5b37d44b904,PodSandboxId:a88387509d8c47d8e1cf51f7c2c85475030c31e45457ea6774067aa5358eb8d8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727398063496006064,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbafe551-2645-4016-83f6-1133824d926d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af2833aa86bec997a9eac660980344b8caf026dee1b491f539a9024dc35b3dd5,PodSandboxId:a365021f4c4409bc7ef02241b1e8353cacc226176a8374acf1566bd10a57b2a5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727398049910096759,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hczmj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 55e4dd58-9193-49ba-a2e8-1c6835898fb1,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73c2e59cd28da30c784255b37b22005602829501c488d381587497738b1a190d,PodSandboxId:3f5eaa7b790b56c09c6bde23dd28d501b5c9b167eb904198c68292514134fac4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727398049209723311,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71a28d11a5db44bbf2777b262efa1514,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14c982482268a0741c4ea4b43b359ddf56e9c7a8963d1d5b697eccb9977cce45,PodSandboxId:1e81330291c0345d01677bc0e6f129d1c95393e00adbe8a7670e5e5776255bad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727398048084828510,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afee14d1206143c4d719c111467c379b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdd819bab4c02d8f590578a99c49dc031ad0e16fdd269749d709465e158511ed,PodSandboxId:47f2ed579b1da0a34f85a2ce3790a54eb441e35afd874466f304415c3642bf22,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727398030162040236,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2c19ca79cb21fa0ff63b2f19f35644a,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b875ed8e00bedb5eb1902895c4b2572101bd8ed13c0334beee29c833bdb420f,PodSandboxId:851d241b7a3fa4b5d3ed7ef3daf1effcab2ef39c36598b48bc6d0cb59bb5d135,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727398016720565212,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-l6ncl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3861149b-7c67-4d48-9d24-8fa08aefda61,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:b8db6d253c02d0e9ccdb6f17e99687133896a05f908abbbb072860ad547cb0e6,PodSandboxId:dd1921da801ddbb1557b9e203c535f9fac5d58ef79d8eea5b663bd4542e7d76a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727398016721287875,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kg8kf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee98faac-e03c-427f-9a78-2cf06d2f85cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TC
P\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:993366a0cc03df59289c28caf8ac0f7a3eaf5ca3ee7f79410d82c5c962efc0b1,PodSandboxId:4fc98d18b24b94a2a3e434010b1aab0a65fe4769deaf52d2d7abbb40be6322ac,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727398016503886203,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7n244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9fac118-1b31-4cf3-bc21-a4536e45a511,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d81bbea7c9e39b41a55665bdaab4478d402c76bb5d2308fe0d1e63301b1dcd2e,PodSandboxId:a88387509d8c47d8e1cf51f7c2c85475030c31e45457ea6774067aa5358eb8d8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727398016297449851,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbafe551-2645-4016-83f6-1133824d926d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69083186c23c45c853d932a68dc6a9fb513bf9b26f0169046d51c75b57a58b96,PodSandboxId:2163ce3d56b93317faffe4240dd147a31820077f2a34e6bcda084759b0068fb9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727398016484380313,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-479dv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee318b64-2274-4106-93ed-9f62151107f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b7ffd9dfb6283a77a910b62e4c801f24fc7c0059c7d1b3db21ae86fdaf9b585,PodSandboxId:2a75a0cdf184e9400231dc662d856f40efaa229fdab3a876dc729499f539e15a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727398016431138280,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-631834,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 10057dece9752ed428ddf4bfd465bb3d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3608e4904bcf67c5669cc8dfae0c10b769d49c63cad46043995a67c94c29d108,PodSandboxId:3f5eaa7b790b56c09c6bde23dd28d501b5c9b167eb904198c68292514134fac4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727398016511793966,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-631834,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 71a28d11a5db44bbf2777b262efa1514,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e553da3278170117765827feaa6ada5203f508283bebb0adf9105b677a147fc,PodSandboxId:8b21be7811c3b0fe2ce57ec24aeaaa5eedfdc234f89c09b3c8f0343f20e238f9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727398016325051081,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a32cc8b63ea212e
d38709daf6762cc1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c930f7f8b324fb82c55bdec2706385f6ba3dc086cb93f92b31f33bed9ae08db,PodSandboxId:1e81330291c0345d01677bc0e6f129d1c95393e00adbe8a7670e5e5776255bad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727398016273461650,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afee14d1206143c4d719c111467c379b,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74dc20e31bc6d7c20e5d68ee7fa69cfe0328a93ccef047ea1ef82155869ad406,PodSandboxId:ebc71356fe8860c5eadadc4bfc35fe223c81b382b7fa4f7400dfdd4e30cca8e9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727397561974441361,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hczmj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 55e4dd58-9193-49ba-a2e8-1c6835898fb1,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c06ebd9099a79e7ccf81acb3dcdfa061f142b4657de196fa50e568e5b299930,PodSandboxId:8f236d02ca028f9009a4efcc28e0562a8b0e8ec154921e53c93e5a527823c39a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727397416531871339,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kg8kf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee98faac-e03c-427f-9a78-2cf06d2f85cf,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0d4e929a59caa5d6cdfb939587ec81dce00105e7b9350778204b299cf597427,PodSandboxId:2cb3143c36c8e5612e26df2355c120393a34014b84051ee13e5f0f641240ed61,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727397416548905292,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-479dv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee318b64-2274-4106-93ed-9f62151107f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:805b55d391308302ebc0884d741fd7ca86ffe2f6feed8bf7ab229f3729f34327,PodSandboxId:7e2d35a1098a1e498cdf730b14a6d4f456431c09085148024bcec56931467462,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727397404353535359,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-l6ncl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3861149b-7c67-4d48-9d24-8fa08aefda61,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:182f24ac501b715adc06f080914c11407429e052bc7a726892761dd0a2d3a8e9,PodSandboxId:c0f5b32248925e239a327ed4b6dc2a3da7f10accded478a3ce22050a8fe332d8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727397404131630732,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7n244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9fac118-1b31-4cf3-bc21-a4536e45a511,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c88792788fc238aaae860e14a6c44c40020da3356d29223917fe2fb2e8901ac,PodSandboxId:74609d9fcf5f5f8d3b57d4290bf525ef816e716d1438ea25df07d7a697e2bb1a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727397392427504324,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10057dece9752ed428ddf4bfd465bb3d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:536c1c26f6d72525b81ce4c35ed530528a8cd001f4c530cea2e1d722325e76b3,PodSandboxId:de8c10edafaa7ba5a57a5150b492fa19b6a95a38b8f3da7e2385b723a1d4f907,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1727397392442731508,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a32cc8b63ea212ed38709daf6762cc1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f4d816eb-2272-4366-a27b-291b73f85922 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:51:51 ha-631834 crio[3693]: time="2024-09-27 00:51:51.289592625Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b6e60eca-de76-4354-889f-baae53da37a9 name=/runtime.v1.RuntimeService/Version
	Sep 27 00:51:51 ha-631834 crio[3693]: time="2024-09-27 00:51:51.289668984Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b6e60eca-de76-4354-889f-baae53da37a9 name=/runtime.v1.RuntimeService/Version
	Sep 27 00:51:51 ha-631834 crio[3693]: time="2024-09-27 00:51:51.290816009Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6625ac75-1502-4f36-bb0c-bf5a0d3119b0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:51:51 ha-631834 crio[3693]: time="2024-09-27 00:51:51.291269301Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727398311291198267,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6625ac75-1502-4f36-bb0c-bf5a0d3119b0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:51:51 ha-631834 crio[3693]: time="2024-09-27 00:51:51.291755350Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eab6fe2e-a7b3-4d2d-a2f9-9a40bc39b16a name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:51:51 ha-631834 crio[3693]: time="2024-09-27 00:51:51.291817240Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eab6fe2e-a7b3-4d2d-a2f9-9a40bc39b16a name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:51:51 ha-631834 crio[3693]: time="2024-09-27 00:51:51.292181737Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8988c8b2e89d4cae95f059fe90bd6419c77bda9b7da567d71120d5b37d44b904,PodSandboxId:a88387509d8c47d8e1cf51f7c2c85475030c31e45457ea6774067aa5358eb8d8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727398063496006064,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbafe551-2645-4016-83f6-1133824d926d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af2833aa86bec997a9eac660980344b8caf026dee1b491f539a9024dc35b3dd5,PodSandboxId:a365021f4c4409bc7ef02241b1e8353cacc226176a8374acf1566bd10a57b2a5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727398049910096759,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hczmj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 55e4dd58-9193-49ba-a2e8-1c6835898fb1,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73c2e59cd28da30c784255b37b22005602829501c488d381587497738b1a190d,PodSandboxId:3f5eaa7b790b56c09c6bde23dd28d501b5c9b167eb904198c68292514134fac4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727398049209723311,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71a28d11a5db44bbf2777b262efa1514,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14c982482268a0741c4ea4b43b359ddf56e9c7a8963d1d5b697eccb9977cce45,PodSandboxId:1e81330291c0345d01677bc0e6f129d1c95393e00adbe8a7670e5e5776255bad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727398048084828510,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afee14d1206143c4d719c111467c379b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdd819bab4c02d8f590578a99c49dc031ad0e16fdd269749d709465e158511ed,PodSandboxId:47f2ed579b1da0a34f85a2ce3790a54eb441e35afd874466f304415c3642bf22,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727398030162040236,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2c19ca79cb21fa0ff63b2f19f35644a,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b875ed8e00bedb5eb1902895c4b2572101bd8ed13c0334beee29c833bdb420f,PodSandboxId:851d241b7a3fa4b5d3ed7ef3daf1effcab2ef39c36598b48bc6d0cb59bb5d135,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727398016720565212,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-l6ncl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3861149b-7c67-4d48-9d24-8fa08aefda61,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:b8db6d253c02d0e9ccdb6f17e99687133896a05f908abbbb072860ad547cb0e6,PodSandboxId:dd1921da801ddbb1557b9e203c535f9fac5d58ef79d8eea5b663bd4542e7d76a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727398016721287875,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kg8kf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee98faac-e03c-427f-9a78-2cf06d2f85cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TC
P\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:993366a0cc03df59289c28caf8ac0f7a3eaf5ca3ee7f79410d82c5c962efc0b1,PodSandboxId:4fc98d18b24b94a2a3e434010b1aab0a65fe4769deaf52d2d7abbb40be6322ac,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727398016503886203,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7n244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9fac118-1b31-4cf3-bc21-a4536e45a511,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d81bbea7c9e39b41a55665bdaab4478d402c76bb5d2308fe0d1e63301b1dcd2e,PodSandboxId:a88387509d8c47d8e1cf51f7c2c85475030c31e45457ea6774067aa5358eb8d8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727398016297449851,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbafe551-2645-4016-83f6-1133824d926d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69083186c23c45c853d932a68dc6a9fb513bf9b26f0169046d51c75b57a58b96,PodSandboxId:2163ce3d56b93317faffe4240dd147a31820077f2a34e6bcda084759b0068fb9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727398016484380313,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-479dv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee318b64-2274-4106-93ed-9f62151107f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b7ffd9dfb6283a77a910b62e4c801f24fc7c0059c7d1b3db21ae86fdaf9b585,PodSandboxId:2a75a0cdf184e9400231dc662d856f40efaa229fdab3a876dc729499f539e15a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727398016431138280,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-631834,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 10057dece9752ed428ddf4bfd465bb3d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3608e4904bcf67c5669cc8dfae0c10b769d49c63cad46043995a67c94c29d108,PodSandboxId:3f5eaa7b790b56c09c6bde23dd28d501b5c9b167eb904198c68292514134fac4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727398016511793966,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-631834,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 71a28d11a5db44bbf2777b262efa1514,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e553da3278170117765827feaa6ada5203f508283bebb0adf9105b677a147fc,PodSandboxId:8b21be7811c3b0fe2ce57ec24aeaaa5eedfdc234f89c09b3c8f0343f20e238f9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727398016325051081,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a32cc8b63ea212e
d38709daf6762cc1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c930f7f8b324fb82c55bdec2706385f6ba3dc086cb93f92b31f33bed9ae08db,PodSandboxId:1e81330291c0345d01677bc0e6f129d1c95393e00adbe8a7670e5e5776255bad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727398016273461650,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afee14d1206143c4d719c111467c379b,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74dc20e31bc6d7c20e5d68ee7fa69cfe0328a93ccef047ea1ef82155869ad406,PodSandboxId:ebc71356fe8860c5eadadc4bfc35fe223c81b382b7fa4f7400dfdd4e30cca8e9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727397561974441361,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hczmj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 55e4dd58-9193-49ba-a2e8-1c6835898fb1,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c06ebd9099a79e7ccf81acb3dcdfa061f142b4657de196fa50e568e5b299930,PodSandboxId:8f236d02ca028f9009a4efcc28e0562a8b0e8ec154921e53c93e5a527823c39a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727397416531871339,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kg8kf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee98faac-e03c-427f-9a78-2cf06d2f85cf,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0d4e929a59caa5d6cdfb939587ec81dce00105e7b9350778204b299cf597427,PodSandboxId:2cb3143c36c8e5612e26df2355c120393a34014b84051ee13e5f0f641240ed61,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727397416548905292,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-479dv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee318b64-2274-4106-93ed-9f62151107f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:805b55d391308302ebc0884d741fd7ca86ffe2f6feed8bf7ab229f3729f34327,PodSandboxId:7e2d35a1098a1e498cdf730b14a6d4f456431c09085148024bcec56931467462,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727397404353535359,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-l6ncl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3861149b-7c67-4d48-9d24-8fa08aefda61,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:182f24ac501b715adc06f080914c11407429e052bc7a726892761dd0a2d3a8e9,PodSandboxId:c0f5b32248925e239a327ed4b6dc2a3da7f10accded478a3ce22050a8fe332d8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727397404131630732,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7n244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9fac118-1b31-4cf3-bc21-a4536e45a511,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c88792788fc238aaae860e14a6c44c40020da3356d29223917fe2fb2e8901ac,PodSandboxId:74609d9fcf5f5f8d3b57d4290bf525ef816e716d1438ea25df07d7a697e2bb1a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727397392427504324,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10057dece9752ed428ddf4bfd465bb3d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:536c1c26f6d72525b81ce4c35ed530528a8cd001f4c530cea2e1d722325e76b3,PodSandboxId:de8c10edafaa7ba5a57a5150b492fa19b6a95a38b8f3da7e2385b723a1d4f907,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1727397392442731508,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a32cc8b63ea212ed38709daf6762cc1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=eab6fe2e-a7b3-4d2d-a2f9-9a40bc39b16a name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:51:51 ha-631834 crio[3693]: time="2024-09-27 00:51:51.321921505Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=66f5b08c-e822-42d9-a666-b82a67e187c1 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 27 00:51:51 ha-631834 crio[3693]: time="2024-09-27 00:51:51.322350651Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:a365021f4c4409bc7ef02241b1e8353cacc226176a8374acf1566bd10a57b2a5,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-hczmj,Uid:55e4dd58-9193-49ba-a2e8-1c6835898fb1,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727398049739985127,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-hczmj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 55e4dd58-9193-49ba-a2e8-1c6835898fb1,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-27T00:39:18.015402395Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:47f2ed579b1da0a34f85a2ce3790a54eb441e35afd874466f304415c3642bf22,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-631834,Uid:e2c19ca79cb21fa0ff63b2f19f35644a,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1727398030063082117,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2c19ca79cb21fa0ff63b2f19f35644a,},Annotations:map[string]string{kubernetes.io/config.hash: e2c19ca79cb21fa0ff63b2f19f35644a,kubernetes.io/config.seen: 2024-09-27T00:46:50.275913244Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:dd1921da801ddbb1557b9e203c535f9fac5d58ef79d8eea5b663bd4542e7d76a,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-kg8kf,Uid:ee98faac-e03c-427f-9a78-2cf06d2f85cf,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727398016012680912,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-kg8kf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee98faac-e03c-427f-9a78-2cf06d2f85cf,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09
-27T00:36:55.959296032Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2163ce3d56b93317faffe4240dd147a31820077f2a34e6bcda084759b0068fb9,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-479dv,Uid:ee318b64-2274-4106-93ed-9f62151107f1,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727398015980355235,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-479dv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee318b64-2274-4106-93ed-9f62151107f1,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-27T00:36:55.971385863Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3f5eaa7b790b56c09c6bde23dd28d501b5c9b167eb904198c68292514134fac4,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-631834,Uid:71a28d11a5db44bbf2777b262efa1514,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727398015949677645,Labels:map[str
ing]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71a28d11a5db44bbf2777b262efa1514,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 71a28d11a5db44bbf2777b262efa1514,kubernetes.io/config.seen: 2024-09-27T00:36:38.456181833Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1e81330291c0345d01677bc0e6f129d1c95393e00adbe8a7670e5e5776255bad,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-631834,Uid:afee14d1206143c4d719c111467c379b,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727398015940079969,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afee14d1206143c4d719c111467c379b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/
kube-apiserver.advertise-address.endpoint: 192.168.39.4:8443,kubernetes.io/config.hash: afee14d1206143c4d719c111467c379b,kubernetes.io/config.seen: 2024-09-27T00:36:38.456180608Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4fc98d18b24b94a2a3e434010b1aab0a65fe4769deaf52d2d7abbb40be6322ac,Metadata:&PodSandboxMetadata{Name:kube-proxy-7n244,Uid:d9fac118-1b31-4cf3-bc21-a4536e45a511,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727398015935541968,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-7n244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9fac118-1b31-4cf3-bc21-a4536e45a511,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-27T00:36:43.473610313Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:851d241b7a3fa4b5d3ed7ef3daf1effcab2ef39c36598b48bc6d0cb59bb5d135,Metadata:&PodSandboxMetadat
a{Name:kindnet-l6ncl,Uid:3861149b-7c67-4d48-9d24-8fa08aefda61,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727398015921090056,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-l6ncl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3861149b-7c67-4d48-9d24-8fa08aefda61,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-27T00:36:43.462190063Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2a75a0cdf184e9400231dc662d856f40efaa229fdab3a876dc729499f539e15a,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-631834,Uid:10057dece9752ed428ddf4bfd465bb3d,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727398015885364440,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-631834,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: 10057dece9752ed428ddf4bfd465bb3d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 10057dece9752ed428ddf4bfd465bb3d,kubernetes.io/config.seen: 2024-09-27T00:36:38.456182891Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8b21be7811c3b0fe2ce57ec24aeaaa5eedfdc234f89c09b3c8f0343f20e238f9,Metadata:&PodSandboxMetadata{Name:etcd-ha-631834,Uid:2a32cc8b63ea212ed38709daf6762cc1,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727398015868856501,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a32cc8b63ea212ed38709daf6762cc1,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.4:2379,kubernetes.io/config.hash: 2a32cc8b63ea212ed38709daf6762cc1,kubernetes.io/config.seen: 2024-09-27T00:36:38.456177029Z,kubernetes.io/config.source: file,
},RuntimeHandler:,},&PodSandbox{Id:a88387509d8c47d8e1cf51f7c2c85475030c31e45457ea6774067aa5358eb8d8,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:dbafe551-2645-4016-83f6-1133824d926d,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727398015865283813,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbafe551-2645-4016-83f6-1133824d926d,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePul
lPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-27T00:36:55.969309352Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ebc71356fe8860c5eadadc4bfc35fe223c81b382b7fa4f7400dfdd4e30cca8e9,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-hczmj,Uid:55e4dd58-9193-49ba-a2e8-1c6835898fb1,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1727397558330820881,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-hczmj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 55e4dd58-9193-49ba-a2e8-1c6835898fb1,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-27T00:39:18.015402395Z,kubernetes.io/config.source: ap
i,},RuntimeHandler:,},&PodSandbox{Id:2cb3143c36c8e5612e26df2355c120393a34014b84051ee13e5f0f641240ed61,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-479dv,Uid:ee318b64-2274-4106-93ed-9f62151107f1,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1727397416284003471,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-479dv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee318b64-2274-4106-93ed-9f62151107f1,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-27T00:36:55.971385863Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8f236d02ca028f9009a4efcc28e0562a8b0e8ec154921e53c93e5a527823c39a,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-kg8kf,Uid:ee98faac-e03c-427f-9a78-2cf06d2f85cf,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1727397416265889136,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubern
etes.pod.name: coredns-7c65d6cfc9-kg8kf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee98faac-e03c-427f-9a78-2cf06d2f85cf,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-27T00:36:55.959296032Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7e2d35a1098a1e498cdf730b14a6d4f456431c09085148024bcec56931467462,Metadata:&PodSandboxMetadata{Name:kindnet-l6ncl,Uid:3861149b-7c67-4d48-9d24-8fa08aefda61,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1727397403804322011,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-l6ncl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3861149b-7c67-4d48-9d24-8fa08aefda61,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-27T00:36:43.462190063Z,kubernetes.io/config.source: api,},RuntimeH
andler:,},&PodSandbox{Id:c0f5b32248925e239a327ed4b6dc2a3da7f10accded478a3ce22050a8fe332d8,Metadata:&PodSandboxMetadata{Name:kube-proxy-7n244,Uid:d9fac118-1b31-4cf3-bc21-a4536e45a511,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1727397403803732849,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-7n244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9fac118-1b31-4cf3-bc21-a4536e45a511,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-27T00:36:43.473610313Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:de8c10edafaa7ba5a57a5150b492fa19b6a95a38b8f3da7e2385b723a1d4f907,Metadata:&PodSandboxMetadata{Name:etcd-ha-631834,Uid:2a32cc8b63ea212ed38709daf6762cc1,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1727397392159704302,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD
,io.kubernetes.pod.name: etcd-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a32cc8b63ea212ed38709daf6762cc1,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.4:2379,kubernetes.io/config.hash: 2a32cc8b63ea212ed38709daf6762cc1,kubernetes.io/config.seen: 2024-09-27T00:36:31.631709370Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:74609d9fcf5f5f8d3b57d4290bf525ef816e716d1438ea25df07d7a697e2bb1a,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-631834,Uid:10057dece9752ed428ddf4bfd465bb3d,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1727397392123638188,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10057dece9752ed428ddf4bfd465bb3d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 10057dece9752e
d428ddf4bfd465bb3d,kubernetes.io/config.seen: 2024-09-27T00:36:31.631712772Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=66f5b08c-e822-42d9-a666-b82a67e187c1 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 27 00:51:51 ha-631834 crio[3693]: time="2024-09-27 00:51:51.323651659Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9d9a3e99-2797-4ef6-be3b-8b3bbade008a name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:51:51 ha-631834 crio[3693]: time="2024-09-27 00:51:51.323709174Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9d9a3e99-2797-4ef6-be3b-8b3bbade008a name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:51:51 ha-631834 crio[3693]: time="2024-09-27 00:51:51.324425934Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8988c8b2e89d4cae95f059fe90bd6419c77bda9b7da567d71120d5b37d44b904,PodSandboxId:a88387509d8c47d8e1cf51f7c2c85475030c31e45457ea6774067aa5358eb8d8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727398063496006064,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbafe551-2645-4016-83f6-1133824d926d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af2833aa86bec997a9eac660980344b8caf026dee1b491f539a9024dc35b3dd5,PodSandboxId:a365021f4c4409bc7ef02241b1e8353cacc226176a8374acf1566bd10a57b2a5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727398049910096759,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hczmj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 55e4dd58-9193-49ba-a2e8-1c6835898fb1,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73c2e59cd28da30c784255b37b22005602829501c488d381587497738b1a190d,PodSandboxId:3f5eaa7b790b56c09c6bde23dd28d501b5c9b167eb904198c68292514134fac4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727398049209723311,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71a28d11a5db44bbf2777b262efa1514,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14c982482268a0741c4ea4b43b359ddf56e9c7a8963d1d5b697eccb9977cce45,PodSandboxId:1e81330291c0345d01677bc0e6f129d1c95393e00adbe8a7670e5e5776255bad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727398048084828510,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afee14d1206143c4d719c111467c379b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdd819bab4c02d8f590578a99c49dc031ad0e16fdd269749d709465e158511ed,PodSandboxId:47f2ed579b1da0a34f85a2ce3790a54eb441e35afd874466f304415c3642bf22,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727398030162040236,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2c19ca79cb21fa0ff63b2f19f35644a,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b875ed8e00bedb5eb1902895c4b2572101bd8ed13c0334beee29c833bdb420f,PodSandboxId:851d241b7a3fa4b5d3ed7ef3daf1effcab2ef39c36598b48bc6d0cb59bb5d135,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727398016720565212,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-l6ncl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3861149b-7c67-4d48-9d24-8fa08aefda61,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:b8db6d253c02d0e9ccdb6f17e99687133896a05f908abbbb072860ad547cb0e6,PodSandboxId:dd1921da801ddbb1557b9e203c535f9fac5d58ef79d8eea5b663bd4542e7d76a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727398016721287875,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kg8kf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee98faac-e03c-427f-9a78-2cf06d2f85cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TC
P\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:993366a0cc03df59289c28caf8ac0f7a3eaf5ca3ee7f79410d82c5c962efc0b1,PodSandboxId:4fc98d18b24b94a2a3e434010b1aab0a65fe4769deaf52d2d7abbb40be6322ac,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727398016503886203,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7n244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9fac118-1b31-4cf3-bc21-a4536e45a511,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d81bbea7c9e39b41a55665bdaab4478d402c76bb5d2308fe0d1e63301b1dcd2e,PodSandboxId:a88387509d8c47d8e1cf51f7c2c85475030c31e45457ea6774067aa5358eb8d8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727398016297449851,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbafe551-2645-4016-83f6-1133824d926d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69083186c23c45c853d932a68dc6a9fb513bf9b26f0169046d51c75b57a58b96,PodSandboxId:2163ce3d56b93317faffe4240dd147a31820077f2a34e6bcda084759b0068fb9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727398016484380313,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-479dv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee318b64-2274-4106-93ed-9f62151107f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b7ffd9dfb6283a77a910b62e4c801f24fc7c0059c7d1b3db21ae86fdaf9b585,PodSandboxId:2a75a0cdf184e9400231dc662d856f40efaa229fdab3a876dc729499f539e15a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727398016431138280,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-631834,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 10057dece9752ed428ddf4bfd465bb3d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3608e4904bcf67c5669cc8dfae0c10b769d49c63cad46043995a67c94c29d108,PodSandboxId:3f5eaa7b790b56c09c6bde23dd28d501b5c9b167eb904198c68292514134fac4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727398016511793966,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-631834,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 71a28d11a5db44bbf2777b262efa1514,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e553da3278170117765827feaa6ada5203f508283bebb0adf9105b677a147fc,PodSandboxId:8b21be7811c3b0fe2ce57ec24aeaaa5eedfdc234f89c09b3c8f0343f20e238f9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727398016325051081,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a32cc8b63ea212e
d38709daf6762cc1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c930f7f8b324fb82c55bdec2706385f6ba3dc086cb93f92b31f33bed9ae08db,PodSandboxId:1e81330291c0345d01677bc0e6f129d1c95393e00adbe8a7670e5e5776255bad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727398016273461650,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afee14d1206143c4d719c111467c379b,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74dc20e31bc6d7c20e5d68ee7fa69cfe0328a93ccef047ea1ef82155869ad406,PodSandboxId:ebc71356fe8860c5eadadc4bfc35fe223c81b382b7fa4f7400dfdd4e30cca8e9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727397561974441361,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hczmj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 55e4dd58-9193-49ba-a2e8-1c6835898fb1,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c06ebd9099a79e7ccf81acb3dcdfa061f142b4657de196fa50e568e5b299930,PodSandboxId:8f236d02ca028f9009a4efcc28e0562a8b0e8ec154921e53c93e5a527823c39a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727397416531871339,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kg8kf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee98faac-e03c-427f-9a78-2cf06d2f85cf,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0d4e929a59caa5d6cdfb939587ec81dce00105e7b9350778204b299cf597427,PodSandboxId:2cb3143c36c8e5612e26df2355c120393a34014b84051ee13e5f0f641240ed61,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727397416548905292,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-479dv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee318b64-2274-4106-93ed-9f62151107f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:805b55d391308302ebc0884d741fd7ca86ffe2f6feed8bf7ab229f3729f34327,PodSandboxId:7e2d35a1098a1e498cdf730b14a6d4f456431c09085148024bcec56931467462,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727397404353535359,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-l6ncl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3861149b-7c67-4d48-9d24-8fa08aefda61,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:182f24ac501b715adc06f080914c11407429e052bc7a726892761dd0a2d3a8e9,PodSandboxId:c0f5b32248925e239a327ed4b6dc2a3da7f10accded478a3ce22050a8fe332d8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727397404131630732,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7n244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9fac118-1b31-4cf3-bc21-a4536e45a511,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c88792788fc238aaae860e14a6c44c40020da3356d29223917fe2fb2e8901ac,PodSandboxId:74609d9fcf5f5f8d3b57d4290bf525ef816e716d1438ea25df07d7a697e2bb1a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727397392427504324,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10057dece9752ed428ddf4bfd465bb3d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:536c1c26f6d72525b81ce4c35ed530528a8cd001f4c530cea2e1d722325e76b3,PodSandboxId:de8c10edafaa7ba5a57a5150b492fa19b6a95a38b8f3da7e2385b723a1d4f907,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1727397392442731508,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a32cc8b63ea212ed38709daf6762cc1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9d9a3e99-2797-4ef6-be3b-8b3bbade008a name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:51:51 ha-631834 crio[3693]: time="2024-09-27 00:51:51.342136006Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cdd977ec-cd76-431b-964f-21d5b1e990b7 name=/runtime.v1.RuntimeService/Version
	Sep 27 00:51:51 ha-631834 crio[3693]: time="2024-09-27 00:51:51.342267266Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cdd977ec-cd76-431b-964f-21d5b1e990b7 name=/runtime.v1.RuntimeService/Version
	Sep 27 00:51:51 ha-631834 crio[3693]: time="2024-09-27 00:51:51.343380804Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=12f8df20-c723-4015-a7ef-6c411cdab925 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:51:51 ha-631834 crio[3693]: time="2024-09-27 00:51:51.343823520Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727398311343802179,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=12f8df20-c723-4015-a7ef-6c411cdab925 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:51:51 ha-631834 crio[3693]: time="2024-09-27 00:51:51.345997217Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a2709e1d-e1d8-4256-81e5-e5da4726ae26 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:51:51 ha-631834 crio[3693]: time="2024-09-27 00:51:51.346055907Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a2709e1d-e1d8-4256-81e5-e5da4726ae26 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:51:51 ha-631834 crio[3693]: time="2024-09-27 00:51:51.346566122Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8988c8b2e89d4cae95f059fe90bd6419c77bda9b7da567d71120d5b37d44b904,PodSandboxId:a88387509d8c47d8e1cf51f7c2c85475030c31e45457ea6774067aa5358eb8d8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727398063496006064,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbafe551-2645-4016-83f6-1133824d926d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af2833aa86bec997a9eac660980344b8caf026dee1b491f539a9024dc35b3dd5,PodSandboxId:a365021f4c4409bc7ef02241b1e8353cacc226176a8374acf1566bd10a57b2a5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727398049910096759,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hczmj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 55e4dd58-9193-49ba-a2e8-1c6835898fb1,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73c2e59cd28da30c784255b37b22005602829501c488d381587497738b1a190d,PodSandboxId:3f5eaa7b790b56c09c6bde23dd28d501b5c9b167eb904198c68292514134fac4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727398049209723311,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71a28d11a5db44bbf2777b262efa1514,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14c982482268a0741c4ea4b43b359ddf56e9c7a8963d1d5b697eccb9977cce45,PodSandboxId:1e81330291c0345d01677bc0e6f129d1c95393e00adbe8a7670e5e5776255bad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727398048084828510,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afee14d1206143c4d719c111467c379b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdd819bab4c02d8f590578a99c49dc031ad0e16fdd269749d709465e158511ed,PodSandboxId:47f2ed579b1da0a34f85a2ce3790a54eb441e35afd874466f304415c3642bf22,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727398030162040236,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2c19ca79cb21fa0ff63b2f19f35644a,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b875ed8e00bedb5eb1902895c4b2572101bd8ed13c0334beee29c833bdb420f,PodSandboxId:851d241b7a3fa4b5d3ed7ef3daf1effcab2ef39c36598b48bc6d0cb59bb5d135,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727398016720565212,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-l6ncl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3861149b-7c67-4d48-9d24-8fa08aefda61,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:b8db6d253c02d0e9ccdb6f17e99687133896a05f908abbbb072860ad547cb0e6,PodSandboxId:dd1921da801ddbb1557b9e203c535f9fac5d58ef79d8eea5b663bd4542e7d76a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727398016721287875,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kg8kf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee98faac-e03c-427f-9a78-2cf06d2f85cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TC
P\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:993366a0cc03df59289c28caf8ac0f7a3eaf5ca3ee7f79410d82c5c962efc0b1,PodSandboxId:4fc98d18b24b94a2a3e434010b1aab0a65fe4769deaf52d2d7abbb40be6322ac,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727398016503886203,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7n244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9fac118-1b31-4cf3-bc21-a4536e45a511,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d81bbea7c9e39b41a55665bdaab4478d402c76bb5d2308fe0d1e63301b1dcd2e,PodSandboxId:a88387509d8c47d8e1cf51f7c2c85475030c31e45457ea6774067aa5358eb8d8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727398016297449851,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbafe551-2645-4016-83f6-1133824d926d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69083186c23c45c853d932a68dc6a9fb513bf9b26f0169046d51c75b57a58b96,PodSandboxId:2163ce3d56b93317faffe4240dd147a31820077f2a34e6bcda084759b0068fb9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727398016484380313,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-479dv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee318b64-2274-4106-93ed-9f62151107f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b7ffd9dfb6283a77a910b62e4c801f24fc7c0059c7d1b3db21ae86fdaf9b585,PodSandboxId:2a75a0cdf184e9400231dc662d856f40efaa229fdab3a876dc729499f539e15a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727398016431138280,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-631834,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 10057dece9752ed428ddf4bfd465bb3d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3608e4904bcf67c5669cc8dfae0c10b769d49c63cad46043995a67c94c29d108,PodSandboxId:3f5eaa7b790b56c09c6bde23dd28d501b5c9b167eb904198c68292514134fac4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727398016511793966,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-631834,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 71a28d11a5db44bbf2777b262efa1514,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e553da3278170117765827feaa6ada5203f508283bebb0adf9105b677a147fc,PodSandboxId:8b21be7811c3b0fe2ce57ec24aeaaa5eedfdc234f89c09b3c8f0343f20e238f9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727398016325051081,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a32cc8b63ea212e
d38709daf6762cc1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c930f7f8b324fb82c55bdec2706385f6ba3dc086cb93f92b31f33bed9ae08db,PodSandboxId:1e81330291c0345d01677bc0e6f129d1c95393e00adbe8a7670e5e5776255bad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727398016273461650,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afee14d1206143c4d719c111467c379b,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74dc20e31bc6d7c20e5d68ee7fa69cfe0328a93ccef047ea1ef82155869ad406,PodSandboxId:ebc71356fe8860c5eadadc4bfc35fe223c81b382b7fa4f7400dfdd4e30cca8e9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727397561974441361,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hczmj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 55e4dd58-9193-49ba-a2e8-1c6835898fb1,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c06ebd9099a79e7ccf81acb3dcdfa061f142b4657de196fa50e568e5b299930,PodSandboxId:8f236d02ca028f9009a4efcc28e0562a8b0e8ec154921e53c93e5a527823c39a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727397416531871339,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kg8kf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee98faac-e03c-427f-9a78-2cf06d2f85cf,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0d4e929a59caa5d6cdfb939587ec81dce00105e7b9350778204b299cf597427,PodSandboxId:2cb3143c36c8e5612e26df2355c120393a34014b84051ee13e5f0f641240ed61,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727397416548905292,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-479dv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee318b64-2274-4106-93ed-9f62151107f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:805b55d391308302ebc0884d741fd7ca86ffe2f6feed8bf7ab229f3729f34327,PodSandboxId:7e2d35a1098a1e498cdf730b14a6d4f456431c09085148024bcec56931467462,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727397404353535359,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-l6ncl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3861149b-7c67-4d48-9d24-8fa08aefda61,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:182f24ac501b715adc06f080914c11407429e052bc7a726892761dd0a2d3a8e9,PodSandboxId:c0f5b32248925e239a327ed4b6dc2a3da7f10accded478a3ce22050a8fe332d8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727397404131630732,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7n244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9fac118-1b31-4cf3-bc21-a4536e45a511,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c88792788fc238aaae860e14a6c44c40020da3356d29223917fe2fb2e8901ac,PodSandboxId:74609d9fcf5f5f8d3b57d4290bf525ef816e716d1438ea25df07d7a697e2bb1a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727397392427504324,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10057dece9752ed428ddf4bfd465bb3d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:536c1c26f6d72525b81ce4c35ed530528a8cd001f4c530cea2e1d722325e76b3,PodSandboxId:de8c10edafaa7ba5a57a5150b492fa19b6a95a38b8f3da7e2385b723a1d4f907,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1727397392442731508,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-631834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a32cc8b63ea212ed38709daf6762cc1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a2709e1d-e1d8-4256-81e5-e5da4726ae26 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8988c8b2e89d4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       4                   a88387509d8c4       storage-provisioner
	af2833aa86bec       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   a365021f4c440       busybox-7dff88458-hczmj
	73c2e59cd28da       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      4 minutes ago       Running             kube-controller-manager   2                   3f5eaa7b790b5       kube-controller-manager-ha-631834
	14c982482268a       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      4 minutes ago       Running             kube-apiserver            3                   1e81330291c03       kube-apiserver-ha-631834
	bdd819bab4c02       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      4 minutes ago       Running             kube-vip                  0                   47f2ed579b1da       kube-vip-ha-631834
	b8db6d253c02d       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      4 minutes ago       Running             coredns                   1                   dd1921da801dd       coredns-7c65d6cfc9-kg8kf
	9b875ed8e00be       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      4 minutes ago       Running             kindnet-cni               1                   851d241b7a3fa       kindnet-l6ncl
	3608e4904bcf6       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      4 minutes ago       Exited              kube-controller-manager   1                   3f5eaa7b790b5       kube-controller-manager-ha-631834
	993366a0cc03d       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      4 minutes ago       Running             kube-proxy                1                   4fc98d18b24b9       kube-proxy-7n244
	69083186c23c4       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      4 minutes ago       Running             coredns                   1                   2163ce3d56b93       coredns-7c65d6cfc9-479dv
	8b7ffd9dfb628       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      4 minutes ago       Running             kube-scheduler            1                   2a75a0cdf184e       kube-scheduler-ha-631834
	1e553da327817       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      4 minutes ago       Running             etcd                      1                   8b21be7811c3b       etcd-ha-631834
	d81bbea7c9e39       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Exited              storage-provisioner       3                   a88387509d8c4       storage-provisioner
	4c930f7f8b324       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      4 minutes ago       Exited              kube-apiserver            2                   1e81330291c03       kube-apiserver-ha-631834
	74dc20e31bc6d       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   12 minutes ago      Exited              busybox                   0                   ebc71356fe886       busybox-7dff88458-hczmj
	f0d4e929a59ca       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      14 minutes ago      Exited              coredns                   0                   2cb3143c36c8e       coredns-7c65d6cfc9-479dv
	3c06ebd9099a7       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      14 minutes ago      Exited              coredns                   0                   8f236d02ca028       coredns-7c65d6cfc9-kg8kf
	805b55d391308       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      15 minutes ago      Exited              kindnet-cni               0                   7e2d35a1098a1       kindnet-l6ncl
	182f24ac501b7       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      15 minutes ago      Exited              kube-proxy                0                   c0f5b32248925       kube-proxy-7n244
	536c1c26f6d72       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      15 minutes ago      Exited              etcd                      0                   de8c10edafaa7       etcd-ha-631834
	5c88792788fc2       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      15 minutes ago      Exited              kube-scheduler            0                   74609d9fcf5f5       kube-scheduler-ha-631834
	
	
	==> coredns [3c06ebd9099a79e7ccf81acb3dcdfa061f142b4657de196fa50e568e5b299930] <==
	[INFO] 10.244.0.4:46433 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001871874s
	[INFO] 10.244.0.4:34697 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000054557s
	[INFO] 10.244.1.2:54898 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00014886s
	[INFO] 10.244.2.2:34064 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000136896s
	[INFO] 10.244.0.4:38416 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000149012s
	[INFO] 10.244.0.4:40833 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00014405s
	[INFO] 10.244.0.4:44560 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000077158s
	[INFO] 10.244.0.4:46143 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000171018s
	[INFO] 10.244.1.2:56595 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000249758s
	[INFO] 10.244.1.2:34731 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000198874s
	[INFO] 10.244.1.2:47614 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000132758s
	[INFO] 10.244.1.2:36248 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00015406s
	[INFO] 10.244.2.2:34744 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136863s
	[INFO] 10.244.2.2:34972 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000094616s
	[INFO] 10.244.2.2:52746 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000078955s
	[INFO] 10.244.0.4:39419 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113274s
	[INFO] 10.244.0.4:59554 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000106105s
	[INFO] 10.244.0.4:39476 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000054775s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1734&timeout=9m5s&timeoutSeconds=545&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1733&timeout=5m16s&timeoutSeconds=316&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=1734": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=1734": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1771&timeout=7m36s&timeoutSeconds=456&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [69083186c23c45c853d932a68dc6a9fb513bf9b26f0169046d51c75b57a58b96] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1826072790]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Sep-2024 00:47:04.959) (total time: 10001ms):
	Trace[1826072790]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:47:14.960)
	Trace[1826072790]: [10.001502591s] [10.001502591s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:48598->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:48598->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [b8db6d253c02d0e9ccdb6f17e99687133896a05f908abbbb072860ad547cb0e6] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:59836->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1924818372]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Sep-2024 00:47:08.357) (total time: 10538ms):
	Trace[1924818372]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:59836->10.96.0.1:443: read: connection reset by peer 10538ms (00:47:18.895)
	Trace[1924818372]: [10.538796997s] [10.538796997s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:59836->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:59862->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:59862->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:59856->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1196904685]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Sep-2024 00:47:11.589) (total time: 10049ms):
	Trace[1196904685]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:59856->10.96.0.1:443: read: connection reset by peer 10049ms (00:47:21.638)
	Trace[1196904685]: [10.049956574s] [10.049956574s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:59856->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [f0d4e929a59caa5d6cdfb939587ec81dce00105e7b9350778204b299cf597427] <==
	[INFO] 10.244.1.2:49238 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002742907s
	[INFO] 10.244.1.2:42211 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125195s
	[INFO] 10.244.2.2:33655 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000213093s
	[INFO] 10.244.2.2:58995 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00171984s
	[INFO] 10.244.2.2:39964 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000149879s
	[INFO] 10.244.2.2:60456 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000227691s
	[INFO] 10.244.0.4:44954 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000086981s
	[INFO] 10.244.0.4:47547 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000166142s
	[INFO] 10.244.0.4:51196 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000214916s
	[INFO] 10.244.0.4:52871 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001284904s
	[INFO] 10.244.0.4:55577 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000216348s
	[INFO] 10.244.0.4:39280 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00003939s
	[INFO] 10.244.1.2:55855 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133643s
	[INFO] 10.244.1.2:60581 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000156682s
	[INFO] 10.244.1.2:47815 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000931s
	[INFO] 10.244.2.2:51419 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000149958s
	[INFO] 10.244.2.2:54004 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000114296s
	[INFO] 10.244.2.2:50685 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000087762s
	[INFO] 10.244.2.2:42257 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000189679s
	[INFO] 10.244.0.4:51433 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00015471s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1734&timeout=9m9s&timeoutSeconds=549&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1777&timeout=9m46s&timeoutSeconds=586&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1777&timeout=7m35s&timeoutSeconds=455&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	
	
	==> describe nodes <==
	Name:               ha-631834
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-631834
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=ha-631834
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_27T00_36_39_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 00:36:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-631834
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 00:51:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 00:47:41 +0000   Fri, 27 Sep 2024 00:36:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 00:47:41 +0000   Fri, 27 Sep 2024 00:36:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 00:47:41 +0000   Fri, 27 Sep 2024 00:36:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 00:47:41 +0000   Fri, 27 Sep 2024 00:36:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.4
	  Hostname:    ha-631834
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c835097a3f3f47119274822a90643a61
	  System UUID:                c835097a-3f3f-4711-9274-822a90643a61
	  Boot ID:                    773a1f71-cccf-4b35-8274-d80167988c3a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-hczmj              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-479dv             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-7c65d6cfc9-kg8kf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-ha-631834                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-l6ncl                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-631834             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-631834    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-7n244                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-631834             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-631834                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m16s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m7s                   kube-proxy       
	  Normal   Starting                 15m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  15m                    kubelet          Node ha-631834 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     15m                    kubelet          Node ha-631834 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    15m                    kubelet          Node ha-631834 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 15m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           15m                    node-controller  Node ha-631834 event: Registered Node ha-631834 in Controller
	  Normal   NodeReady                14m                    kubelet          Node ha-631834 status is now: NodeReady
	  Normal   RegisteredNode           14m                    node-controller  Node ha-631834 event: Registered Node ha-631834 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-631834 event: Registered Node ha-631834 in Controller
	  Warning  ContainerGCFailed        5m13s (x2 over 6m13s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             4m59s (x3 over 5m48s)  kubelet          Node ha-631834 status is now: NodeNotReady
	  Normal   RegisteredNode           4m14s                  node-controller  Node ha-631834 event: Registered Node ha-631834 in Controller
	  Normal   RegisteredNode           4m11s                  node-controller  Node ha-631834 event: Registered Node ha-631834 in Controller
	  Normal   RegisteredNode           3m17s                  node-controller  Node ha-631834 event: Registered Node ha-631834 in Controller
	
	
	Name:               ha-631834-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-631834-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=ha-631834
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_27T00_37_38_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 00:37:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-631834-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 00:51:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 00:48:21 +0000   Fri, 27 Sep 2024 00:47:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 00:48:21 +0000   Fri, 27 Sep 2024 00:47:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 00:48:21 +0000   Fri, 27 Sep 2024 00:47:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 00:48:21 +0000   Fri, 27 Sep 2024 00:47:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.184
	  Hostname:    ha-631834-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 949992430050476bb475912d3f8b70cc
	  System UUID:                94999243-0050-476b-b475-912d3f8b70cc
	  Boot ID:                    aab361d9-0788-4a7f-b62d-36b5931840d6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-bkws6                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ha-631834-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-x7kr9                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-631834-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-631834-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-x2hvh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-631834-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-631834-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m9s                   kube-proxy       
	  Normal  Starting                 14m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-631834-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-631834-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node ha-631834-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                    node-controller  Node ha-631834-m02 event: Registered Node ha-631834-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-631834-m02 event: Registered Node ha-631834-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-631834-m02 event: Registered Node ha-631834-m02 in Controller
	  Normal  NodeNotReady             10m                    node-controller  Node ha-631834-m02 status is now: NodeNotReady
	  Normal  Starting                 4m39s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m39s (x8 over 4m39s)  kubelet          Node ha-631834-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m39s (x8 over 4m39s)  kubelet          Node ha-631834-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m39s (x7 over 4m39s)  kubelet          Node ha-631834-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m39s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m14s                  node-controller  Node ha-631834-m02 event: Registered Node ha-631834-m02 in Controller
	  Normal  RegisteredNode           4m11s                  node-controller  Node ha-631834-m02 event: Registered Node ha-631834-m02 in Controller
	  Normal  RegisteredNode           3m17s                  node-controller  Node ha-631834-m02 event: Registered Node ha-631834-m02 in Controller
	
	
	Name:               ha-631834-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-631834-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=ha-631834
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_27T00_39_55_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 00:39:55 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-631834-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 00:49:24 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 27 Sep 2024 00:49:04 +0000   Fri, 27 Sep 2024 00:50:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 27 Sep 2024 00:49:04 +0000   Fri, 27 Sep 2024 00:50:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 27 Sep 2024 00:49:04 +0000   Fri, 27 Sep 2024 00:50:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 27 Sep 2024 00:49:04 +0000   Fri, 27 Sep 2024 00:50:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.79
	  Hostname:    ha-631834-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7d5a4987d2674227bf93c72f5a77697a
	  System UUID:                7d5a4987-d267-4227-bf93-c72f5a77697a
	  Boot ID:                    b010a523-bced-4265-aec1-6afa6f563dda
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-x4jxj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kindnet-667b4              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-proxy-klfbb           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 11m                    kube-proxy       
	  Normal   Starting                 2m44s                  kube-proxy       
	  Normal   NodeAllocatableEnforced  11m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  11m (x2 over 11m)      kubelet          Node ha-631834-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x2 over 11m)      kubelet          Node ha-631834-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x2 over 11m)      kubelet          Node ha-631834-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                    node-controller  Node ha-631834-m04 event: Registered Node ha-631834-m04 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-631834-m04 event: Registered Node ha-631834-m04 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-631834-m04 event: Registered Node ha-631834-m04 in Controller
	  Normal   NodeReady                11m                    kubelet          Node ha-631834-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m14s                  node-controller  Node ha-631834-m04 event: Registered Node ha-631834-m04 in Controller
	  Normal   RegisteredNode           4m11s                  node-controller  Node ha-631834-m04 event: Registered Node ha-631834-m04 in Controller
	  Normal   NodeNotReady             3m34s                  node-controller  Node ha-631834-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           3m17s                  node-controller  Node ha-631834-m04 event: Registered Node ha-631834-m04 in Controller
	  Normal   Starting                 2m48s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m47s (x2 over 2m47s)  kubelet          Node ha-631834-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m47s (x2 over 2m47s)  kubelet          Node ha-631834-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m47s (x2 over 2m47s)  kubelet          Node ha-631834-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m47s                  kubelet          Node ha-631834-m04 has been rebooted, boot id: b010a523-bced-4265-aec1-6afa6f563dda
	  Normal   NodeReady                2m47s                  kubelet          Node ha-631834-m04 status is now: NodeReady
	  Normal   NodeNotReady             106s                   node-controller  Node ha-631834-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.987708] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.063056] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056033] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.197880] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.118226] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.294623] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +3.981056] systemd-fstab-generator[745]: Ignoring "noauto" option for root device
	[  +4.053805] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +0.059938] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.871905] systemd-fstab-generator[1301]: Ignoring "noauto" option for root device
	[  +0.091402] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.727187] kauditd_printk_skb: 18 callbacks suppressed
	[ +12.324064] kauditd_printk_skb: 41 callbacks suppressed
	[Sep27 00:37] kauditd_printk_skb: 24 callbacks suppressed
	[Sep27 00:43] kauditd_printk_skb: 1 callbacks suppressed
	[Sep27 00:46] systemd-fstab-generator[3618]: Ignoring "noauto" option for root device
	[  +0.155896] systemd-fstab-generator[3630]: Ignoring "noauto" option for root device
	[  +0.190146] systemd-fstab-generator[3644]: Ignoring "noauto" option for root device
	[  +0.172400] systemd-fstab-generator[3656]: Ignoring "noauto" option for root device
	[  +0.284492] systemd-fstab-generator[3684]: Ignoring "noauto" option for root device
	[  +1.527421] systemd-fstab-generator[3781]: Ignoring "noauto" option for root device
	[  +5.564005] kauditd_printk_skb: 122 callbacks suppressed
	[Sep27 00:47] kauditd_printk_skb: 85 callbacks suppressed
	[ +34.581023] kauditd_printk_skb: 10 callbacks suppressed
	
	
	==> etcd [1e553da3278170117765827feaa6ada5203f508283bebb0adf9105b677a147fc] <==
	{"level":"info","ts":"2024-09-27T00:48:28.236701Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"7ab0973fa604e492","remote-peer-id":"ed4a1d228ea3c582"}
	{"level":"info","ts":"2024-09-27T00:48:28.241068Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"7ab0973fa604e492","remote-peer-id":"ed4a1d228ea3c582"}
	{"level":"warn","ts":"2024-09-27T00:48:29.704199Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.334767ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-ha-631834-m03\" ","response":"range_response_count:1 size:6062"}
	{"level":"info","ts":"2024-09-27T00:48:29.704475Z","caller":"traceutil/trace.go:171","msg":"trace[911626929] range","detail":"{range_begin:/registry/pods/kube-system/etcd-ha-631834-m03; range_end:; response_count:1; response_revision:2277; }","duration":"112.568216ms","start":"2024-09-27T00:48:29.591843Z","end":"2024-09-27T00:48:29.704412Z","steps":["trace[911626929] 'range keys from in-memory index tree'  (duration: 111.127663ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T00:49:17.205303Z","caller":"embed/config_logging.go:170","msg":"rejected connection on client endpoint","remote-addr":"192.168.39.92:57596","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-09-27T00:49:17.235515Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7ab0973fa604e492 switched to configuration voters=(864421279240168402 8840732368152355986)"}
	{"level":"warn","ts":"2024-09-27T00:49:17.239882Z","caller":"embed/config_logging.go:170","msg":"rejected connection on client endpoint","remote-addr":"192.168.39.92:57626","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-09-27T00:49:17.239832Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"6b117bdc86acb526","local-member-id":"7ab0973fa604e492","removed-remote-peer-id":"ed4a1d228ea3c582","removed-remote-peer-urls":["https://192.168.39.92:2380"]}
	{"level":"info","ts":"2024-09-27T00:49:17.240136Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"ed4a1d228ea3c582"}
	{"level":"warn","ts":"2024-09-27T00:49:17.240302Z","caller":"etcdserver/server.go:987","msg":"rejected Raft message from removed member","local-member-id":"7ab0973fa604e492","removed-member-id":"ed4a1d228ea3c582"}
	{"level":"warn","ts":"2024-09-27T00:49:17.240496Z","caller":"rafthttp/peer.go:180","msg":"failed to process Raft message","error":"cannot process message from removed member"}
	{"level":"warn","ts":"2024-09-27T00:49:17.241170Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"ed4a1d228ea3c582"}
	{"level":"info","ts":"2024-09-27T00:49:17.241340Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"ed4a1d228ea3c582"}
	{"level":"warn","ts":"2024-09-27T00:49:17.252741Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"ed4a1d228ea3c582"}
	{"level":"info","ts":"2024-09-27T00:49:17.252904Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"ed4a1d228ea3c582"}
	{"level":"info","ts":"2024-09-27T00:49:17.253302Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"7ab0973fa604e492","remote-peer-id":"ed4a1d228ea3c582"}
	{"level":"warn","ts":"2024-09-27T00:49:17.253689Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"7ab0973fa604e492","remote-peer-id":"ed4a1d228ea3c582","error":"context canceled"}
	{"level":"warn","ts":"2024-09-27T00:49:17.253896Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"ed4a1d228ea3c582","error":"failed to read ed4a1d228ea3c582 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-09-27T00:49:17.254021Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"7ab0973fa604e492","remote-peer-id":"ed4a1d228ea3c582"}
	{"level":"warn","ts":"2024-09-27T00:49:17.254396Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"7ab0973fa604e492","remote-peer-id":"ed4a1d228ea3c582","error":"context canceled"}
	{"level":"info","ts":"2024-09-27T00:49:17.254500Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"7ab0973fa604e492","remote-peer-id":"ed4a1d228ea3c582"}
	{"level":"info","ts":"2024-09-27T00:49:17.254537Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"ed4a1d228ea3c582"}
	{"level":"info","ts":"2024-09-27T00:49:17.254634Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"7ab0973fa604e492","removed-remote-peer-id":"ed4a1d228ea3c582"}
	{"level":"warn","ts":"2024-09-27T00:49:17.266714Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"7ab0973fa604e492","remote-peer-id-stream-handler":"7ab0973fa604e492","remote-peer-id-from":"ed4a1d228ea3c582"}
	{"level":"warn","ts":"2024-09-27T00:49:17.270433Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"7ab0973fa604e492","remote-peer-id-stream-handler":"7ab0973fa604e492","remote-peer-id-from":"ed4a1d228ea3c582"}
	
	
	==> etcd [536c1c26f6d72525b81ce4c35ed530528a8cd001f4c530cea2e1d722325e76b3] <==
	{"level":"warn","ts":"2024-09-27T00:45:16.594070Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-27T00:45:15.748167Z","time spent":"845.898478ms","remote":"127.0.0.1:48708","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":0,"response size":0,"request content":"key:\"/registry/replicasets/\" range_end:\"/registry/replicasets0\" limit:10000 "}
	2024/09/27 00:45:16 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-27T00:45:16.745410Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":16470387526003157705,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-27T00:45:16.849695Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-27T00:45:16.849752Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-27T00:45:16.849824Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"7ab0973fa604e492","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-27T00:45:16.850082Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"bff0a92d56623d2"}
	{"level":"info","ts":"2024-09-27T00:45:16.850151Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"bff0a92d56623d2"}
	{"level":"info","ts":"2024-09-27T00:45:16.850193Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"bff0a92d56623d2"}
	{"level":"info","ts":"2024-09-27T00:45:16.850482Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2"}
	{"level":"info","ts":"2024-09-27T00:45:16.850570Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2"}
	{"level":"info","ts":"2024-09-27T00:45:16.850648Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"7ab0973fa604e492","remote-peer-id":"bff0a92d56623d2"}
	{"level":"info","ts":"2024-09-27T00:45:16.850682Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"bff0a92d56623d2"}
	{"level":"info","ts":"2024-09-27T00:45:16.850690Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"ed4a1d228ea3c582"}
	{"level":"info","ts":"2024-09-27T00:45:16.850703Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"ed4a1d228ea3c582"}
	{"level":"info","ts":"2024-09-27T00:45:16.850738Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"ed4a1d228ea3c582"}
	{"level":"info","ts":"2024-09-27T00:45:16.850797Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"7ab0973fa604e492","remote-peer-id":"ed4a1d228ea3c582"}
	{"level":"info","ts":"2024-09-27T00:45:16.850823Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"7ab0973fa604e492","remote-peer-id":"ed4a1d228ea3c582"}
	{"level":"info","ts":"2024-09-27T00:45:16.850874Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"7ab0973fa604e492","remote-peer-id":"ed4a1d228ea3c582"}
	{"level":"info","ts":"2024-09-27T00:45:16.850906Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"ed4a1d228ea3c582"}
	{"level":"info","ts":"2024-09-27T00:45:16.854752Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.4:2380"}
	{"level":"warn","ts":"2024-09-27T00:45:16.854779Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"9.105363712s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-09-27T00:45:16.854869Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.4:2380"}
	{"level":"info","ts":"2024-09-27T00:45:16.854899Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-631834","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.4:2380"],"advertise-client-urls":["https://192.168.39.4:2379"]}
	{"level":"info","ts":"2024-09-27T00:45:16.854883Z","caller":"traceutil/trace.go:171","msg":"trace[1881335262] range","detail":"{range_begin:; range_end:; }","duration":"9.105483759s","start":"2024-09-27T00:45:07.749389Z","end":"2024-09-27T00:45:16.854873Z","steps":["trace[1881335262] 'agreement among raft nodes before linearized reading'  (duration: 9.105361847s)"],"step_count":1}
	
	
	==> kernel <==
	 00:51:51 up 15 min,  0 users,  load average: 0.69, 0.73, 0.44
	Linux ha-631834 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [805b55d391308302ebc0884d741fd7ca86ffe2f6feed8bf7ab229f3729f34327] <==
	I0927 00:44:45.594701       1 main.go:322] Node ha-631834-m04 has CIDR [10.244.3.0/24] 
	I0927 00:44:55.593321       1 main.go:295] Handling node with IPs: map[192.168.39.79:{}]
	I0927 00:44:55.593388       1 main.go:322] Node ha-631834-m04 has CIDR [10.244.3.0/24] 
	I0927 00:44:55.593533       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0927 00:44:55.593592       1 main.go:299] handling current node
	I0927 00:44:55.593627       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0927 00:44:55.593632       1 main.go:322] Node ha-631834-m02 has CIDR [10.244.1.0/24] 
	I0927 00:44:55.593679       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0927 00:44:55.593719       1 main.go:322] Node ha-631834-m03 has CIDR [10.244.2.0/24] 
	I0927 00:45:05.598322       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0927 00:45:05.598549       1 main.go:299] handling current node
	I0927 00:45:05.598612       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0927 00:45:05.598636       1 main.go:322] Node ha-631834-m02 has CIDR [10.244.1.0/24] 
	I0927 00:45:05.598881       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0927 00:45:05.598928       1 main.go:322] Node ha-631834-m03 has CIDR [10.244.2.0/24] 
	I0927 00:45:05.598999       1 main.go:295] Handling node with IPs: map[192.168.39.79:{}]
	I0927 00:45:05.599018       1 main.go:322] Node ha-631834-m04 has CIDR [10.244.3.0/24] 
	I0927 00:45:15.593334       1 main.go:295] Handling node with IPs: map[192.168.39.79:{}]
	I0927 00:45:15.593398       1 main.go:322] Node ha-631834-m04 has CIDR [10.244.3.0/24] 
	I0927 00:45:15.593609       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0927 00:45:15.593635       1 main.go:299] handling current node
	I0927 00:45:15.593666       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0927 00:45:15.593675       1 main.go:322] Node ha-631834-m02 has CIDR [10.244.1.0/24] 
	I0927 00:45:15.593761       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0927 00:45:15.593796       1 main.go:322] Node ha-631834-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [9b875ed8e00bedb5eb1902895c4b2572101bd8ed13c0334beee29c833bdb420f] <==
	I0927 00:51:07.822076       1 main.go:322] Node ha-631834-m04 has CIDR [10.244.3.0/24] 
	I0927 00:51:17.824625       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0927 00:51:17.824776       1 main.go:299] handling current node
	I0927 00:51:17.824823       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0927 00:51:17.824843       1 main.go:322] Node ha-631834-m02 has CIDR [10.244.1.0/24] 
	I0927 00:51:17.825052       1 main.go:295] Handling node with IPs: map[192.168.39.79:{}]
	I0927 00:51:17.825082       1 main.go:322] Node ha-631834-m04 has CIDR [10.244.3.0/24] 
	I0927 00:51:27.826353       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0927 00:51:27.826524       1 main.go:299] handling current node
	I0927 00:51:27.826560       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0927 00:51:27.826579       1 main.go:322] Node ha-631834-m02 has CIDR [10.244.1.0/24] 
	I0927 00:51:27.826743       1 main.go:295] Handling node with IPs: map[192.168.39.79:{}]
	I0927 00:51:27.826866       1 main.go:322] Node ha-631834-m04 has CIDR [10.244.3.0/24] 
	I0927 00:51:37.827426       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0927 00:51:37.827470       1 main.go:322] Node ha-631834-m02 has CIDR [10.244.1.0/24] 
	I0927 00:51:37.827575       1 main.go:295] Handling node with IPs: map[192.168.39.79:{}]
	I0927 00:51:37.827581       1 main.go:322] Node ha-631834-m04 has CIDR [10.244.3.0/24] 
	I0927 00:51:37.827643       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0927 00:51:37.827664       1 main.go:299] handling current node
	I0927 00:51:47.818085       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0927 00:51:47.818138       1 main.go:299] handling current node
	I0927 00:51:47.818156       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0927 00:51:47.818162       1 main.go:322] Node ha-631834-m02 has CIDR [10.244.1.0/24] 
	I0927 00:51:47.818347       1 main.go:295] Handling node with IPs: map[192.168.39.79:{}]
	I0927 00:51:47.818401       1 main.go:322] Node ha-631834-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [14c982482268a0741c4ea4b43b359ddf56e9c7a8963d1d5b697eccb9977cce45] <==
	I0927 00:47:33.696031       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0927 00:47:33.696180       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0927 00:47:33.696372       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0927 00:47:33.696659       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0927 00:47:33.696859       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0927 00:47:33.700861       1 shared_informer.go:320] Caches are synced for configmaps
	I0927 00:47:33.701094       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0927 00:47:33.706816       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0927 00:47:33.709091       1 aggregator.go:171] initial CRD sync complete...
	I0927 00:47:33.709185       1 autoregister_controller.go:144] Starting autoregister controller
	I0927 00:47:33.709268       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0927 00:47:33.709293       1 cache.go:39] Caches are synced for autoregister controller
	I0927 00:47:33.719876       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0927 00:47:33.719939       1 policy_source.go:224] refreshing policies
	I0927 00:47:33.731587       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0927 00:47:33.735274       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0927 00:47:33.738703       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	W0927 00:47:33.761822       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.92]
	I0927 00:47:33.763962       1 controller.go:615] quota admission added evaluator for: endpoints
	I0927 00:47:33.787820       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0927 00:47:33.796688       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0927 00:47:34.603144       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0927 00:47:35.211353       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.4 192.168.39.92]
	W0927 00:47:55.213151       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.184 192.168.39.4]
	W0927 00:49:35.219852       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.184 192.168.39.4]
	
	
	==> kube-apiserver [4c930f7f8b324fb82c55bdec2706385f6ba3dc086cb93f92b31f33bed9ae08db] <==
	I0927 00:46:56.929656       1 options.go:228] external host was not specified, using 192.168.39.4
	I0927 00:46:56.940665       1 server.go:142] Version: v1.31.1
	I0927 00:46:56.940729       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 00:46:57.848567       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0927 00:46:57.863050       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0927 00:46:57.889001       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0927 00:46:57.889114       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0927 00:46:57.889601       1 instance.go:232] Using reconciler: lease
	W0927 00:47:17.845785       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0927 00:47:17.845784       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0927 00:47:17.890890       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [3608e4904bcf67c5669cc8dfae0c10b769d49c63cad46043995a67c94c29d108] <==
	I0927 00:46:58.074874       1 serving.go:386] Generated self-signed cert in-memory
	I0927 00:46:58.656713       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0927 00:46:58.656805       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 00:46:58.660041       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0927 00:46:58.660677       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0927 00:46:58.662185       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0927 00:46:58.664184       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0927 00:47:18.896902       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.4:8443/healthz\": dial tcp 192.168.39.4:8443: connect: connection refused"
	
	
	==> kube-controller-manager [73c2e59cd28da30c784255b37b22005602829501c488d381587497738b1a190d] <==
	I0927 00:50:05.178009       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m04"
	I0927 00:50:05.200938       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m04"
	I0927 00:50:05.251588       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="23.958451ms"
	I0927 00:50:05.253580       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="121.35µs"
	I0927 00:50:07.545969       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m04"
	I0927 00:50:10.330008       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-631834-m04"
	E0927 00:50:17.010018       1 gc_controller.go:151] "Failed to get node" err="node \"ha-631834-m03\" not found" logger="pod-garbage-collector-controller" node="ha-631834-m03"
	E0927 00:50:17.010041       1 gc_controller.go:151] "Failed to get node" err="node \"ha-631834-m03\" not found" logger="pod-garbage-collector-controller" node="ha-631834-m03"
	E0927 00:50:17.010047       1 gc_controller.go:151] "Failed to get node" err="node \"ha-631834-m03\" not found" logger="pod-garbage-collector-controller" node="ha-631834-m03"
	E0927 00:50:17.010052       1 gc_controller.go:151] "Failed to get node" err="node \"ha-631834-m03\" not found" logger="pod-garbage-collector-controller" node="ha-631834-m03"
	E0927 00:50:17.010056       1 gc_controller.go:151] "Failed to get node" err="node \"ha-631834-m03\" not found" logger="pod-garbage-collector-controller" node="ha-631834-m03"
	I0927 00:50:17.021548       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-631834-m03"
	I0927 00:50:17.046194       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-631834-m03"
	I0927 00:50:17.046353       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-631834-m03"
	I0927 00:50:17.075566       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-631834-m03"
	I0927 00:50:17.075695       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-631834-m03"
	I0927 00:50:17.108576       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-631834-m03"
	I0927 00:50:17.108652       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-631834-m03"
	I0927 00:50:17.135452       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-631834-m03"
	I0927 00:50:17.135535       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-631834-m03"
	I0927 00:50:17.161658       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-631834-m03"
	I0927 00:50:17.161698       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-r2qxd"
	I0927 00:50:17.204316       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-r2qxd"
	I0927 00:50:17.204352       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-22lcj"
	I0927 00:50:17.230585       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-22lcj"
	
	
	==> kube-proxy [182f24ac501b715adc06f080914c11407429e052bc7a726892761dd0a2d3a8e9] <==
	E0927 00:44:07.590651       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1704\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0927 00:44:07.590763       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1702": dial tcp 192.168.39.254:8443: connect: no route to host
	E0927 00:44:07.590802       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1702\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0927 00:44:07.590931       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-631834&resourceVersion=1704": dial tcp 192.168.39.254:8443: connect: no route to host
	E0927 00:44:07.591037       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-631834&resourceVersion=1704\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0927 00:44:11.880607       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1704": dial tcp 192.168.39.254:8443: connect: no route to host
	E0927 00:44:11.880695       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1704\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0927 00:44:11.880611       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-631834&resourceVersion=1704": dial tcp 192.168.39.254:8443: connect: no route to host
	E0927 00:44:11.880744       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-631834&resourceVersion=1704\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0927 00:44:14.952143       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1702": dial tcp 192.168.39.254:8443: connect: no route to host
	E0927 00:44:14.952369       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1702\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0927 00:44:21.098125       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1704": dial tcp 192.168.39.254:8443: connect: no route to host
	E0927 00:44:21.098303       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1704\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0927 00:44:24.168094       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1702": dial tcp 192.168.39.254:8443: connect: no route to host
	E0927 00:44:24.168257       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1702\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0927 00:44:24.168459       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-631834&resourceVersion=1704": dial tcp 192.168.39.254:8443: connect: no route to host
	E0927 00:44:24.168529       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-631834&resourceVersion=1704\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0927 00:44:36.456076       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1704": dial tcp 192.168.39.254:8443: connect: no route to host
	E0927 00:44:36.456319       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1704\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0927 00:44:48.743495       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-631834&resourceVersion=1704": dial tcp 192.168.39.254:8443: connect: no route to host
	E0927 00:44:48.743561       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-631834&resourceVersion=1704\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0927 00:44:48.743769       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1702": dial tcp 192.168.39.254:8443: connect: no route to host
	E0927 00:44:48.743878       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1702\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0927 00:45:16.391042       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1704": dial tcp 192.168.39.254:8443: connect: no route to host
	E0927 00:45:16.391301       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1704\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-proxy [993366a0cc03df59289c28caf8ac0f7a3eaf5ca3ee7f79410d82c5c962efc0b1] <==
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0927 00:47:00.839625       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-631834\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0927 00:47:03.912420       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-631834\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0927 00:47:06.983089       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-631834\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0927 00:47:13.127471       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-631834\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0927 00:47:25.416585       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-631834\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0927 00:47:43.730155       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.4"]
	E0927 00:47:43.730361       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0927 00:47:43.765138       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0927 00:47:43.765197       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0927 00:47:43.765300       1 server_linux.go:169] "Using iptables Proxier"
	I0927 00:47:43.768156       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0927 00:47:43.768656       1 server.go:483] "Version info" version="v1.31.1"
	I0927 00:47:43.768691       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 00:47:43.773684       1 config.go:199] "Starting service config controller"
	I0927 00:47:43.773751       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0927 00:47:43.773788       1 config.go:105] "Starting endpoint slice config controller"
	I0927 00:47:43.773810       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0927 00:47:43.777064       1 config.go:328] "Starting node config controller"
	I0927 00:47:43.777102       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0927 00:47:43.874316       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0927 00:47:43.874454       1 shared_informer.go:320] Caches are synced for service config
	I0927 00:47:43.877382       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5c88792788fc238aaae860e14a6c44c40020da3356d29223917fe2fb2e8901ac] <==
	W0927 00:36:36.985650       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0927 00:36:36.985711       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0927 00:36:38.790470       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0927 00:39:55.242771       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-7gjcd\": pod kindnet-7gjcd is already assigned to node \"ha-631834-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-7gjcd" node="ha-631834-m04"
	E0927 00:39:55.242960       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 583b6ea7-5b96-43a8-9f06-70c031554c0e(kube-system/kindnet-7gjcd) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-7gjcd"
	E0927 00:39:55.243000       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-7gjcd\": pod kindnet-7gjcd is already assigned to node \"ha-631834-m04\"" pod="kube-system/kindnet-7gjcd"
	I0927 00:39:55.243040       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-7gjcd" node="ha-631834-m04"
	E0927 00:44:54.157002       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0927 00:45:05.084462       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0927 00:45:06.510103       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0927 00:45:06.666109       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0927 00:45:07.517149       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0927 00:45:09.164828       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0927 00:45:09.778625       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0927 00:45:11.112556       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0927 00:45:11.787985       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0927 00:45:11.793489       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0927 00:45:11.813914       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0927 00:45:13.244013       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0927 00:45:13.794795       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0927 00:45:13.970182       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0927 00:45:14.284364       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	W0927 00:45:15.457909       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0927 00:45:15.457990       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0927 00:45:16.536979       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [8b7ffd9dfb6283a77a910b62e4c801f24fc7c0059c7d1b3db21ae86fdaf9b585] <==
	W0927 00:47:27.414492       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.4:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.4:8443: connect: connection refused
	E0927 00:47:27.414670       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.4:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.4:8443: connect: connection refused" logger="UnhandledError"
	W0927 00:47:27.514485       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.4:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.4:8443: connect: connection refused
	E0927 00:47:27.514649       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.4:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.4:8443: connect: connection refused" logger="UnhandledError"
	W0927 00:47:27.527725       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.4:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.4:8443: connect: connection refused
	E0927 00:47:27.527826       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.4:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.4:8443: connect: connection refused" logger="UnhandledError"
	W0927 00:47:27.790191       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.4:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.4:8443: connect: connection refused
	E0927 00:47:27.790395       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.4:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.4:8443: connect: connection refused" logger="UnhandledError"
	W0927 00:47:28.074665       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.4:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.4:8443: connect: connection refused
	E0927 00:47:28.074749       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.4:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.4:8443: connect: connection refused" logger="UnhandledError"
	W0927 00:47:28.245438       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.4:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.4:8443: connect: connection refused
	E0927 00:47:28.245500       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.4:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.4:8443: connect: connection refused" logger="UnhandledError"
	W0927 00:47:28.255196       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.4:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.4:8443: connect: connection refused
	E0927 00:47:28.255324       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.4:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.4:8443: connect: connection refused" logger="UnhandledError"
	W0927 00:47:33.628044       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0927 00:47:33.628511       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:47:33.629202       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0927 00:47:33.629395       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 00:47:33.629719       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0927 00:47:33.629819       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:47:33.630251       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0927 00:47:33.630501       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:47:33.630623       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0927 00:47:33.630726       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0927 00:47:38.406780       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 27 00:50:38 ha-631834 kubelet[1309]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 27 00:50:38 ha-631834 kubelet[1309]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 27 00:50:38 ha-631834 kubelet[1309]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 27 00:50:38 ha-631834 kubelet[1309]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 27 00:50:38 ha-631834 kubelet[1309]: E0927 00:50:38.756344    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727398238756082600,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:50:38 ha-631834 kubelet[1309]: E0927 00:50:38.756384    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727398238756082600,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:50:48 ha-631834 kubelet[1309]: E0927 00:50:48.758176    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727398248757440791,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:50:48 ha-631834 kubelet[1309]: E0927 00:50:48.758303    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727398248757440791,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:50:58 ha-631834 kubelet[1309]: E0927 00:50:58.759878    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727398258759187185,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:50:58 ha-631834 kubelet[1309]: E0927 00:50:58.760448    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727398258759187185,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:51:08 ha-631834 kubelet[1309]: E0927 00:51:08.765392    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727398268764172060,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:51:08 ha-631834 kubelet[1309]: E0927 00:51:08.765439    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727398268764172060,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:51:18 ha-631834 kubelet[1309]: E0927 00:51:18.767803    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727398278767401911,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:51:18 ha-631834 kubelet[1309]: E0927 00:51:18.768157    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727398278767401911,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:51:28 ha-631834 kubelet[1309]: E0927 00:51:28.770080    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727398288769789464,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:51:28 ha-631834 kubelet[1309]: E0927 00:51:28.770126    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727398288769789464,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:51:38 ha-631834 kubelet[1309]: E0927 00:51:38.505189    1309 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 27 00:51:38 ha-631834 kubelet[1309]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 27 00:51:38 ha-631834 kubelet[1309]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 27 00:51:38 ha-631834 kubelet[1309]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 27 00:51:38 ha-631834 kubelet[1309]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 27 00:51:38 ha-631834 kubelet[1309]: E0927 00:51:38.772655    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727398298771899710,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:51:38 ha-631834 kubelet[1309]: E0927 00:51:38.772850    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727398298771899710,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:51:48 ha-631834 kubelet[1309]: E0927 00:51:48.775686    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727398308775133985,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:51:48 ha-631834 kubelet[1309]: E0927 00:51:48.775727    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727398308775133985,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0927 00:51:50.882169   41965 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19711-14935/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-631834 -n ha-631834
helpers_test.go:261: (dbg) Run:  kubectl --context ha-631834 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.74s)
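Note on the stderr above: "failed to read file .../lastStart.txt: bufio.Scanner: token too long" is Go's bufio.ErrTooLong, raised because some line in lastStart.txt exceeds bufio.Scanner's default 64 KiB token limit, so the log collector cannot echo the previous start log. The sketch below is not minikube's actual logs.go; it only shows the standard Go remedy of raising the scanner's buffer cap, and the file path and 1 MiB cap are illustrative choices.

// A minimal sketch, assuming a long-lined lastStart.txt: bufio.Scanner
// rejects lines longer than bufio.MaxScanTokenSize (64 KiB) with
// "token too long"; Buffer() raises that cap.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
)

func main() {
	f, err := os.Open("lastStart.txt") // illustrative path
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 64*1024), 1024*1024) // allow tokens up to 1 MiB

	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		log.Fatalf("scan failed: %v", err) // bufio.ErrTooLong without the Buffer call
	}
}

Under the same assumption, a bufio.Reader with ReadString('\n') would also work, since it grows its buffer per line instead of enforcing a fixed token limit.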

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (326.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-833343
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-833343
E0927 01:08:01.245571   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-833343: exit status 82 (2m1.919444416s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-833343-m03"  ...
	* Stopping node "multinode-833343-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-833343" : exit status 82
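The stop exits with status 82 after roughly two minutes because the m02/m03 guests never report a stopped state before the stop deadline, which is what the GUEST_STOP_TIMEOUT message above describes. The following is a rough illustration only, not minikube's KVM driver code: a generic stop-then-poll loop whose deadline expires while the VM still reports "Running". stopAndWait, stopVM, vmState, the timeout, and the poll interval are hypothetical stand-ins.

// Rough illustration of a stop-then-poll pattern that yields a
// GUEST_STOP_TIMEOUT-style error when the guest stays "Running".
package main

import (
	"fmt"
	"time"
)

func stopAndWait(name string, timeout time.Duration,
	stopVM func(string) error, vmState func(string) (string, error)) error {

	if err := stopVM(name); err != nil {
		return err
	}
	deadline := time.Now().Add(timeout)
	last := "unknown"
	for time.Now().Before(deadline) {
		state, err := vmState(name)
		if err != nil {
			return err
		}
		if state == "Stopped" {
			return nil // guest shut down within the deadline
		}
		last = state
		time.Sleep(500 * time.Millisecond) // assumed poll interval
	}
	// Mirrors the failure above: still running when the deadline expires.
	return fmt.Errorf("unable to stop vm, current state %q", last)
}

func main() {
	// A fake driver that never stops, to exercise the timeout path quickly.
	err := stopAndWait("multinode-833343-m03", 2*time.Second,
		func(string) error { return nil },
		func(string) (string, error) { return "Running", nil })
	fmt.Println(err) // unable to stop vm, current state "Running"
}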
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-833343 --wait=true -v=8 --alsologtostderr
E0927 01:10:10.492663   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/functional-774677/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-833343 --wait=true -v=8 --alsologtostderr: (3m22.093198435s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-833343
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-833343 -n multinode-833343
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-833343 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-833343 logs -n 25: (1.483633759s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-833343 ssh -n                                                                 | multinode-833343 | jenkins | v1.34.0 | 27 Sep 24 01:06 UTC | 27 Sep 24 01:06 UTC |
	|         | multinode-833343-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-833343 cp multinode-833343-m02:/home/docker/cp-test.txt                       | multinode-833343 | jenkins | v1.34.0 | 27 Sep 24 01:06 UTC | 27 Sep 24 01:06 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3824164229/001/cp-test_multinode-833343-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-833343 ssh -n                                                                 | multinode-833343 | jenkins | v1.34.0 | 27 Sep 24 01:06 UTC | 27 Sep 24 01:06 UTC |
	|         | multinode-833343-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-833343 cp multinode-833343-m02:/home/docker/cp-test.txt                       | multinode-833343 | jenkins | v1.34.0 | 27 Sep 24 01:06 UTC | 27 Sep 24 01:06 UTC |
	|         | multinode-833343:/home/docker/cp-test_multinode-833343-m02_multinode-833343.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-833343 ssh -n                                                                 | multinode-833343 | jenkins | v1.34.0 | 27 Sep 24 01:06 UTC | 27 Sep 24 01:06 UTC |
	|         | multinode-833343-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-833343 ssh -n multinode-833343 sudo cat                                       | multinode-833343 | jenkins | v1.34.0 | 27 Sep 24 01:06 UTC | 27 Sep 24 01:06 UTC |
	|         | /home/docker/cp-test_multinode-833343-m02_multinode-833343.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-833343 cp multinode-833343-m02:/home/docker/cp-test.txt                       | multinode-833343 | jenkins | v1.34.0 | 27 Sep 24 01:06 UTC | 27 Sep 24 01:06 UTC |
	|         | multinode-833343-m03:/home/docker/cp-test_multinode-833343-m02_multinode-833343-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-833343 ssh -n                                                                 | multinode-833343 | jenkins | v1.34.0 | 27 Sep 24 01:06 UTC | 27 Sep 24 01:06 UTC |
	|         | multinode-833343-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-833343 ssh -n multinode-833343-m03 sudo cat                                   | multinode-833343 | jenkins | v1.34.0 | 27 Sep 24 01:06 UTC | 27 Sep 24 01:06 UTC |
	|         | /home/docker/cp-test_multinode-833343-m02_multinode-833343-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-833343 cp testdata/cp-test.txt                                                | multinode-833343 | jenkins | v1.34.0 | 27 Sep 24 01:06 UTC | 27 Sep 24 01:06 UTC |
	|         | multinode-833343-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-833343 ssh -n                                                                 | multinode-833343 | jenkins | v1.34.0 | 27 Sep 24 01:06 UTC | 27 Sep 24 01:06 UTC |
	|         | multinode-833343-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-833343 cp multinode-833343-m03:/home/docker/cp-test.txt                       | multinode-833343 | jenkins | v1.34.0 | 27 Sep 24 01:06 UTC | 27 Sep 24 01:06 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3824164229/001/cp-test_multinode-833343-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-833343 ssh -n                                                                 | multinode-833343 | jenkins | v1.34.0 | 27 Sep 24 01:06 UTC | 27 Sep 24 01:06 UTC |
	|         | multinode-833343-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-833343 cp multinode-833343-m03:/home/docker/cp-test.txt                       | multinode-833343 | jenkins | v1.34.0 | 27 Sep 24 01:06 UTC | 27 Sep 24 01:06 UTC |
	|         | multinode-833343:/home/docker/cp-test_multinode-833343-m03_multinode-833343.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-833343 ssh -n                                                                 | multinode-833343 | jenkins | v1.34.0 | 27 Sep 24 01:06 UTC | 27 Sep 24 01:06 UTC |
	|         | multinode-833343-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-833343 ssh -n multinode-833343 sudo cat                                       | multinode-833343 | jenkins | v1.34.0 | 27 Sep 24 01:06 UTC | 27 Sep 24 01:06 UTC |
	|         | /home/docker/cp-test_multinode-833343-m03_multinode-833343.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-833343 cp multinode-833343-m03:/home/docker/cp-test.txt                       | multinode-833343 | jenkins | v1.34.0 | 27 Sep 24 01:06 UTC | 27 Sep 24 01:06 UTC |
	|         | multinode-833343-m02:/home/docker/cp-test_multinode-833343-m03_multinode-833343-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-833343 ssh -n                                                                 | multinode-833343 | jenkins | v1.34.0 | 27 Sep 24 01:06 UTC | 27 Sep 24 01:06 UTC |
	|         | multinode-833343-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-833343 ssh -n multinode-833343-m02 sudo cat                                   | multinode-833343 | jenkins | v1.34.0 | 27 Sep 24 01:06 UTC | 27 Sep 24 01:06 UTC |
	|         | /home/docker/cp-test_multinode-833343-m03_multinode-833343-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-833343 node stop m03                                                          | multinode-833343 | jenkins | v1.34.0 | 27 Sep 24 01:06 UTC | 27 Sep 24 01:06 UTC |
	| node    | multinode-833343 node start                                                             | multinode-833343 | jenkins | v1.34.0 | 27 Sep 24 01:06 UTC | 27 Sep 24 01:07 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-833343                                                                | multinode-833343 | jenkins | v1.34.0 | 27 Sep 24 01:07 UTC |                     |
	| stop    | -p multinode-833343                                                                     | multinode-833343 | jenkins | v1.34.0 | 27 Sep 24 01:07 UTC |                     |
	| start   | -p multinode-833343                                                                     | multinode-833343 | jenkins | v1.34.0 | 27 Sep 24 01:09 UTC | 27 Sep 24 01:12 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-833343                                                                | multinode-833343 | jenkins | v1.34.0 | 27 Sep 24 01:12 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 01:09:14
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 01:09:14.103789   51589 out.go:345] Setting OutFile to fd 1 ...
	I0927 01:09:14.104044   51589 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 01:09:14.104052   51589 out.go:358] Setting ErrFile to fd 2...
	I0927 01:09:14.104057   51589 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 01:09:14.104225   51589 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-14935/.minikube/bin
	I0927 01:09:14.104738   51589 out.go:352] Setting JSON to false
	I0927 01:09:14.105666   51589 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6699,"bootTime":1727392655,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 01:09:14.105759   51589 start.go:139] virtualization: kvm guest
	I0927 01:09:14.107888   51589 out.go:177] * [multinode-833343] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0927 01:09:14.109198   51589 notify.go:220] Checking for updates...
	I0927 01:09:14.109221   51589 out.go:177]   - MINIKUBE_LOCATION=19711
	I0927 01:09:14.110695   51589 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 01:09:14.112076   51589 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 01:09:14.113399   51589 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-14935/.minikube
	I0927 01:09:14.114658   51589 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0927 01:09:14.116092   51589 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 01:09:14.117709   51589 config.go:182] Loaded profile config "multinode-833343": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 01:09:14.117799   51589 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 01:09:14.118269   51589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 01:09:14.118304   51589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:09:14.133432   51589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36475
	I0927 01:09:14.133889   51589 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:09:14.134397   51589 main.go:141] libmachine: Using API Version  1
	I0927 01:09:14.134437   51589 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:09:14.134771   51589 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:09:14.134916   51589 main.go:141] libmachine: (multinode-833343) Calling .DriverName
	I0927 01:09:14.169608   51589 out.go:177] * Using the kvm2 driver based on existing profile
	I0927 01:09:14.170906   51589 start.go:297] selected driver: kvm2
	I0927 01:09:14.170918   51589 start.go:901] validating driver "kvm2" against &{Name:multinode-833343 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-833343 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.123 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.88 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 01:09:14.171041   51589 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 01:09:14.171388   51589 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 01:09:14.171461   51589 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19711-14935/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0927 01:09:14.186412   51589 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0927 01:09:14.187093   51589 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 01:09:14.187138   51589 cni.go:84] Creating CNI manager for ""
	I0927 01:09:14.187200   51589 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0927 01:09:14.187276   51589 start.go:340] cluster config:
	{Name:multinode-833343 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-833343 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.123 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.88 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 01:09:14.187472   51589 iso.go:125] acquiring lock: {Name:mkc202a14fbe20838e31e7efc444c4f65351f9ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 01:09:14.189325   51589 out.go:177] * Starting "multinode-833343" primary control-plane node in "multinode-833343" cluster
	I0927 01:09:14.190624   51589 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 01:09:14.190657   51589 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0927 01:09:14.190665   51589 cache.go:56] Caching tarball of preloaded images
	I0927 01:09:14.190747   51589 preload.go:172] Found /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0927 01:09:14.190760   51589 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0927 01:09:14.190863   51589 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/multinode-833343/config.json ...
	I0927 01:09:14.191046   51589 start.go:360] acquireMachinesLock for multinode-833343: {Name:mkef866a3f2eb57063758ef06140cca3a2e40b18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 01:09:14.191082   51589 start.go:364] duration metric: took 21.238µs to acquireMachinesLock for "multinode-833343"
	I0927 01:09:14.191095   51589 start.go:96] Skipping create...Using existing machine configuration
	I0927 01:09:14.191102   51589 fix.go:54] fixHost starting: 
	I0927 01:09:14.191387   51589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 01:09:14.191419   51589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:09:14.205487   51589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35521
	I0927 01:09:14.205977   51589 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:09:14.206470   51589 main.go:141] libmachine: Using API Version  1
	I0927 01:09:14.206489   51589 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:09:14.206775   51589 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:09:14.206943   51589 main.go:141] libmachine: (multinode-833343) Calling .DriverName
	I0927 01:09:14.207094   51589 main.go:141] libmachine: (multinode-833343) Calling .GetState
	I0927 01:09:14.208547   51589 fix.go:112] recreateIfNeeded on multinode-833343: state=Running err=<nil>
	W0927 01:09:14.208565   51589 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 01:09:14.210515   51589 out.go:177] * Updating the running kvm2 "multinode-833343" VM ...
	I0927 01:09:14.211758   51589 machine.go:93] provisionDockerMachine start ...
	I0927 01:09:14.211774   51589 main.go:141] libmachine: (multinode-833343) Calling .DriverName
	I0927 01:09:14.211940   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHHostname
	I0927 01:09:14.214354   51589 main.go:141] libmachine: (multinode-833343) DBG | domain multinode-833343 has defined MAC address 52:54:00:d6:02:23 in network mk-multinode-833343
	I0927 01:09:14.214794   51589 main.go:141] libmachine: (multinode-833343) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:02:23", ip: ""} in network mk-multinode-833343: {Iface:virbr1 ExpiryTime:2024-09-27 02:03:44 +0000 UTC Type:0 Mac:52:54:00:d6:02:23 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-833343 Clientid:01:52:54:00:d6:02:23}
	I0927 01:09:14.214820   51589 main.go:141] libmachine: (multinode-833343) DBG | domain multinode-833343 has defined IP address 192.168.39.203 and MAC address 52:54:00:d6:02:23 in network mk-multinode-833343
	I0927 01:09:14.214977   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHPort
	I0927 01:09:14.215121   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHKeyPath
	I0927 01:09:14.215276   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHKeyPath
	I0927 01:09:14.215412   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHUsername
	I0927 01:09:14.215579   51589 main.go:141] libmachine: Using SSH client type: native
	I0927 01:09:14.215759   51589 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0927 01:09:14.215772   51589 main.go:141] libmachine: About to run SSH command:
	hostname
	I0927 01:09:14.316191   51589 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-833343
	
	I0927 01:09:14.316228   51589 main.go:141] libmachine: (multinode-833343) Calling .GetMachineName
	I0927 01:09:14.316479   51589 buildroot.go:166] provisioning hostname "multinode-833343"
	I0927 01:09:14.316506   51589 main.go:141] libmachine: (multinode-833343) Calling .GetMachineName
	I0927 01:09:14.316713   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHHostname
	I0927 01:09:14.319579   51589 main.go:141] libmachine: (multinode-833343) DBG | domain multinode-833343 has defined MAC address 52:54:00:d6:02:23 in network mk-multinode-833343
	I0927 01:09:14.319971   51589 main.go:141] libmachine: (multinode-833343) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:02:23", ip: ""} in network mk-multinode-833343: {Iface:virbr1 ExpiryTime:2024-09-27 02:03:44 +0000 UTC Type:0 Mac:52:54:00:d6:02:23 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-833343 Clientid:01:52:54:00:d6:02:23}
	I0927 01:09:14.319999   51589 main.go:141] libmachine: (multinode-833343) DBG | domain multinode-833343 has defined IP address 192.168.39.203 and MAC address 52:54:00:d6:02:23 in network mk-multinode-833343
	I0927 01:09:14.320134   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHPort
	I0927 01:09:14.320289   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHKeyPath
	I0927 01:09:14.320406   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHKeyPath
	I0927 01:09:14.320501   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHUsername
	I0927 01:09:14.320643   51589 main.go:141] libmachine: Using SSH client type: native
	I0927 01:09:14.320803   51589 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0927 01:09:14.320815   51589 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-833343 && echo "multinode-833343" | sudo tee /etc/hostname
	I0927 01:09:14.439298   51589 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-833343
	
	I0927 01:09:14.439342   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHHostname
	I0927 01:09:14.442448   51589 main.go:141] libmachine: (multinode-833343) DBG | domain multinode-833343 has defined MAC address 52:54:00:d6:02:23 in network mk-multinode-833343
	I0927 01:09:14.442875   51589 main.go:141] libmachine: (multinode-833343) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:02:23", ip: ""} in network mk-multinode-833343: {Iface:virbr1 ExpiryTime:2024-09-27 02:03:44 +0000 UTC Type:0 Mac:52:54:00:d6:02:23 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-833343 Clientid:01:52:54:00:d6:02:23}
	I0927 01:09:14.442911   51589 main.go:141] libmachine: (multinode-833343) DBG | domain multinode-833343 has defined IP address 192.168.39.203 and MAC address 52:54:00:d6:02:23 in network mk-multinode-833343
	I0927 01:09:14.443053   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHPort
	I0927 01:09:14.443265   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHKeyPath
	I0927 01:09:14.443499   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHKeyPath
	I0927 01:09:14.443655   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHUsername
	I0927 01:09:14.443821   51589 main.go:141] libmachine: Using SSH client type: native
	I0927 01:09:14.444012   51589 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0927 01:09:14.444029   51589 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-833343' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-833343/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-833343' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 01:09:14.544038   51589 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 01:09:14.544066   51589 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19711-14935/.minikube CaCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19711-14935/.minikube}
	I0927 01:09:14.544100   51589 buildroot.go:174] setting up certificates
	I0927 01:09:14.544112   51589 provision.go:84] configureAuth start
	I0927 01:09:14.544129   51589 main.go:141] libmachine: (multinode-833343) Calling .GetMachineName
	I0927 01:09:14.544395   51589 main.go:141] libmachine: (multinode-833343) Calling .GetIP
	I0927 01:09:14.546954   51589 main.go:141] libmachine: (multinode-833343) DBG | domain multinode-833343 has defined MAC address 52:54:00:d6:02:23 in network mk-multinode-833343
	I0927 01:09:14.547333   51589 main.go:141] libmachine: (multinode-833343) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:02:23", ip: ""} in network mk-multinode-833343: {Iface:virbr1 ExpiryTime:2024-09-27 02:03:44 +0000 UTC Type:0 Mac:52:54:00:d6:02:23 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-833343 Clientid:01:52:54:00:d6:02:23}
	I0927 01:09:14.547370   51589 main.go:141] libmachine: (multinode-833343) DBG | domain multinode-833343 has defined IP address 192.168.39.203 and MAC address 52:54:00:d6:02:23 in network mk-multinode-833343
	I0927 01:09:14.547515   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHHostname
	I0927 01:09:14.549535   51589 main.go:141] libmachine: (multinode-833343) DBG | domain multinode-833343 has defined MAC address 52:54:00:d6:02:23 in network mk-multinode-833343
	I0927 01:09:14.549861   51589 main.go:141] libmachine: (multinode-833343) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:02:23", ip: ""} in network mk-multinode-833343: {Iface:virbr1 ExpiryTime:2024-09-27 02:03:44 +0000 UTC Type:0 Mac:52:54:00:d6:02:23 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-833343 Clientid:01:52:54:00:d6:02:23}
	I0927 01:09:14.549892   51589 main.go:141] libmachine: (multinode-833343) DBG | domain multinode-833343 has defined IP address 192.168.39.203 and MAC address 52:54:00:d6:02:23 in network mk-multinode-833343
	I0927 01:09:14.550019   51589 provision.go:143] copyHostCerts
	I0927 01:09:14.550047   51589 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem
	I0927 01:09:14.550082   51589 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem, removing ...
	I0927 01:09:14.550093   51589 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem
	I0927 01:09:14.550161   51589 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem (1123 bytes)
	I0927 01:09:14.550249   51589 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem
	I0927 01:09:14.550270   51589 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem, removing ...
	I0927 01:09:14.550277   51589 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem
	I0927 01:09:14.550301   51589 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem (1675 bytes)
	I0927 01:09:14.550361   51589 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem
	I0927 01:09:14.550382   51589 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem, removing ...
	I0927 01:09:14.550385   51589 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem
	I0927 01:09:14.550405   51589 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem (1078 bytes)
	I0927 01:09:14.550467   51589 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem org=jenkins.multinode-833343 san=[127.0.0.1 192.168.39.203 localhost minikube multinode-833343]
	I0927 01:09:15.047235   51589 provision.go:177] copyRemoteCerts
	I0927 01:09:15.047300   51589 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 01:09:15.047347   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHHostname
	I0927 01:09:15.050166   51589 main.go:141] libmachine: (multinode-833343) DBG | domain multinode-833343 has defined MAC address 52:54:00:d6:02:23 in network mk-multinode-833343
	I0927 01:09:15.050592   51589 main.go:141] libmachine: (multinode-833343) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:02:23", ip: ""} in network mk-multinode-833343: {Iface:virbr1 ExpiryTime:2024-09-27 02:03:44 +0000 UTC Type:0 Mac:52:54:00:d6:02:23 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-833343 Clientid:01:52:54:00:d6:02:23}
	I0927 01:09:15.050623   51589 main.go:141] libmachine: (multinode-833343) DBG | domain multinode-833343 has defined IP address 192.168.39.203 and MAC address 52:54:00:d6:02:23 in network mk-multinode-833343
	I0927 01:09:15.050796   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHPort
	I0927 01:09:15.050983   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHKeyPath
	I0927 01:09:15.051142   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHUsername
	I0927 01:09:15.051334   51589 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/multinode-833343/id_rsa Username:docker}
	I0927 01:09:15.134876   51589 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0927 01:09:15.134957   51589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0927 01:09:15.161902   51589 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0927 01:09:15.161981   51589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0927 01:09:15.188979   51589 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0927 01:09:15.189048   51589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0927 01:09:15.216539   51589 provision.go:87] duration metric: took 672.41659ms to configureAuth
	I0927 01:09:15.216565   51589 buildroot.go:189] setting minikube options for container-runtime
	I0927 01:09:15.216794   51589 config.go:182] Loaded profile config "multinode-833343": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 01:09:15.216863   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHHostname
	I0927 01:09:15.219355   51589 main.go:141] libmachine: (multinode-833343) DBG | domain multinode-833343 has defined MAC address 52:54:00:d6:02:23 in network mk-multinode-833343
	I0927 01:09:15.219707   51589 main.go:141] libmachine: (multinode-833343) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:02:23", ip: ""} in network mk-multinode-833343: {Iface:virbr1 ExpiryTime:2024-09-27 02:03:44 +0000 UTC Type:0 Mac:52:54:00:d6:02:23 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-833343 Clientid:01:52:54:00:d6:02:23}
	I0927 01:09:15.219734   51589 main.go:141] libmachine: (multinode-833343) DBG | domain multinode-833343 has defined IP address 192.168.39.203 and MAC address 52:54:00:d6:02:23 in network mk-multinode-833343
	I0927 01:09:15.219849   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHPort
	I0927 01:09:15.220032   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHKeyPath
	I0927 01:09:15.220163   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHKeyPath
	I0927 01:09:15.220301   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHUsername
	I0927 01:09:15.220426   51589 main.go:141] libmachine: Using SSH client type: native
	I0927 01:09:15.220588   51589 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0927 01:09:15.220602   51589 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 01:10:46.035695   51589 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 01:10:46.035719   51589 machine.go:96] duration metric: took 1m31.823949403s to provisionDockerMachine
	I0927 01:10:46.035731   51589 start.go:293] postStartSetup for "multinode-833343" (driver="kvm2")
	I0927 01:10:46.035741   51589 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 01:10:46.035776   51589 main.go:141] libmachine: (multinode-833343) Calling .DriverName
	I0927 01:10:46.036051   51589 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 01:10:46.036073   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHHostname
	I0927 01:10:46.039286   51589 main.go:141] libmachine: (multinode-833343) DBG | domain multinode-833343 has defined MAC address 52:54:00:d6:02:23 in network mk-multinode-833343
	I0927 01:10:46.039705   51589 main.go:141] libmachine: (multinode-833343) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:02:23", ip: ""} in network mk-multinode-833343: {Iface:virbr1 ExpiryTime:2024-09-27 02:03:44 +0000 UTC Type:0 Mac:52:54:00:d6:02:23 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-833343 Clientid:01:52:54:00:d6:02:23}
	I0927 01:10:46.039736   51589 main.go:141] libmachine: (multinode-833343) DBG | domain multinode-833343 has defined IP address 192.168.39.203 and MAC address 52:54:00:d6:02:23 in network mk-multinode-833343
	I0927 01:10:46.039891   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHPort
	I0927 01:10:46.040065   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHKeyPath
	I0927 01:10:46.040215   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHUsername
	I0927 01:10:46.040321   51589 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/multinode-833343/id_rsa Username:docker}
	I0927 01:10:46.122717   51589 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 01:10:46.126977   51589 command_runner.go:130] > NAME=Buildroot
	I0927 01:10:46.127000   51589 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0927 01:10:46.127006   51589 command_runner.go:130] > ID=buildroot
	I0927 01:10:46.127018   51589 command_runner.go:130] > VERSION_ID=2023.02.9
	I0927 01:10:46.127025   51589 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0927 01:10:46.127051   51589 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 01:10:46.127064   51589 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/addons for local assets ...
	I0927 01:10:46.127132   51589 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/files for local assets ...
	I0927 01:10:46.127216   51589 filesync.go:149] local asset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> 221382.pem in /etc/ssl/certs
	I0927 01:10:46.127228   51589 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> /etc/ssl/certs/221382.pem
	I0927 01:10:46.127321   51589 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 01:10:46.136691   51589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /etc/ssl/certs/221382.pem (1708 bytes)
	I0927 01:10:46.160928   51589 start.go:296] duration metric: took 125.185595ms for postStartSetup
	I0927 01:10:46.160975   51589 fix.go:56] duration metric: took 1m31.969870867s for fixHost
	I0927 01:10:46.161043   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHHostname
	I0927 01:10:46.163678   51589 main.go:141] libmachine: (multinode-833343) DBG | domain multinode-833343 has defined MAC address 52:54:00:d6:02:23 in network mk-multinode-833343
	I0927 01:10:46.164050   51589 main.go:141] libmachine: (multinode-833343) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:02:23", ip: ""} in network mk-multinode-833343: {Iface:virbr1 ExpiryTime:2024-09-27 02:03:44 +0000 UTC Type:0 Mac:52:54:00:d6:02:23 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-833343 Clientid:01:52:54:00:d6:02:23}
	I0927 01:10:46.164090   51589 main.go:141] libmachine: (multinode-833343) DBG | domain multinode-833343 has defined IP address 192.168.39.203 and MAC address 52:54:00:d6:02:23 in network mk-multinode-833343
	I0927 01:10:46.164188   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHPort
	I0927 01:10:46.164386   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHKeyPath
	I0927 01:10:46.164541   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHKeyPath
	I0927 01:10:46.164690   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHUsername
	I0927 01:10:46.164922   51589 main.go:141] libmachine: Using SSH client type: native
	I0927 01:10:46.165140   51589 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0927 01:10:46.165151   51589 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 01:10:46.268031   51589 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727399446.239841222
	
	I0927 01:10:46.268054   51589 fix.go:216] guest clock: 1727399446.239841222
	I0927 01:10:46.268061   51589 fix.go:229] Guest: 2024-09-27 01:10:46.239841222 +0000 UTC Remote: 2024-09-27 01:10:46.160981439 +0000 UTC m=+92.093833083 (delta=78.859783ms)
	I0927 01:10:46.268105   51589 fix.go:200] guest clock delta is within tolerance: 78.859783ms
	I0927 01:10:46.268112   51589 start.go:83] releasing machines lock for "multinode-833343", held for 1m32.077021339s
	I0927 01:10:46.268198   51589 main.go:141] libmachine: (multinode-833343) Calling .DriverName
	I0927 01:10:46.268442   51589 main.go:141] libmachine: (multinode-833343) Calling .GetIP
	I0927 01:10:46.270854   51589 main.go:141] libmachine: (multinode-833343) DBG | domain multinode-833343 has defined MAC address 52:54:00:d6:02:23 in network mk-multinode-833343
	I0927 01:10:46.271232   51589 main.go:141] libmachine: (multinode-833343) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:02:23", ip: ""} in network mk-multinode-833343: {Iface:virbr1 ExpiryTime:2024-09-27 02:03:44 +0000 UTC Type:0 Mac:52:54:00:d6:02:23 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-833343 Clientid:01:52:54:00:d6:02:23}
	I0927 01:10:46.271266   51589 main.go:141] libmachine: (multinode-833343) DBG | domain multinode-833343 has defined IP address 192.168.39.203 and MAC address 52:54:00:d6:02:23 in network mk-multinode-833343
	I0927 01:10:46.271449   51589 main.go:141] libmachine: (multinode-833343) Calling .DriverName
	I0927 01:10:46.271966   51589 main.go:141] libmachine: (multinode-833343) Calling .DriverName
	I0927 01:10:46.272133   51589 main.go:141] libmachine: (multinode-833343) Calling .DriverName
	I0927 01:10:46.272212   51589 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 01:10:46.272267   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHHostname
	I0927 01:10:46.272346   51589 ssh_runner.go:195] Run: cat /version.json
	I0927 01:10:46.272364   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHHostname
	I0927 01:10:46.274758   51589 main.go:141] libmachine: (multinode-833343) DBG | domain multinode-833343 has defined MAC address 52:54:00:d6:02:23 in network mk-multinode-833343
	I0927 01:10:46.274898   51589 main.go:141] libmachine: (multinode-833343) DBG | domain multinode-833343 has defined MAC address 52:54:00:d6:02:23 in network mk-multinode-833343
	I0927 01:10:46.275098   51589 main.go:141] libmachine: (multinode-833343) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:02:23", ip: ""} in network mk-multinode-833343: {Iface:virbr1 ExpiryTime:2024-09-27 02:03:44 +0000 UTC Type:0 Mac:52:54:00:d6:02:23 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-833343 Clientid:01:52:54:00:d6:02:23}
	I0927 01:10:46.275123   51589 main.go:141] libmachine: (multinode-833343) DBG | domain multinode-833343 has defined IP address 192.168.39.203 and MAC address 52:54:00:d6:02:23 in network mk-multinode-833343
	I0927 01:10:46.275258   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHPort
	I0927 01:10:46.275387   51589 main.go:141] libmachine: (multinode-833343) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:02:23", ip: ""} in network mk-multinode-833343: {Iface:virbr1 ExpiryTime:2024-09-27 02:03:44 +0000 UTC Type:0 Mac:52:54:00:d6:02:23 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-833343 Clientid:01:52:54:00:d6:02:23}
	I0927 01:10:46.275417   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHKeyPath
	I0927 01:10:46.275423   51589 main.go:141] libmachine: (multinode-833343) DBG | domain multinode-833343 has defined IP address 192.168.39.203 and MAC address 52:54:00:d6:02:23 in network mk-multinode-833343
	I0927 01:10:46.275601   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHUsername
	I0927 01:10:46.275602   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHPort
	I0927 01:10:46.275787   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHKeyPath
	I0927 01:10:46.275783   51589 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/multinode-833343/id_rsa Username:docker}
	I0927 01:10:46.275934   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHUsername
	I0927 01:10:46.276026   51589 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/multinode-833343/id_rsa Username:docker}
	I0927 01:10:46.356240   51589 command_runner.go:130] > {"iso_version": "v1.34.0-1727108440-19696", "kicbase_version": "v0.0.45-1726784731-19672", "minikube_version": "v1.34.0", "commit": "09d18ff16db81cf1cb24cd6e95f197b54c5f843c"}
	I0927 01:10:46.356498   51589 ssh_runner.go:195] Run: systemctl --version
	I0927 01:10:46.382379   51589 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0927 01:10:46.382448   51589 command_runner.go:130] > systemd 252 (252)
	I0927 01:10:46.382483   51589 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0927 01:10:46.382546   51589 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 01:10:46.546015   51589 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0927 01:10:46.551950   51589 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0927 01:10:46.552004   51589 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 01:10:46.552068   51589 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 01:10:46.561646   51589 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0927 01:10:46.561672   51589 start.go:495] detecting cgroup driver to use...
	I0927 01:10:46.561751   51589 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 01:10:46.578557   51589 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 01:10:46.592689   51589 docker.go:217] disabling cri-docker service (if available) ...
	I0927 01:10:46.592757   51589 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 01:10:46.608326   51589 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 01:10:46.622571   51589 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 01:10:46.770861   51589 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 01:10:46.910731   51589 docker.go:233] disabling docker service ...
	I0927 01:10:46.910802   51589 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 01:10:46.929477   51589 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 01:10:46.943801   51589 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 01:10:47.093250   51589 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 01:10:47.238800   51589 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 01:10:47.253368   51589 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 01:10:47.272692   51589 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0927 01:10:47.273188   51589 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0927 01:10:47.273243   51589 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:10:47.284075   51589 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 01:10:47.284126   51589 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:10:47.294596   51589 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:10:47.305247   51589 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:10:47.316646   51589 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 01:10:47.327448   51589 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:10:47.337983   51589 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:10:47.349380   51589 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:10:47.360282   51589 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 01:10:47.370207   51589 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0927 01:10:47.370271   51589 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 01:10:47.380068   51589 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:10:47.520848   51589 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0927 01:10:52.028818   51589 ssh_runner.go:235] Completed: sudo systemctl restart crio: (4.507941659s)
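The sequence of sed edits above adjusts the CRI-O drop-in in place (pause image, cgroupfs driver, conmon cgroup, unprivileged-port sysctl) and writes a crictl.yaml pointing at the CRI-O socket before restarting the service. Collapsed into one step, the end state looks roughly like the sketch below; the section names follow stock CRI-O conventions, and the real 02-crio.conf shipped in the ISO carries more keys than shown, so overwriting the file here is a simplification of the in-place edits.

	# /etc/crictl.yaml, as written by the log above.
	sudo tee /etc/crictl.yaml >/dev/null <<'EOF'
	runtime-endpoint: unix:///var/run/crio/crio.sock
	EOF

	# Rough equivalent of the sed edits to the CRI-O drop-in (sketch, not the full file).
	sudo tee /etc/crio/crio.conf.d/02-crio.conf >/dev/null <<'EOF'
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	EOF
	sudo systemctl daemon-reload
	sudo systemctl restart crio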
	I0927 01:10:52.028843   51589 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 01:10:52.028894   51589 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 01:10:52.035356   51589 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0927 01:10:52.035393   51589 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0927 01:10:52.035420   51589 command_runner.go:130] > Device: 0,22	Inode: 1311        Links: 1
	I0927 01:10:52.035432   51589 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0927 01:10:52.035441   51589 command_runner.go:130] > Access: 2024-09-27 01:10:51.889971927 +0000
	I0927 01:10:52.035450   51589 command_runner.go:130] > Modify: 2024-09-27 01:10:51.889971927 +0000
	I0927 01:10:52.035455   51589 command_runner.go:130] > Change: 2024-09-27 01:10:51.889971927 +0000
	I0927 01:10:52.035461   51589 command_runner.go:130] >  Birth: -
	I0927 01:10:52.035479   51589 start.go:563] Will wait 60s for crictl version
	I0927 01:10:52.035520   51589 ssh_runner.go:195] Run: which crictl
	I0927 01:10:52.039372   51589 command_runner.go:130] > /usr/bin/crictl
	I0927 01:10:52.039445   51589 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 01:10:52.081223   51589 command_runner.go:130] > Version:  0.1.0
	I0927 01:10:52.081248   51589 command_runner.go:130] > RuntimeName:  cri-o
	I0927 01:10:52.081253   51589 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0927 01:10:52.081258   51589 command_runner.go:130] > RuntimeApiVersion:  v1
	I0927 01:10:52.081386   51589 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
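The version probe above runs crictl against the socket that was just configured. The same check can be repeated by hand if the socket ever fails to appear within the 60s wait; the explicit --runtime-endpoint flag in this sketch is only needed when /etc/crictl.yaml is absent.

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	sudo crictl info | head -n 20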
	I0927 01:10:52.081530   51589 ssh_runner.go:195] Run: crio --version
	I0927 01:10:52.114443   51589 command_runner.go:130] > crio version 1.29.1
	I0927 01:10:52.114470   51589 command_runner.go:130] > Version:        1.29.1
	I0927 01:10:52.114478   51589 command_runner.go:130] > GitCommit:      unknown
	I0927 01:10:52.114484   51589 command_runner.go:130] > GitCommitDate:  unknown
	I0927 01:10:52.114496   51589 command_runner.go:130] > GitTreeState:   clean
	I0927 01:10:52.114505   51589 command_runner.go:130] > BuildDate:      2024-09-23T21:42:27Z
	I0927 01:10:52.114511   51589 command_runner.go:130] > GoVersion:      go1.21.6
	I0927 01:10:52.114516   51589 command_runner.go:130] > Compiler:       gc
	I0927 01:10:52.114523   51589 command_runner.go:130] > Platform:       linux/amd64
	I0927 01:10:52.114528   51589 command_runner.go:130] > Linkmode:       dynamic
	I0927 01:10:52.114535   51589 command_runner.go:130] > BuildTags:      
	I0927 01:10:52.114542   51589 command_runner.go:130] >   containers_image_ostree_stub
	I0927 01:10:52.114548   51589 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0927 01:10:52.114553   51589 command_runner.go:130] >   btrfs_noversion
	I0927 01:10:52.114560   51589 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0927 01:10:52.114568   51589 command_runner.go:130] >   libdm_no_deferred_remove
	I0927 01:10:52.114589   51589 command_runner.go:130] >   seccomp
	I0927 01:10:52.114598   51589 command_runner.go:130] > LDFlags:          unknown
	I0927 01:10:52.114602   51589 command_runner.go:130] > SeccompEnabled:   true
	I0927 01:10:52.114606   51589 command_runner.go:130] > AppArmorEnabled:  false
	I0927 01:10:52.114666   51589 ssh_runner.go:195] Run: crio --version
	I0927 01:10:52.144486   51589 command_runner.go:130] > crio version 1.29.1
	I0927 01:10:52.144515   51589 command_runner.go:130] > Version:        1.29.1
	I0927 01:10:52.144523   51589 command_runner.go:130] > GitCommit:      unknown
	I0927 01:10:52.144528   51589 command_runner.go:130] > GitCommitDate:  unknown
	I0927 01:10:52.144534   51589 command_runner.go:130] > GitTreeState:   clean
	I0927 01:10:52.144542   51589 command_runner.go:130] > BuildDate:      2024-09-23T21:42:27Z
	I0927 01:10:52.144547   51589 command_runner.go:130] > GoVersion:      go1.21.6
	I0927 01:10:52.144553   51589 command_runner.go:130] > Compiler:       gc
	I0927 01:10:52.144559   51589 command_runner.go:130] > Platform:       linux/amd64
	I0927 01:10:52.144564   51589 command_runner.go:130] > Linkmode:       dynamic
	I0927 01:10:52.144571   51589 command_runner.go:130] > BuildTags:      
	I0927 01:10:52.144578   51589 command_runner.go:130] >   containers_image_ostree_stub
	I0927 01:10:52.144584   51589 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0927 01:10:52.144590   51589 command_runner.go:130] >   btrfs_noversion
	I0927 01:10:52.144599   51589 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0927 01:10:52.144607   51589 command_runner.go:130] >   libdm_no_deferred_remove
	I0927 01:10:52.144619   51589 command_runner.go:130] >   seccomp
	I0927 01:10:52.144627   51589 command_runner.go:130] > LDFlags:          unknown
	I0927 01:10:52.144635   51589 command_runner.go:130] > SeccompEnabled:   true
	I0927 01:10:52.144644   51589 command_runner.go:130] > AppArmorEnabled:  false
	I0927 01:10:52.146784   51589 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0927 01:10:52.148124   51589 main.go:141] libmachine: (multinode-833343) Calling .GetIP
	I0927 01:10:52.150915   51589 main.go:141] libmachine: (multinode-833343) DBG | domain multinode-833343 has defined MAC address 52:54:00:d6:02:23 in network mk-multinode-833343
	I0927 01:10:52.151254   51589 main.go:141] libmachine: (multinode-833343) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:02:23", ip: ""} in network mk-multinode-833343: {Iface:virbr1 ExpiryTime:2024-09-27 02:03:44 +0000 UTC Type:0 Mac:52:54:00:d6:02:23 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-833343 Clientid:01:52:54:00:d6:02:23}
	I0927 01:10:52.151283   51589 main.go:141] libmachine: (multinode-833343) DBG | domain multinode-833343 has defined IP address 192.168.39.203 and MAC address 52:54:00:d6:02:23 in network mk-multinode-833343
	I0927 01:10:52.151523   51589 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0927 01:10:52.156238   51589 command_runner.go:130] > 192.168.39.1	host.minikube.internal
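The grep above confirms the guest resolves host.minikube.internal to the host-side gateway of the KVM network (192.168.39.1 in this run). When the entry is missing, an idempotent fix is a conditional append, as sketched below; the IP comes from this run's mk-multinode-833343 network and will differ elsewhere.

	grep -q 'host.minikube.internal' /etc/hosts || \
	  printf '%s\t%s\n' 192.168.39.1 host.minikube.internal | sudo tee -a /etc/hosts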
	I0927 01:10:52.156417   51589 kubeadm.go:883] updating cluster {Name:multinode-833343 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
31.1 ClusterName:multinode-833343 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.123 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.88 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadge
t:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fa
lse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 01:10:52.156607   51589 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 01:10:52.156667   51589 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 01:10:52.198528   51589 command_runner.go:130] > {
	I0927 01:10:52.198554   51589 command_runner.go:130] >   "images": [
	I0927 01:10:52.198564   51589 command_runner.go:130] >     {
	I0927 01:10:52.198575   51589 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0927 01:10:52.198581   51589 command_runner.go:130] >       "repoTags": [
	I0927 01:10:52.198589   51589 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0927 01:10:52.198593   51589 command_runner.go:130] >       ],
	I0927 01:10:52.198599   51589 command_runner.go:130] >       "repoDigests": [
	I0927 01:10:52.198621   51589 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0927 01:10:52.198635   51589 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0927 01:10:52.198641   51589 command_runner.go:130] >       ],
	I0927 01:10:52.198647   51589 command_runner.go:130] >       "size": "87190579",
	I0927 01:10:52.198657   51589 command_runner.go:130] >       "uid": null,
	I0927 01:10:52.198664   51589 command_runner.go:130] >       "username": "",
	I0927 01:10:52.198676   51589 command_runner.go:130] >       "spec": null,
	I0927 01:10:52.198685   51589 command_runner.go:130] >       "pinned": false
	I0927 01:10:52.198694   51589 command_runner.go:130] >     },
	I0927 01:10:52.198703   51589 command_runner.go:130] >     {
	I0927 01:10:52.198713   51589 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0927 01:10:52.198722   51589 command_runner.go:130] >       "repoTags": [
	I0927 01:10:52.198728   51589 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0927 01:10:52.198734   51589 command_runner.go:130] >       ],
	I0927 01:10:52.198738   51589 command_runner.go:130] >       "repoDigests": [
	I0927 01:10:52.198746   51589 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0927 01:10:52.198757   51589 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0927 01:10:52.198765   51589 command_runner.go:130] >       ],
	I0927 01:10:52.198774   51589 command_runner.go:130] >       "size": "1363676",
	I0927 01:10:52.198783   51589 command_runner.go:130] >       "uid": null,
	I0927 01:10:52.198794   51589 command_runner.go:130] >       "username": "",
	I0927 01:10:52.198804   51589 command_runner.go:130] >       "spec": null,
	I0927 01:10:52.198813   51589 command_runner.go:130] >       "pinned": false
	I0927 01:10:52.198821   51589 command_runner.go:130] >     },
	I0927 01:10:52.198830   51589 command_runner.go:130] >     {
	I0927 01:10:52.198843   51589 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0927 01:10:52.198852   51589 command_runner.go:130] >       "repoTags": [
	I0927 01:10:52.198864   51589 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0927 01:10:52.198872   51589 command_runner.go:130] >       ],
	I0927 01:10:52.198881   51589 command_runner.go:130] >       "repoDigests": [
	I0927 01:10:52.198892   51589 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0927 01:10:52.198907   51589 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0927 01:10:52.198916   51589 command_runner.go:130] >       ],
	I0927 01:10:52.198926   51589 command_runner.go:130] >       "size": "31470524",
	I0927 01:10:52.198934   51589 command_runner.go:130] >       "uid": null,
	I0927 01:10:52.198941   51589 command_runner.go:130] >       "username": "",
	I0927 01:10:52.198945   51589 command_runner.go:130] >       "spec": null,
	I0927 01:10:52.198951   51589 command_runner.go:130] >       "pinned": false
	I0927 01:10:52.198955   51589 command_runner.go:130] >     },
	I0927 01:10:52.198959   51589 command_runner.go:130] >     {
	I0927 01:10:52.198965   51589 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0927 01:10:52.198971   51589 command_runner.go:130] >       "repoTags": [
	I0927 01:10:52.198975   51589 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0927 01:10:52.198981   51589 command_runner.go:130] >       ],
	I0927 01:10:52.198985   51589 command_runner.go:130] >       "repoDigests": [
	I0927 01:10:52.198995   51589 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0927 01:10:52.199007   51589 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0927 01:10:52.199012   51589 command_runner.go:130] >       ],
	I0927 01:10:52.199016   51589 command_runner.go:130] >       "size": "63273227",
	I0927 01:10:52.199023   51589 command_runner.go:130] >       "uid": null,
	I0927 01:10:52.199027   51589 command_runner.go:130] >       "username": "nonroot",
	I0927 01:10:52.199033   51589 command_runner.go:130] >       "spec": null,
	I0927 01:10:52.199038   51589 command_runner.go:130] >       "pinned": false
	I0927 01:10:52.199044   51589 command_runner.go:130] >     },
	I0927 01:10:52.199048   51589 command_runner.go:130] >     {
	I0927 01:10:52.199056   51589 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0927 01:10:52.199062   51589 command_runner.go:130] >       "repoTags": [
	I0927 01:10:52.199066   51589 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0927 01:10:52.199072   51589 command_runner.go:130] >       ],
	I0927 01:10:52.199076   51589 command_runner.go:130] >       "repoDigests": [
	I0927 01:10:52.199085   51589 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0927 01:10:52.199094   51589 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0927 01:10:52.199099   51589 command_runner.go:130] >       ],
	I0927 01:10:52.199104   51589 command_runner.go:130] >       "size": "149009664",
	I0927 01:10:52.199109   51589 command_runner.go:130] >       "uid": {
	I0927 01:10:52.199113   51589 command_runner.go:130] >         "value": "0"
	I0927 01:10:52.199118   51589 command_runner.go:130] >       },
	I0927 01:10:52.199121   51589 command_runner.go:130] >       "username": "",
	I0927 01:10:52.199125   51589 command_runner.go:130] >       "spec": null,
	I0927 01:10:52.199139   51589 command_runner.go:130] >       "pinned": false
	I0927 01:10:52.199143   51589 command_runner.go:130] >     },
	I0927 01:10:52.199146   51589 command_runner.go:130] >     {
	I0927 01:10:52.199152   51589 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0927 01:10:52.199158   51589 command_runner.go:130] >       "repoTags": [
	I0927 01:10:52.199164   51589 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0927 01:10:52.199170   51589 command_runner.go:130] >       ],
	I0927 01:10:52.199174   51589 command_runner.go:130] >       "repoDigests": [
	I0927 01:10:52.199183   51589 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0927 01:10:52.199192   51589 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0927 01:10:52.199197   51589 command_runner.go:130] >       ],
	I0927 01:10:52.199202   51589 command_runner.go:130] >       "size": "95237600",
	I0927 01:10:52.199207   51589 command_runner.go:130] >       "uid": {
	I0927 01:10:52.199212   51589 command_runner.go:130] >         "value": "0"
	I0927 01:10:52.199218   51589 command_runner.go:130] >       },
	I0927 01:10:52.199222   51589 command_runner.go:130] >       "username": "",
	I0927 01:10:52.199229   51589 command_runner.go:130] >       "spec": null,
	I0927 01:10:52.199233   51589 command_runner.go:130] >       "pinned": false
	I0927 01:10:52.199239   51589 command_runner.go:130] >     },
	I0927 01:10:52.199242   51589 command_runner.go:130] >     {
	I0927 01:10:52.199248   51589 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0927 01:10:52.199254   51589 command_runner.go:130] >       "repoTags": [
	I0927 01:10:52.199259   51589 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0927 01:10:52.199262   51589 command_runner.go:130] >       ],
	I0927 01:10:52.199266   51589 command_runner.go:130] >       "repoDigests": [
	I0927 01:10:52.199278   51589 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0927 01:10:52.199288   51589 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0927 01:10:52.199293   51589 command_runner.go:130] >       ],
	I0927 01:10:52.199298   51589 command_runner.go:130] >       "size": "89437508",
	I0927 01:10:52.199314   51589 command_runner.go:130] >       "uid": {
	I0927 01:10:52.199323   51589 command_runner.go:130] >         "value": "0"
	I0927 01:10:52.199332   51589 command_runner.go:130] >       },
	I0927 01:10:52.199339   51589 command_runner.go:130] >       "username": "",
	I0927 01:10:52.199343   51589 command_runner.go:130] >       "spec": null,
	I0927 01:10:52.199348   51589 command_runner.go:130] >       "pinned": false
	I0927 01:10:52.199351   51589 command_runner.go:130] >     },
	I0927 01:10:52.199356   51589 command_runner.go:130] >     {
	I0927 01:10:52.199362   51589 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0927 01:10:52.199368   51589 command_runner.go:130] >       "repoTags": [
	I0927 01:10:52.199373   51589 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0927 01:10:52.199377   51589 command_runner.go:130] >       ],
	I0927 01:10:52.199384   51589 command_runner.go:130] >       "repoDigests": [
	I0927 01:10:52.199399   51589 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0927 01:10:52.199408   51589 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0927 01:10:52.199412   51589 command_runner.go:130] >       ],
	I0927 01:10:52.199418   51589 command_runner.go:130] >       "size": "92733849",
	I0927 01:10:52.199422   51589 command_runner.go:130] >       "uid": null,
	I0927 01:10:52.199429   51589 command_runner.go:130] >       "username": "",
	I0927 01:10:52.199433   51589 command_runner.go:130] >       "spec": null,
	I0927 01:10:52.199437   51589 command_runner.go:130] >       "pinned": false
	I0927 01:10:52.199440   51589 command_runner.go:130] >     },
	I0927 01:10:52.199442   51589 command_runner.go:130] >     {
	I0927 01:10:52.199448   51589 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0927 01:10:52.199452   51589 command_runner.go:130] >       "repoTags": [
	I0927 01:10:52.199457   51589 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0927 01:10:52.199460   51589 command_runner.go:130] >       ],
	I0927 01:10:52.199465   51589 command_runner.go:130] >       "repoDigests": [
	I0927 01:10:52.199476   51589 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0927 01:10:52.199490   51589 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0927 01:10:52.199501   51589 command_runner.go:130] >       ],
	I0927 01:10:52.199510   51589 command_runner.go:130] >       "size": "68420934",
	I0927 01:10:52.199519   51589 command_runner.go:130] >       "uid": {
	I0927 01:10:52.199528   51589 command_runner.go:130] >         "value": "0"
	I0927 01:10:52.199535   51589 command_runner.go:130] >       },
	I0927 01:10:52.199540   51589 command_runner.go:130] >       "username": "",
	I0927 01:10:52.199549   51589 command_runner.go:130] >       "spec": null,
	I0927 01:10:52.199558   51589 command_runner.go:130] >       "pinned": false
	I0927 01:10:52.199566   51589 command_runner.go:130] >     },
	I0927 01:10:52.199573   51589 command_runner.go:130] >     {
	I0927 01:10:52.199586   51589 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0927 01:10:52.199593   51589 command_runner.go:130] >       "repoTags": [
	I0927 01:10:52.199598   51589 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0927 01:10:52.199604   51589 command_runner.go:130] >       ],
	I0927 01:10:52.199609   51589 command_runner.go:130] >       "repoDigests": [
	I0927 01:10:52.199617   51589 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0927 01:10:52.199627   51589 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0927 01:10:52.199632   51589 command_runner.go:130] >       ],
	I0927 01:10:52.199636   51589 command_runner.go:130] >       "size": "742080",
	I0927 01:10:52.199642   51589 command_runner.go:130] >       "uid": {
	I0927 01:10:52.199647   51589 command_runner.go:130] >         "value": "65535"
	I0927 01:10:52.199653   51589 command_runner.go:130] >       },
	I0927 01:10:52.199657   51589 command_runner.go:130] >       "username": "",
	I0927 01:10:52.199663   51589 command_runner.go:130] >       "spec": null,
	I0927 01:10:52.199666   51589 command_runner.go:130] >       "pinned": true
	I0927 01:10:52.199670   51589 command_runner.go:130] >     }
	I0927 01:10:52.199675   51589 command_runner.go:130] >   ]
	I0927 01:10:52.199678   51589 command_runner.go:130] > }
	I0927 01:10:52.199893   51589 crio.go:514] all images are preloaded for cri-o runtime.
	I0927 01:10:52.199909   51589 crio.go:433] Images already preloaded, skipping extraction
	I0927 01:10:52.199959   51589 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 01:10:52.241536   51589 command_runner.go:130] > {
	I0927 01:10:52.241556   51589 command_runner.go:130] >   "images": [
	I0927 01:10:52.241560   51589 command_runner.go:130] >     {
	I0927 01:10:52.241570   51589 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0927 01:10:52.241577   51589 command_runner.go:130] >       "repoTags": [
	I0927 01:10:52.241586   51589 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0927 01:10:52.241591   51589 command_runner.go:130] >       ],
	I0927 01:10:52.241598   51589 command_runner.go:130] >       "repoDigests": [
	I0927 01:10:52.241610   51589 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0927 01:10:52.241621   51589 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0927 01:10:52.241626   51589 command_runner.go:130] >       ],
	I0927 01:10:52.241632   51589 command_runner.go:130] >       "size": "87190579",
	I0927 01:10:52.241639   51589 command_runner.go:130] >       "uid": null,
	I0927 01:10:52.241649   51589 command_runner.go:130] >       "username": "",
	I0927 01:10:52.241659   51589 command_runner.go:130] >       "spec": null,
	I0927 01:10:52.241669   51589 command_runner.go:130] >       "pinned": false
	I0927 01:10:52.241678   51589 command_runner.go:130] >     },
	I0927 01:10:52.241683   51589 command_runner.go:130] >     {
	I0927 01:10:52.241694   51589 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0927 01:10:52.241701   51589 command_runner.go:130] >       "repoTags": [
	I0927 01:10:52.241707   51589 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0927 01:10:52.241716   51589 command_runner.go:130] >       ],
	I0927 01:10:52.241722   51589 command_runner.go:130] >       "repoDigests": [
	I0927 01:10:52.241736   51589 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0927 01:10:52.241751   51589 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0927 01:10:52.241760   51589 command_runner.go:130] >       ],
	I0927 01:10:52.241767   51589 command_runner.go:130] >       "size": "1363676",
	I0927 01:10:52.241776   51589 command_runner.go:130] >       "uid": null,
	I0927 01:10:52.241785   51589 command_runner.go:130] >       "username": "",
	I0927 01:10:52.241792   51589 command_runner.go:130] >       "spec": null,
	I0927 01:10:52.241799   51589 command_runner.go:130] >       "pinned": false
	I0927 01:10:52.241804   51589 command_runner.go:130] >     },
	I0927 01:10:52.241807   51589 command_runner.go:130] >     {
	I0927 01:10:52.241813   51589 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0927 01:10:52.241819   51589 command_runner.go:130] >       "repoTags": [
	I0927 01:10:52.241824   51589 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0927 01:10:52.241828   51589 command_runner.go:130] >       ],
	I0927 01:10:52.241834   51589 command_runner.go:130] >       "repoDigests": [
	I0927 01:10:52.241846   51589 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0927 01:10:52.241860   51589 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0927 01:10:52.241865   51589 command_runner.go:130] >       ],
	I0927 01:10:52.241874   51589 command_runner.go:130] >       "size": "31470524",
	I0927 01:10:52.241881   51589 command_runner.go:130] >       "uid": null,
	I0927 01:10:52.241890   51589 command_runner.go:130] >       "username": "",
	I0927 01:10:52.241897   51589 command_runner.go:130] >       "spec": null,
	I0927 01:10:52.241907   51589 command_runner.go:130] >       "pinned": false
	I0927 01:10:52.241912   51589 command_runner.go:130] >     },
	I0927 01:10:52.241921   51589 command_runner.go:130] >     {
	I0927 01:10:52.241930   51589 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0927 01:10:52.241936   51589 command_runner.go:130] >       "repoTags": [
	I0927 01:10:52.241941   51589 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0927 01:10:52.241945   51589 command_runner.go:130] >       ],
	I0927 01:10:52.241949   51589 command_runner.go:130] >       "repoDigests": [
	I0927 01:10:52.241958   51589 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0927 01:10:52.241969   51589 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0927 01:10:52.241975   51589 command_runner.go:130] >       ],
	I0927 01:10:52.241978   51589 command_runner.go:130] >       "size": "63273227",
	I0927 01:10:52.241982   51589 command_runner.go:130] >       "uid": null,
	I0927 01:10:52.241986   51589 command_runner.go:130] >       "username": "nonroot",
	I0927 01:10:52.241991   51589 command_runner.go:130] >       "spec": null,
	I0927 01:10:52.241994   51589 command_runner.go:130] >       "pinned": false
	I0927 01:10:52.241998   51589 command_runner.go:130] >     },
	I0927 01:10:52.242004   51589 command_runner.go:130] >     {
	I0927 01:10:52.242012   51589 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0927 01:10:52.242016   51589 command_runner.go:130] >       "repoTags": [
	I0927 01:10:52.242023   51589 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0927 01:10:52.242027   51589 command_runner.go:130] >       ],
	I0927 01:10:52.242031   51589 command_runner.go:130] >       "repoDigests": [
	I0927 01:10:52.242037   51589 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0927 01:10:52.242046   51589 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0927 01:10:52.242051   51589 command_runner.go:130] >       ],
	I0927 01:10:52.242055   51589 command_runner.go:130] >       "size": "149009664",
	I0927 01:10:52.242061   51589 command_runner.go:130] >       "uid": {
	I0927 01:10:52.242065   51589 command_runner.go:130] >         "value": "0"
	I0927 01:10:52.242071   51589 command_runner.go:130] >       },
	I0927 01:10:52.242075   51589 command_runner.go:130] >       "username": "",
	I0927 01:10:52.242081   51589 command_runner.go:130] >       "spec": null,
	I0927 01:10:52.242085   51589 command_runner.go:130] >       "pinned": false
	I0927 01:10:52.242088   51589 command_runner.go:130] >     },
	I0927 01:10:52.242091   51589 command_runner.go:130] >     {
	I0927 01:10:52.242097   51589 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0927 01:10:52.242103   51589 command_runner.go:130] >       "repoTags": [
	I0927 01:10:52.242108   51589 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0927 01:10:52.242113   51589 command_runner.go:130] >       ],
	I0927 01:10:52.242117   51589 command_runner.go:130] >       "repoDigests": [
	I0927 01:10:52.242130   51589 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0927 01:10:52.242139   51589 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0927 01:10:52.242145   51589 command_runner.go:130] >       ],
	I0927 01:10:52.242150   51589 command_runner.go:130] >       "size": "95237600",
	I0927 01:10:52.242156   51589 command_runner.go:130] >       "uid": {
	I0927 01:10:52.242160   51589 command_runner.go:130] >         "value": "0"
	I0927 01:10:52.242166   51589 command_runner.go:130] >       },
	I0927 01:10:52.242170   51589 command_runner.go:130] >       "username": "",
	I0927 01:10:52.242175   51589 command_runner.go:130] >       "spec": null,
	I0927 01:10:52.242179   51589 command_runner.go:130] >       "pinned": false
	I0927 01:10:52.242185   51589 command_runner.go:130] >     },
	I0927 01:10:52.242189   51589 command_runner.go:130] >     {
	I0927 01:10:52.242197   51589 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0927 01:10:52.242203   51589 command_runner.go:130] >       "repoTags": [
	I0927 01:10:52.242209   51589 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0927 01:10:52.242214   51589 command_runner.go:130] >       ],
	I0927 01:10:52.242217   51589 command_runner.go:130] >       "repoDigests": [
	I0927 01:10:52.242227   51589 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0927 01:10:52.242236   51589 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0927 01:10:52.242242   51589 command_runner.go:130] >       ],
	I0927 01:10:52.242246   51589 command_runner.go:130] >       "size": "89437508",
	I0927 01:10:52.242251   51589 command_runner.go:130] >       "uid": {
	I0927 01:10:52.242255   51589 command_runner.go:130] >         "value": "0"
	I0927 01:10:52.242261   51589 command_runner.go:130] >       },
	I0927 01:10:52.242265   51589 command_runner.go:130] >       "username": "",
	I0927 01:10:52.242271   51589 command_runner.go:130] >       "spec": null,
	I0927 01:10:52.242275   51589 command_runner.go:130] >       "pinned": false
	I0927 01:10:52.242280   51589 command_runner.go:130] >     },
	I0927 01:10:52.242283   51589 command_runner.go:130] >     {
	I0927 01:10:52.242291   51589 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0927 01:10:52.242297   51589 command_runner.go:130] >       "repoTags": [
	I0927 01:10:52.242301   51589 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0927 01:10:52.242307   51589 command_runner.go:130] >       ],
	I0927 01:10:52.242311   51589 command_runner.go:130] >       "repoDigests": [
	I0927 01:10:52.242326   51589 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0927 01:10:52.242334   51589 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0927 01:10:52.242340   51589 command_runner.go:130] >       ],
	I0927 01:10:52.242344   51589 command_runner.go:130] >       "size": "92733849",
	I0927 01:10:52.242349   51589 command_runner.go:130] >       "uid": null,
	I0927 01:10:52.242353   51589 command_runner.go:130] >       "username": "",
	I0927 01:10:52.242359   51589 command_runner.go:130] >       "spec": null,
	I0927 01:10:52.242363   51589 command_runner.go:130] >       "pinned": false
	I0927 01:10:52.242368   51589 command_runner.go:130] >     },
	I0927 01:10:52.242373   51589 command_runner.go:130] >     {
	I0927 01:10:52.242381   51589 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0927 01:10:52.242385   51589 command_runner.go:130] >       "repoTags": [
	I0927 01:10:52.242392   51589 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0927 01:10:52.242395   51589 command_runner.go:130] >       ],
	I0927 01:10:52.242399   51589 command_runner.go:130] >       "repoDigests": [
	I0927 01:10:52.242406   51589 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0927 01:10:52.242416   51589 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0927 01:10:52.242422   51589 command_runner.go:130] >       ],
	I0927 01:10:52.242426   51589 command_runner.go:130] >       "size": "68420934",
	I0927 01:10:52.242432   51589 command_runner.go:130] >       "uid": {
	I0927 01:10:52.242436   51589 command_runner.go:130] >         "value": "0"
	I0927 01:10:52.242441   51589 command_runner.go:130] >       },
	I0927 01:10:52.242444   51589 command_runner.go:130] >       "username": "",
	I0927 01:10:52.242450   51589 command_runner.go:130] >       "spec": null,
	I0927 01:10:52.242454   51589 command_runner.go:130] >       "pinned": false
	I0927 01:10:52.242459   51589 command_runner.go:130] >     },
	I0927 01:10:52.242463   51589 command_runner.go:130] >     {
	I0927 01:10:52.242471   51589 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0927 01:10:52.242477   51589 command_runner.go:130] >       "repoTags": [
	I0927 01:10:52.242482   51589 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0927 01:10:52.242488   51589 command_runner.go:130] >       ],
	I0927 01:10:52.242491   51589 command_runner.go:130] >       "repoDigests": [
	I0927 01:10:52.242500   51589 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0927 01:10:52.242509   51589 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0927 01:10:52.242515   51589 command_runner.go:130] >       ],
	I0927 01:10:52.242519   51589 command_runner.go:130] >       "size": "742080",
	I0927 01:10:52.242525   51589 command_runner.go:130] >       "uid": {
	I0927 01:10:52.242529   51589 command_runner.go:130] >         "value": "65535"
	I0927 01:10:52.242534   51589 command_runner.go:130] >       },
	I0927 01:10:52.242538   51589 command_runner.go:130] >       "username": "",
	I0927 01:10:52.242544   51589 command_runner.go:130] >       "spec": null,
	I0927 01:10:52.242547   51589 command_runner.go:130] >       "pinned": true
	I0927 01:10:52.242562   51589 command_runner.go:130] >     }
	I0927 01:10:52.242567   51589 command_runner.go:130] >   ]
	I0927 01:10:52.242571   51589 command_runner.go:130] > }
	I0927 01:10:52.242678   51589 crio.go:514] all images are preloaded for cri-o runtime.
	I0927 01:10:52.242688   51589 cache_images.go:84] Images are preloaded, skipping loading
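Both `sudo crictl images --output json` dumps above are what crio.go parses to decide that the v1.31.1 preload is already present. A compact way to eyeball the same data without reading raw JSON is sketched below; jq is an assumption on the side running the check, it is not part of the guest image.

	# Human-readable table straight from crictl:
	sudo crictl images
	# Or, closer to what the log parses, filter the JSON:
	sudo crictl images --output json | jq -r '.images[].repoTags[]'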
	I0927 01:10:52.242695   51589 kubeadm.go:934] updating node { 192.168.39.203 8443 v1.31.1 crio true true} ...
	I0927 01:10:52.242796   51589 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-833343 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.203
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-833343 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
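The kubelet fragment logged above (Wants=crio.service plus the overridden ExecStart carrying --hostname-override and --node-ip) is what minikube installs as a systemd drop-in before invoking kubeadm. Written out by hand it would look roughly like the sketch below; the drop-in path /etc/systemd/system/kubelet.service.d/10-kubeadm.conf is an assumption, since the log does not show where the fragment is written.

	sudo mkdir -p /etc/systemd/system/kubelet.service.d
	# Sketch: drop-in path is assumed, content mirrors the fragment in the log.
	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
	[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-833343 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.203

	[Install]
	EOF
	sudo systemctl daemon-reload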
	I0927 01:10:52.242858   51589 ssh_runner.go:195] Run: crio config
	I0927 01:10:52.286670   51589 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0927 01:10:52.286695   51589 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0927 01:10:52.286702   51589 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0927 01:10:52.286708   51589 command_runner.go:130] > #
	I0927 01:10:52.286715   51589 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0927 01:10:52.286724   51589 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0927 01:10:52.286732   51589 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0927 01:10:52.286748   51589 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0927 01:10:52.286753   51589 command_runner.go:130] > # reload'.
	I0927 01:10:52.286761   51589 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0927 01:10:52.286770   51589 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0927 01:10:52.286777   51589 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0927 01:10:52.286783   51589 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0927 01:10:52.286788   51589 command_runner.go:130] > [crio]
	I0927 01:10:52.286802   51589 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0927 01:10:52.286816   51589 command_runner.go:130] > # containers images, in this directory.
	I0927 01:10:52.286823   51589 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0927 01:10:52.286838   51589 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0927 01:10:52.286847   51589 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0927 01:10:52.286858   51589 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0927 01:10:52.286870   51589 command_runner.go:130] > # imagestore = ""
	I0927 01:10:52.286880   51589 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0927 01:10:52.286890   51589 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0927 01:10:52.286899   51589 command_runner.go:130] > storage_driver = "overlay"
	I0927 01:10:52.286906   51589 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0927 01:10:52.286912   51589 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0927 01:10:52.286919   51589 command_runner.go:130] > storage_option = [
	I0927 01:10:52.286930   51589 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0927 01:10:52.286936   51589 command_runner.go:130] > ]
	I0927 01:10:52.286946   51589 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0927 01:10:52.286956   51589 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0927 01:10:52.286964   51589 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0927 01:10:52.286975   51589 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0927 01:10:52.286985   51589 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0927 01:10:52.286993   51589 command_runner.go:130] > # always happen on a node reboot
	I0927 01:10:52.286998   51589 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0927 01:10:52.287016   51589 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0927 01:10:52.287028   51589 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0927 01:10:52.287036   51589 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0927 01:10:52.287047   51589 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0927 01:10:52.287059   51589 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0927 01:10:52.287073   51589 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0927 01:10:52.287080   51589 command_runner.go:130] > # internal_wipe = true
	I0927 01:10:52.287097   51589 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0927 01:10:52.287108   51589 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0927 01:10:52.287115   51589 command_runner.go:130] > # internal_repair = false
	I0927 01:10:52.287137   51589 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0927 01:10:52.287150   51589 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0927 01:10:52.287162   51589 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0927 01:10:52.287171   51589 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0927 01:10:52.287183   51589 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0927 01:10:52.287192   51589 command_runner.go:130] > [crio.api]
	I0927 01:10:52.287201   51589 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0927 01:10:52.287212   51589 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0927 01:10:52.287221   51589 command_runner.go:130] > # IP address on which the stream server will listen.
	I0927 01:10:52.287231   51589 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0927 01:10:52.287242   51589 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0927 01:10:52.287253   51589 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0927 01:10:52.287263   51589 command_runner.go:130] > # stream_port = "0"
	I0927 01:10:52.287272   51589 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0927 01:10:52.287280   51589 command_runner.go:130] > # stream_enable_tls = false
	I0927 01:10:52.287286   51589 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0927 01:10:52.287294   51589 command_runner.go:130] > # stream_idle_timeout = ""
	I0927 01:10:52.287313   51589 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0927 01:10:52.287327   51589 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0927 01:10:52.287333   51589 command_runner.go:130] > # minutes.
	I0927 01:10:52.287340   51589 command_runner.go:130] > # stream_tls_cert = ""
	I0927 01:10:52.287352   51589 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0927 01:10:52.287364   51589 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0927 01:10:52.287372   51589 command_runner.go:130] > # stream_tls_key = ""
	I0927 01:10:52.287378   51589 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0927 01:10:52.287386   51589 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0927 01:10:52.287404   51589 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0927 01:10:52.287410   51589 command_runner.go:130] > # stream_tls_ca = ""
	I0927 01:10:52.287420   51589 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0927 01:10:52.287432   51589 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0927 01:10:52.287442   51589 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0927 01:10:52.287451   51589 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0927 01:10:52.287458   51589 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0927 01:10:52.287469   51589 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0927 01:10:52.287474   51589 command_runner.go:130] > [crio.runtime]
	I0927 01:10:52.287485   51589 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0927 01:10:52.287496   51589 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0927 01:10:52.287503   51589 command_runner.go:130] > # "nofile=1024:2048"
	I0927 01:10:52.287512   51589 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0927 01:10:52.287522   51589 command_runner.go:130] > # default_ulimits = [
	I0927 01:10:52.287528   51589 command_runner.go:130] > # ]
	I0927 01:10:52.287540   51589 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0927 01:10:52.287549   51589 command_runner.go:130] > # no_pivot = false
	I0927 01:10:52.287560   51589 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0927 01:10:52.287573   51589 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0927 01:10:52.287583   51589 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0927 01:10:52.287593   51589 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0927 01:10:52.287604   51589 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0927 01:10:52.287618   51589 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0927 01:10:52.287630   51589 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0927 01:10:52.287638   51589 command_runner.go:130] > # Cgroup setting for conmon
	I0927 01:10:52.287650   51589 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0927 01:10:52.287659   51589 command_runner.go:130] > conmon_cgroup = "pod"
	I0927 01:10:52.287669   51589 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0927 01:10:52.287680   51589 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0927 01:10:52.287694   51589 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0927 01:10:52.287703   51589 command_runner.go:130] > conmon_env = [
	I0927 01:10:52.287715   51589 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0927 01:10:52.287723   51589 command_runner.go:130] > ]
	I0927 01:10:52.287730   51589 command_runner.go:130] > # Additional environment variables to set for all the
	I0927 01:10:52.287741   51589 command_runner.go:130] > # containers. These are overridden if set in the
	I0927 01:10:52.287751   51589 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0927 01:10:52.287759   51589 command_runner.go:130] > # default_env = [
	I0927 01:10:52.287765   51589 command_runner.go:130] > # ]
	I0927 01:10:52.287777   51589 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0927 01:10:52.287791   51589 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0927 01:10:52.287799   51589 command_runner.go:130] > # selinux = false
	I0927 01:10:52.287808   51589 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0927 01:10:52.287821   51589 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0927 01:10:52.287832   51589 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0927 01:10:52.287841   51589 command_runner.go:130] > # seccomp_profile = ""
	I0927 01:10:52.287850   51589 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0927 01:10:52.287862   51589 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0927 01:10:52.287875   51589 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0927 01:10:52.287884   51589 command_runner.go:130] > # which might increase security.
	I0927 01:10:52.287892   51589 command_runner.go:130] > # This option is currently deprecated,
	I0927 01:10:52.287903   51589 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0927 01:10:52.287911   51589 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0927 01:10:52.287930   51589 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0927 01:10:52.287945   51589 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0927 01:10:52.287956   51589 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0927 01:10:52.287968   51589 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0927 01:10:52.287980   51589 command_runner.go:130] > # This option supports live configuration reload.
	I0927 01:10:52.287991   51589 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0927 01:10:52.288000   51589 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0927 01:10:52.288010   51589 command_runner.go:130] > # the cgroup blockio controller.
	I0927 01:10:52.288016   51589 command_runner.go:130] > # blockio_config_file = ""
	I0927 01:10:52.288029   51589 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0927 01:10:52.288038   51589 command_runner.go:130] > # blockio parameters.
	I0927 01:10:52.288045   51589 command_runner.go:130] > # blockio_reload = false
	I0927 01:10:52.288058   51589 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0927 01:10:52.288067   51589 command_runner.go:130] > # irqbalance daemon.
	I0927 01:10:52.288075   51589 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0927 01:10:52.288087   51589 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0927 01:10:52.288097   51589 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0927 01:10:52.288110   51589 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0927 01:10:52.288122   51589 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0927 01:10:52.288141   51589 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0927 01:10:52.288151   51589 command_runner.go:130] > # This option supports live configuration reload.
	I0927 01:10:52.288158   51589 command_runner.go:130] > # rdt_config_file = ""
	I0927 01:10:52.288169   51589 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0927 01:10:52.288181   51589 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0927 01:10:52.288206   51589 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0927 01:10:52.288216   51589 command_runner.go:130] > # separate_pull_cgroup = ""
	I0927 01:10:52.288226   51589 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0927 01:10:52.288238   51589 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0927 01:10:52.288247   51589 command_runner.go:130] > # will be added.
	I0927 01:10:52.288255   51589 command_runner.go:130] > # default_capabilities = [
	I0927 01:10:52.288264   51589 command_runner.go:130] > # 	"CHOWN",
	I0927 01:10:52.288271   51589 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0927 01:10:52.288279   51589 command_runner.go:130] > # 	"FSETID",
	I0927 01:10:52.288285   51589 command_runner.go:130] > # 	"FOWNER",
	I0927 01:10:52.288293   51589 command_runner.go:130] > # 	"SETGID",
	I0927 01:10:52.288300   51589 command_runner.go:130] > # 	"SETUID",
	I0927 01:10:52.288309   51589 command_runner.go:130] > # 	"SETPCAP",
	I0927 01:10:52.288316   51589 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0927 01:10:52.288324   51589 command_runner.go:130] > # 	"KILL",
	I0927 01:10:52.288330   51589 command_runner.go:130] > # ]
	I0927 01:10:52.288342   51589 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0927 01:10:52.288355   51589 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0927 01:10:52.288363   51589 command_runner.go:130] > # add_inheritable_capabilities = false
	I0927 01:10:52.288376   51589 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0927 01:10:52.288388   51589 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0927 01:10:52.288394   51589 command_runner.go:130] > default_sysctls = [
	I0927 01:10:52.288405   51589 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0927 01:10:52.288412   51589 command_runner.go:130] > ]
	I0927 01:10:52.288420   51589 command_runner.go:130] > # List of devices on the host that a
	I0927 01:10:52.288431   51589 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0927 01:10:52.288441   51589 command_runner.go:130] > # allowed_devices = [
	I0927 01:10:52.288447   51589 command_runner.go:130] > # 	"/dev/fuse",
	I0927 01:10:52.288454   51589 command_runner.go:130] > # ]
	I0927 01:10:52.288462   51589 command_runner.go:130] > # List of additional devices, specified as
	I0927 01:10:52.288475   51589 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0927 01:10:52.288487   51589 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0927 01:10:52.288506   51589 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0927 01:10:52.288515   51589 command_runner.go:130] > # additional_devices = [
	I0927 01:10:52.288521   51589 command_runner.go:130] > # ]
	I0927 01:10:52.288531   51589 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0927 01:10:52.288538   51589 command_runner.go:130] > # cdi_spec_dirs = [
	I0927 01:10:52.288544   51589 command_runner.go:130] > # 	"/etc/cdi",
	I0927 01:10:52.288555   51589 command_runner.go:130] > # 	"/var/run/cdi",
	I0927 01:10:52.288561   51589 command_runner.go:130] > # ]
	I0927 01:10:52.288571   51589 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0927 01:10:52.288584   51589 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0927 01:10:52.288593   51589 command_runner.go:130] > # Defaults to false.
	I0927 01:10:52.288601   51589 command_runner.go:130] > # device_ownership_from_security_context = false
	I0927 01:10:52.288613   51589 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0927 01:10:52.288625   51589 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0927 01:10:52.288632   51589 command_runner.go:130] > # hooks_dir = [
	I0927 01:10:52.288642   51589 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0927 01:10:52.288648   51589 command_runner.go:130] > # ]
	I0927 01:10:52.288658   51589 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0927 01:10:52.288670   51589 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0927 01:10:52.288681   51589 command_runner.go:130] > # its default mounts from the following two files:
	I0927 01:10:52.288689   51589 command_runner.go:130] > #
	I0927 01:10:52.288698   51589 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0927 01:10:52.288713   51589 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0927 01:10:52.288725   51589 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0927 01:10:52.288731   51589 command_runner.go:130] > #
	I0927 01:10:52.288742   51589 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0927 01:10:52.288755   51589 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0927 01:10:52.288768   51589 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0927 01:10:52.288780   51589 command_runner.go:130] > #      only add mounts it finds in this file.
	I0927 01:10:52.288786   51589 command_runner.go:130] > #
	I0927 01:10:52.288793   51589 command_runner.go:130] > # default_mounts_file = ""
	I0927 01:10:52.288804   51589 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0927 01:10:52.288817   51589 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0927 01:10:52.288828   51589 command_runner.go:130] > pids_limit = 1024
	I0927 01:10:52.288837   51589 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0927 01:10:52.288846   51589 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0927 01:10:52.288860   51589 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0927 01:10:52.288877   51589 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0927 01:10:52.288885   51589 command_runner.go:130] > # log_size_max = -1
	I0927 01:10:52.288896   51589 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0927 01:10:52.288906   51589 command_runner.go:130] > # log_to_journald = false
	I0927 01:10:52.288916   51589 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0927 01:10:52.288925   51589 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0927 01:10:52.288933   51589 command_runner.go:130] > # Path to directory for container attach sockets.
	I0927 01:10:52.288943   51589 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0927 01:10:52.288952   51589 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0927 01:10:52.288961   51589 command_runner.go:130] > # bind_mount_prefix = ""
	I0927 01:10:52.288971   51589 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0927 01:10:52.288980   51589 command_runner.go:130] > # read_only = false
	I0927 01:10:52.288990   51589 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0927 01:10:52.289002   51589 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0927 01:10:52.289009   51589 command_runner.go:130] > # live configuration reload.
	I0927 01:10:52.289019   51589 command_runner.go:130] > # log_level = "info"
	I0927 01:10:52.289027   51589 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0927 01:10:52.289038   51589 command_runner.go:130] > # This option supports live configuration reload.
	I0927 01:10:52.289048   51589 command_runner.go:130] > # log_filter = ""
	I0927 01:10:52.289056   51589 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0927 01:10:52.289068   51589 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0927 01:10:52.289079   51589 command_runner.go:130] > # separated by comma.
	I0927 01:10:52.289090   51589 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0927 01:10:52.289099   51589 command_runner.go:130] > # uid_mappings = ""
	I0927 01:10:52.289108   51589 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0927 01:10:52.289121   51589 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0927 01:10:52.289134   51589 command_runner.go:130] > # separated by comma.
	I0927 01:10:52.289147   51589 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0927 01:10:52.289157   51589 command_runner.go:130] > # gid_mappings = ""
	I0927 01:10:52.289168   51589 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0927 01:10:52.289176   51589 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0927 01:10:52.289186   51589 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0927 01:10:52.289201   51589 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0927 01:10:52.289211   51589 command_runner.go:130] > # minimum_mappable_uid = -1
	I0927 01:10:52.289224   51589 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0927 01:10:52.289236   51589 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0927 01:10:52.289248   51589 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0927 01:10:52.289261   51589 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0927 01:10:52.289271   51589 command_runner.go:130] > # minimum_mappable_gid = -1
	I0927 01:10:52.289281   51589 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0927 01:10:52.289293   51589 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0927 01:10:52.289305   51589 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0927 01:10:52.289315   51589 command_runner.go:130] > # ctr_stop_timeout = 30
	I0927 01:10:52.289325   51589 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0927 01:10:52.289337   51589 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0927 01:10:52.289348   51589 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0927 01:10:52.289359   51589 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0927 01:10:52.289367   51589 command_runner.go:130] > drop_infra_ctr = false
	I0927 01:10:52.289377   51589 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0927 01:10:52.289388   51589 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0927 01:10:52.289402   51589 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0927 01:10:52.289412   51589 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0927 01:10:52.289423   51589 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0927 01:10:52.289434   51589 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0927 01:10:52.289446   51589 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0927 01:10:52.289454   51589 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0927 01:10:52.289459   51589 command_runner.go:130] > # shared_cpuset = ""
	I0927 01:10:52.289472   51589 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0927 01:10:52.289483   51589 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0927 01:10:52.289489   51589 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0927 01:10:52.289503   51589 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0927 01:10:52.289513   51589 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0927 01:10:52.289523   51589 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0927 01:10:52.289535   51589 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0927 01:10:52.289545   51589 command_runner.go:130] > # enable_criu_support = false
	I0927 01:10:52.289551   51589 command_runner.go:130] > # Enable/disable the generation of the container,
	I0927 01:10:52.289563   51589 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0927 01:10:52.289571   51589 command_runner.go:130] > # enable_pod_events = false
	I0927 01:10:52.289583   51589 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0927 01:10:52.289596   51589 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0927 01:10:52.289607   51589 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0927 01:10:52.289616   51589 command_runner.go:130] > # default_runtime = "runc"
	I0927 01:10:52.289624   51589 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0927 01:10:52.289637   51589 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of creating the path as a directory).
	I0927 01:10:52.289649   51589 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0927 01:10:52.289660   51589 command_runner.go:130] > # creation as a file is not desired either.
	I0927 01:10:52.289675   51589 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0927 01:10:52.289686   51589 command_runner.go:130] > # the hostname is being managed dynamically.
	I0927 01:10:52.289695   51589 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0927 01:10:52.289704   51589 command_runner.go:130] > # ]
	I0927 01:10:52.289714   51589 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0927 01:10:52.289724   51589 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0927 01:10:52.289730   51589 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0927 01:10:52.289741   51589 command_runner.go:130] > # Each entry in the table should follow the format:
	I0927 01:10:52.289748   51589 command_runner.go:130] > #
	I0927 01:10:52.289756   51589 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0927 01:10:52.289767   51589 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0927 01:10:52.289800   51589 command_runner.go:130] > # runtime_type = "oci"
	I0927 01:10:52.289810   51589 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0927 01:10:52.289817   51589 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0927 01:10:52.289824   51589 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0927 01:10:52.289829   51589 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0927 01:10:52.289837   51589 command_runner.go:130] > # monitor_env = []
	I0927 01:10:52.289845   51589 command_runner.go:130] > # privileged_without_host_devices = false
	I0927 01:10:52.289855   51589 command_runner.go:130] > # allowed_annotations = []
	I0927 01:10:52.289865   51589 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0927 01:10:52.289874   51589 command_runner.go:130] > # Where:
	I0927 01:10:52.289882   51589 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0927 01:10:52.289895   51589 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0927 01:10:52.289906   51589 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0927 01:10:52.289913   51589 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0927 01:10:52.289919   51589 command_runner.go:130] > #   in $PATH.
	I0927 01:10:52.289932   51589 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0927 01:10:52.289941   51589 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0927 01:10:52.289953   51589 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0927 01:10:52.289961   51589 command_runner.go:130] > #   state.
	I0927 01:10:52.289971   51589 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0927 01:10:52.289983   51589 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0927 01:10:52.289991   51589 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0927 01:10:52.289999   51589 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0927 01:10:52.290012   51589 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0927 01:10:52.290025   51589 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0927 01:10:52.290033   51589 command_runner.go:130] > #   The currently recognized values are:
	I0927 01:10:52.290046   51589 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0927 01:10:52.290061   51589 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0927 01:10:52.290073   51589 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0927 01:10:52.290084   51589 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0927 01:10:52.290095   51589 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0927 01:10:52.290102   51589 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0927 01:10:52.290115   51589 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0927 01:10:52.290132   51589 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0927 01:10:52.290144   51589 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0927 01:10:52.290157   51589 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0927 01:10:52.290167   51589 command_runner.go:130] > #   deprecated option "conmon".
	I0927 01:10:52.290180   51589 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0927 01:10:52.290190   51589 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0927 01:10:52.290200   51589 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0927 01:10:52.290206   51589 command_runner.go:130] > #   should be moved to the container's cgroup
	I0927 01:10:52.290222   51589 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0927 01:10:52.290234   51589 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0927 01:10:52.290246   51589 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0927 01:10:52.290258   51589 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0927 01:10:52.290266   51589 command_runner.go:130] > #
	I0927 01:10:52.290274   51589 command_runner.go:130] > # Using the seccomp notifier feature:
	I0927 01:10:52.290280   51589 command_runner.go:130] > #
	I0927 01:10:52.290286   51589 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0927 01:10:52.290297   51589 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0927 01:10:52.290306   51589 command_runner.go:130] > #
	I0927 01:10:52.290316   51589 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0927 01:10:52.290328   51589 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0927 01:10:52.290333   51589 command_runner.go:130] > #
	I0927 01:10:52.290346   51589 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0927 01:10:52.290354   51589 command_runner.go:130] > # feature.
	I0927 01:10:52.290359   51589 command_runner.go:130] > #
	I0927 01:10:52.290369   51589 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0927 01:10:52.290379   51589 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0927 01:10:52.290391   51589 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0927 01:10:52.290403   51589 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0927 01:10:52.290416   51589 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0927 01:10:52.290424   51589 command_runner.go:130] > #
	I0927 01:10:52.290438   51589 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0927 01:10:52.290450   51589 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0927 01:10:52.290457   51589 command_runner.go:130] > #
	I0927 01:10:52.290464   51589 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0927 01:10:52.290473   51589 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0927 01:10:52.290479   51589 command_runner.go:130] > #
	I0927 01:10:52.290492   51589 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0927 01:10:52.290502   51589 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0927 01:10:52.290511   51589 command_runner.go:130] > # limitation.
	I0927 01:10:52.290518   51589 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0927 01:10:52.290528   51589 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0927 01:10:52.290536   51589 command_runner.go:130] > runtime_type = "oci"
	I0927 01:10:52.290546   51589 command_runner.go:130] > runtime_root = "/run/runc"
	I0927 01:10:52.290555   51589 command_runner.go:130] > runtime_config_path = ""
	I0927 01:10:52.290562   51589 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0927 01:10:52.290568   51589 command_runner.go:130] > monitor_cgroup = "pod"
	I0927 01:10:52.290576   51589 command_runner.go:130] > monitor_exec_cgroup = ""
	I0927 01:10:52.290583   51589 command_runner.go:130] > monitor_env = [
	I0927 01:10:52.290595   51589 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0927 01:10:52.290600   51589 command_runner.go:130] > ]
	I0927 01:10:52.290611   51589 command_runner.go:130] > privileged_without_host_devices = false
	I0927 01:10:52.290622   51589 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0927 01:10:52.290634   51589 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0927 01:10:52.290646   51589 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0927 01:10:52.290659   51589 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0927 01:10:52.290669   51589 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0927 01:10:52.290677   51589 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0927 01:10:52.290695   51589 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0927 01:10:52.290710   51589 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0927 01:10:52.290721   51589 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0927 01:10:52.290736   51589 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0927 01:10:52.290744   51589 command_runner.go:130] > # Example:
	I0927 01:10:52.290751   51589 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0927 01:10:52.290758   51589 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0927 01:10:52.290766   51589 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0927 01:10:52.290777   51589 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0927 01:10:52.290786   51589 command_runner.go:130] > # cpuset = 0
	I0927 01:10:52.290793   51589 command_runner.go:130] > # cpushares = "0-1"
	I0927 01:10:52.290801   51589 command_runner.go:130] > # Where:
	I0927 01:10:52.290809   51589 command_runner.go:130] > # The workload name is workload-type.
	I0927 01:10:52.290823   51589 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0927 01:10:52.290834   51589 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0927 01:10:52.290842   51589 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0927 01:10:52.290851   51589 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0927 01:10:52.290864   51589 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0927 01:10:52.290875   51589 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0927 01:10:52.290888   51589 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0927 01:10:52.290898   51589 command_runner.go:130] > # Default value is set to true
	I0927 01:10:52.290905   51589 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0927 01:10:52.290917   51589 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0927 01:10:52.290925   51589 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0927 01:10:52.290930   51589 command_runner.go:130] > # Default value is set to 'false'
	I0927 01:10:52.290938   51589 command_runner.go:130] > # disable_hostport_mapping = false
	I0927 01:10:52.290950   51589 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0927 01:10:52.290956   51589 command_runner.go:130] > #
	I0927 01:10:52.290968   51589 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0927 01:10:52.290980   51589 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0927 01:10:52.290990   51589 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0927 01:10:52.291000   51589 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0927 01:10:52.291009   51589 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0927 01:10:52.291014   51589 command_runner.go:130] > [crio.image]
	I0927 01:10:52.291020   51589 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0927 01:10:52.291024   51589 command_runner.go:130] > # default_transport = "docker://"
	I0927 01:10:52.291033   51589 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0927 01:10:52.291042   51589 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0927 01:10:52.291049   51589 command_runner.go:130] > # global_auth_file = ""
	I0927 01:10:52.291059   51589 command_runner.go:130] > # The image used to instantiate infra containers.
	I0927 01:10:52.291067   51589 command_runner.go:130] > # This option supports live configuration reload.
	I0927 01:10:52.291074   51589 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0927 01:10:52.291083   51589 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0927 01:10:52.291091   51589 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0927 01:10:52.291097   51589 command_runner.go:130] > # This option supports live configuration reload.
	I0927 01:10:52.291104   51589 command_runner.go:130] > # pause_image_auth_file = ""
	I0927 01:10:52.291113   51589 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0927 01:10:52.291123   51589 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0927 01:10:52.291136   51589 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0927 01:10:52.291146   51589 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0927 01:10:52.291154   51589 command_runner.go:130] > # pause_command = "/pause"
	I0927 01:10:52.291163   51589 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0927 01:10:52.291173   51589 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0927 01:10:52.291182   51589 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0927 01:10:52.291188   51589 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0927 01:10:52.291195   51589 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0927 01:10:52.291204   51589 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0927 01:10:52.291212   51589 command_runner.go:130] > # pinned_images = [
	I0927 01:10:52.291218   51589 command_runner.go:130] > # ]
	I0927 01:10:52.291228   51589 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0927 01:10:52.291241   51589 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0927 01:10:52.291251   51589 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0927 01:10:52.291265   51589 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0927 01:10:52.291275   51589 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0927 01:10:52.291282   51589 command_runner.go:130] > # signature_policy = ""
	I0927 01:10:52.291294   51589 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0927 01:10:52.291319   51589 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0927 01:10:52.291330   51589 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0927 01:10:52.291341   51589 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0927 01:10:52.291353   51589 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0927 01:10:52.291363   51589 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0927 01:10:52.291376   51589 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0927 01:10:52.291389   51589 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0927 01:10:52.291399   51589 command_runner.go:130] > # changing them here.
	I0927 01:10:52.291408   51589 command_runner.go:130] > # insecure_registries = [
	I0927 01:10:52.291416   51589 command_runner.go:130] > # ]
	I0927 01:10:52.291426   51589 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0927 01:10:52.291436   51589 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0927 01:10:52.291443   51589 command_runner.go:130] > # image_volumes = "mkdir"
	I0927 01:10:52.291453   51589 command_runner.go:130] > # Temporary directory to use for storing big files
	I0927 01:10:52.291466   51589 command_runner.go:130] > # big_files_temporary_dir = ""
	I0927 01:10:52.291475   51589 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0927 01:10:52.291479   51589 command_runner.go:130] > # CNI plugins.
	I0927 01:10:52.291485   51589 command_runner.go:130] > [crio.network]
	I0927 01:10:52.291497   51589 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0927 01:10:52.291511   51589 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0927 01:10:52.291520   51589 command_runner.go:130] > # cni_default_network = ""
	I0927 01:10:52.291529   51589 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0927 01:10:52.291539   51589 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0927 01:10:52.291548   51589 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0927 01:10:52.291556   51589 command_runner.go:130] > # plugin_dirs = [
	I0927 01:10:52.291561   51589 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0927 01:10:52.291566   51589 command_runner.go:130] > # ]
	I0927 01:10:52.291575   51589 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0927 01:10:52.291584   51589 command_runner.go:130] > [crio.metrics]
	I0927 01:10:52.291591   51589 command_runner.go:130] > # Globally enable or disable metrics support.
	I0927 01:10:52.291601   51589 command_runner.go:130] > enable_metrics = true
	I0927 01:10:52.291612   51589 command_runner.go:130] > # Specify enabled metrics collectors.
	I0927 01:10:52.291622   51589 command_runner.go:130] > # Per default all metrics are enabled.
	I0927 01:10:52.291634   51589 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0927 01:10:52.291646   51589 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0927 01:10:52.291655   51589 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0927 01:10:52.291664   51589 command_runner.go:130] > # metrics_collectors = [
	I0927 01:10:52.291673   51589 command_runner.go:130] > # 	"operations",
	I0927 01:10:52.291684   51589 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0927 01:10:52.291694   51589 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0927 01:10:52.291703   51589 command_runner.go:130] > # 	"operations_errors",
	I0927 01:10:52.291712   51589 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0927 01:10:52.291721   51589 command_runner.go:130] > # 	"image_pulls_by_name",
	I0927 01:10:52.291731   51589 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0927 01:10:52.291738   51589 command_runner.go:130] > # 	"image_pulls_failures",
	I0927 01:10:52.291742   51589 command_runner.go:130] > # 	"image_pulls_successes",
	I0927 01:10:52.291752   51589 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0927 01:10:52.291761   51589 command_runner.go:130] > # 	"image_layer_reuse",
	I0927 01:10:52.291769   51589 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0927 01:10:52.291779   51589 command_runner.go:130] > # 	"containers_oom_total",
	I0927 01:10:52.291786   51589 command_runner.go:130] > # 	"containers_oom",
	I0927 01:10:52.291795   51589 command_runner.go:130] > # 	"processes_defunct",
	I0927 01:10:52.291803   51589 command_runner.go:130] > # 	"operations_total",
	I0927 01:10:52.291812   51589 command_runner.go:130] > # 	"operations_latency_seconds",
	I0927 01:10:52.291819   51589 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0927 01:10:52.291827   51589 command_runner.go:130] > # 	"operations_errors_total",
	I0927 01:10:52.291832   51589 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0927 01:10:52.291840   51589 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0927 01:10:52.291847   51589 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0927 01:10:52.291858   51589 command_runner.go:130] > # 	"image_pulls_success_total",
	I0927 01:10:52.291866   51589 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0927 01:10:52.291876   51589 command_runner.go:130] > # 	"containers_oom_count_total",
	I0927 01:10:52.291884   51589 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0927 01:10:52.291895   51589 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0927 01:10:52.291902   51589 command_runner.go:130] > # ]
	I0927 01:10:52.291911   51589 command_runner.go:130] > # The port on which the metrics server will listen.
	I0927 01:10:52.291920   51589 command_runner.go:130] > # metrics_port = 9090
	I0927 01:10:52.291928   51589 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0927 01:10:52.291936   51589 command_runner.go:130] > # metrics_socket = ""
	I0927 01:10:52.291943   51589 command_runner.go:130] > # The certificate for the secure metrics server.
	I0927 01:10:52.291955   51589 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0927 01:10:52.291968   51589 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0927 01:10:52.291978   51589 command_runner.go:130] > # certificate on any modification event.
	I0927 01:10:52.291987   51589 command_runner.go:130] > # metrics_cert = ""
	I0927 01:10:52.291997   51589 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0927 01:10:52.292008   51589 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0927 01:10:52.292017   51589 command_runner.go:130] > # metrics_key = ""
	I0927 01:10:52.292026   51589 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0927 01:10:52.292031   51589 command_runner.go:130] > [crio.tracing]
	I0927 01:10:52.292043   51589 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0927 01:10:52.292053   51589 command_runner.go:130] > # enable_tracing = false
	I0927 01:10:52.292061   51589 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0927 01:10:52.292071   51589 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0927 01:10:52.292085   51589 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0927 01:10:52.292094   51589 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0927 01:10:52.292104   51589 command_runner.go:130] > # CRI-O NRI configuration.
	I0927 01:10:52.292113   51589 command_runner.go:130] > [crio.nri]
	I0927 01:10:52.292120   51589 command_runner.go:130] > # Globally enable or disable NRI.
	I0927 01:10:52.292124   51589 command_runner.go:130] > # enable_nri = false
	I0927 01:10:52.292132   51589 command_runner.go:130] > # NRI socket to listen on.
	I0927 01:10:52.292143   51589 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0927 01:10:52.292153   51589 command_runner.go:130] > # NRI plugin directory to use.
	I0927 01:10:52.292161   51589 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0927 01:10:52.292172   51589 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0927 01:10:52.292183   51589 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0927 01:10:52.292194   51589 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0927 01:10:52.292203   51589 command_runner.go:130] > # nri_disable_connections = false
	I0927 01:10:52.292214   51589 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0927 01:10:52.292223   51589 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0927 01:10:52.292231   51589 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0927 01:10:52.292242   51589 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0927 01:10:52.292256   51589 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0927 01:10:52.292265   51589 command_runner.go:130] > [crio.stats]
	I0927 01:10:52.292278   51589 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0927 01:10:52.292293   51589 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0927 01:10:52.292303   51589 command_runner.go:130] > # stats_collection_period = 0
	I0927 01:10:52.292330   51589 command_runner.go:130] ! time="2024-09-27 01:10:52.249317077Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0927 01:10:52.292350   51589 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
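The dump above ends with CRI-O's [crio.metrics], [crio.tracing], [crio.nri] and [crio.stats] sections, all commented out and therefore left at their defaults (metrics and tracing disabled, on-demand stats). A minimal sketch of enabling the metrics endpoint through a drop-in file, assuming the conventional /etc/crio/crio.conf.d/ directory rather than anything this run actually configures:

    # hypothetical drop-in, not part of this test run
    sudo tee /etc/crio/crio.conf.d/10-metrics.conf >/dev/null <<'EOF'
    [crio.metrics]
    enable_metrics = true
    metrics_port = 9090
    EOF
    sudo systemctl restart crio                      # pick up the new stanza
    curl -s http://127.0.0.1:9090/metrics | head     # a few Prometheus-format lines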
	I0927 01:10:52.292435   51589 cni.go:84] Creating CNI manager for ""
	I0927 01:10:52.292450   51589 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0927 01:10:52.292460   51589 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 01:10:52.292489   51589 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.203 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-833343 NodeName:multinode-833343 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.203"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.203 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0927 01:10:52.292645   51589 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.203
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-833343"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.203
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.203"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
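The rendered kubeadm config above bundles four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that are copied to /var/tmp/minikube/kubeadm.yaml.new a few lines further down. A quick sanity check of a config of this shape, assuming a v1.31 kubeadm binary on the PATH and the file saved locally as kubeadm.yaml (a sketch, not a step this test performs):

    kubeadm config validate --config kubeadm.yaml    # schema/version check of the documents
    kubeadm init --config kubeadm.yaml --dry-run     # print the resolved config and manifests without bringing a cluster up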
	
	I0927 01:10:52.292712   51589 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 01:10:52.302664   51589 command_runner.go:130] > kubeadm
	I0927 01:10:52.302685   51589 command_runner.go:130] > kubectl
	I0927 01:10:52.302693   51589 command_runner.go:130] > kubelet
	I0927 01:10:52.302712   51589 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 01:10:52.302761   51589 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0927 01:10:52.311906   51589 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0927 01:10:52.328741   51589 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 01:10:52.345113   51589 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0927 01:10:52.362036   51589 ssh_runner.go:195] Run: grep 192.168.39.203	control-plane.minikube.internal$ /etc/hosts
	I0927 01:10:52.365903   51589 command_runner.go:130] > 192.168.39.203	control-plane.minikube.internal
	I0927 01:10:52.366127   51589 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:10:52.502117   51589 ssh_runner.go:195] Run: sudo systemctl start kubelet
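After the 10-kubeadm.conf drop-in and kubelet.service unit are copied over, the provisioner reloads systemd and starts kubelet, as the two Run lines above show. Verifying that step by hand on the node would look roughly like this (a sketch; unit and drop-in names as used above):

    sudo systemctl daemon-reload
    sudo systemctl start kubelet
    systemctl is-active kubelet          # expect "active"
    systemctl cat kubelet | head -n 20   # confirm the 10-kubeadm.conf drop-in is included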
	I0927 01:10:52.516551   51589 certs.go:68] Setting up /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/multinode-833343 for IP: 192.168.39.203
	I0927 01:10:52.516574   51589 certs.go:194] generating shared ca certs ...
	I0927 01:10:52.516593   51589 certs.go:226] acquiring lock for ca certs: {Name:mkdfc5b7e93f77f5ae72cc653545624244421aa5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:10:52.516735   51589 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key
	I0927 01:10:52.516787   51589 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key
	I0927 01:10:52.516799   51589 certs.go:256] generating profile certs ...
	I0927 01:10:52.516894   51589 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/multinode-833343/client.key
	I0927 01:10:52.516981   51589 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/multinode-833343/apiserver.key.9a165d03
	I0927 01:10:52.517026   51589 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/multinode-833343/proxy-client.key
	I0927 01:10:52.517042   51589 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0927 01:10:52.517062   51589 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0927 01:10:52.517079   51589 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0927 01:10:52.517096   51589 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0927 01:10:52.517113   51589 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/multinode-833343/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0927 01:10:52.517146   51589 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/multinode-833343/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0927 01:10:52.517164   51589 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/multinode-833343/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0927 01:10:52.517178   51589 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/multinode-833343/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0927 01:10:52.517244   51589 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem (1338 bytes)
	W0927 01:10:52.517288   51589 certs.go:480] ignoring /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138_empty.pem, impossibly tiny 0 bytes
	I0927 01:10:52.517301   51589 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem (1671 bytes)
	I0927 01:10:52.517335   51589 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem (1078 bytes)
	I0927 01:10:52.517367   51589 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem (1123 bytes)
	I0927 01:10:52.517398   51589 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem (1675 bytes)
	I0927 01:10:52.517453   51589 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem (1708 bytes)
	I0927 01:10:52.517490   51589 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> /usr/share/ca-certificates/221382.pem
	I0927 01:10:52.517516   51589 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:10:52.517534   51589 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem -> /usr/share/ca-certificates/22138.pem
	I0927 01:10:52.518144   51589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 01:10:52.543500   51589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0927 01:10:52.569028   51589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 01:10:52.593566   51589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0927 01:10:52.618361   51589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/multinode-833343/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0927 01:10:52.642366   51589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/multinode-833343/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0927 01:10:52.668271   51589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/multinode-833343/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 01:10:52.692436   51589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/multinode-833343/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0927 01:10:52.717822   51589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /usr/share/ca-certificates/221382.pem (1708 bytes)
	I0927 01:10:52.742674   51589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 01:10:52.767220   51589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem --> /usr/share/ca-certificates/22138.pem (1338 bytes)
	I0927 01:10:52.792283   51589 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 01:10:52.808906   51589 ssh_runner.go:195] Run: openssl version
	I0927 01:10:52.814744   51589 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0927 01:10:52.814813   51589 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221382.pem && ln -fs /usr/share/ca-certificates/221382.pem /etc/ssl/certs/221382.pem"
	I0927 01:10:52.825656   51589 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221382.pem
	I0927 01:10:52.830091   51589 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 27 00:32 /usr/share/ca-certificates/221382.pem
	I0927 01:10:52.830186   51589 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 00:32 /usr/share/ca-certificates/221382.pem
	I0927 01:10:52.830236   51589 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221382.pem
	I0927 01:10:52.835608   51589 command_runner.go:130] > 3ec20f2e
	I0927 01:10:52.835830   51589 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221382.pem /etc/ssl/certs/3ec20f2e.0"
	I0927 01:10:52.844943   51589 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 01:10:52.855761   51589 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:10:52.860340   51589 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 27 00:16 /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:10:52.860375   51589 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:16 /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:10:52.860428   51589 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:10:52.866130   51589 command_runner.go:130] > b5213941
	I0927 01:10:52.866259   51589 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 01:10:52.875803   51589 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22138.pem && ln -fs /usr/share/ca-certificates/22138.pem /etc/ssl/certs/22138.pem"
	I0927 01:10:52.886686   51589 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22138.pem
	I0927 01:10:52.891042   51589 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 27 00:32 /usr/share/ca-certificates/22138.pem
	I0927 01:10:52.891135   51589 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 00:32 /usr/share/ca-certificates/22138.pem
	I0927 01:10:52.891169   51589 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22138.pem
	I0927 01:10:52.896630   51589 command_runner.go:130] > 51391683
	I0927 01:10:52.896803   51589 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22138.pem /etc/ssl/certs/51391683.0"
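The repeating pattern above installs each CA bundle the way OpenSSL expects to find it: copy the PEM under /usr/share/ca-certificates, symlink it into /etc/ssl/certs, compute its subject hash with openssl x509 -hash, and point <hash>.0 at it (3ec20f2e, b5213941 and 51391683 in this run). A condensed sketch of the same idea, with the file set generalised (paths here are assumptions, not taken from the run):

    for pem in /usr/share/ca-certificates/*.pem; do
      hash=$(openssl x509 -hash -noout -in "$pem")       # e.g. 3ec20f2e
      sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"      # the name OpenSSL's cert lookup uses
    done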
	I0927 01:10:52.905897   51589 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 01:10:52.910324   51589 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 01:10:52.910347   51589 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0927 01:10:52.910355   51589 command_runner.go:130] > Device: 253,1	Inode: 2101800     Links: 1
	I0927 01:10:52.910367   51589 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0927 01:10:52.910376   51589 command_runner.go:130] > Access: 2024-09-27 01:04:01.612690520 +0000
	I0927 01:10:52.910387   51589 command_runner.go:130] > Modify: 2024-09-27 01:04:01.612690520 +0000
	I0927 01:10:52.910395   51589 command_runner.go:130] > Change: 2024-09-27 01:04:01.612690520 +0000
	I0927 01:10:52.910406   51589 command_runner.go:130] >  Birth: 2024-09-27 01:04:01.612690520 +0000
	I0927 01:10:52.910467   51589 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0927 01:10:52.916133   51589 command_runner.go:130] > Certificate will not expire
	I0927 01:10:52.916200   51589 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0927 01:10:52.921767   51589 command_runner.go:130] > Certificate will not expire
	I0927 01:10:52.921951   51589 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0927 01:10:52.927572   51589 command_runner.go:130] > Certificate will not expire
	I0927 01:10:52.927828   51589 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0927 01:10:52.933455   51589 command_runner.go:130] > Certificate will not expire
	I0927 01:10:52.933661   51589 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0927 01:10:52.939375   51589 command_runner.go:130] > Certificate will not expire
	I0927 01:10:52.939436   51589 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0927 01:10:52.945102   51589 command_runner.go:130] > Certificate will not expire
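Each "Certificate will not expire" line above is openssl's success output for -checkend 86400, which exits non-zero if the certificate would expire within that many seconds (here, 24 hours). The same checks over the control-plane certificates used above can be scripted as follows (a sketch):

    for crt in /var/lib/minikube/certs/{apiserver-etcd-client,apiserver-kubelet-client,front-proxy-client}.crt \
               /var/lib/minikube/certs/etcd/{server,healthcheck-client,peer}.crt; do
      if sudo openssl x509 -noout -in "$crt" -checkend 86400 >/dev/null; then
        echo "OK: $crt valid for at least 24h"
      else
        echo "WARN: $crt expires within 24h"
      fi
    done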
	I0927 01:10:52.945159   51589 kubeadm.go:392] StartCluster: {Name:multinode-833343 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:multinode-833343 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.123 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.88 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:f
alse istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 01:10:52.945269   51589 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0927 01:10:52.945328   51589 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 01:10:52.981948   51589 command_runner.go:130] > 3379d1c82431bb6880da5f7d200fd5033e3cfb0d51aad66dc910808404d154e7
	I0927 01:10:52.981969   51589 command_runner.go:130] > 02c5e4faf57e0e9a5ccc48f45ab304011b04405d198dc5bc85a74269b04fcdc0
	I0927 01:10:52.981975   51589 command_runner.go:130] > 9de6deb0a88fa5b3b6dd6eafc2ab9fb4555f20b3bdd03fcbd26ae4f4a22c9a06
	I0927 01:10:52.981983   51589 command_runner.go:130] > 51a77d274b9ce56df8fc9514cbf7cb259a438f500da3503cfdc2d9764caa2abe
	I0927 01:10:52.981990   51589 command_runner.go:130] > e8d19f9308bbcc24c8affd654a48314a6d36cf341176d734d49d3c07f2765ebf
	I0927 01:10:52.981995   51589 command_runner.go:130] > 15018f9c92547a079d4127cb3d77d4cbdd1c8ab51fb731b1db97fed907c807c8
	I0927 01:10:52.982000   51589 command_runner.go:130] > a9182a23994890788fd815a5d96b4084212911e8020da7c535bf8659a4c9343e
	I0927 01:10:52.982007   51589 command_runner.go:130] > 0a3e4bfb234ad9036b6eb4888da6fae5cc31b141799963a1ad4d1ca4982e70d1
	I0927 01:10:52.983491   51589 cri.go:89] found id: "3379d1c82431bb6880da5f7d200fd5033e3cfb0d51aad66dc910808404d154e7"
	I0927 01:10:52.983506   51589 cri.go:89] found id: "02c5e4faf57e0e9a5ccc48f45ab304011b04405d198dc5bc85a74269b04fcdc0"
	I0927 01:10:52.983510   51589 cri.go:89] found id: "9de6deb0a88fa5b3b6dd6eafc2ab9fb4555f20b3bdd03fcbd26ae4f4a22c9a06"
	I0927 01:10:52.983513   51589 cri.go:89] found id: "51a77d274b9ce56df8fc9514cbf7cb259a438f500da3503cfdc2d9764caa2abe"
	I0927 01:10:52.983516   51589 cri.go:89] found id: "e8d19f9308bbcc24c8affd654a48314a6d36cf341176d734d49d3c07f2765ebf"
	I0927 01:10:52.983519   51589 cri.go:89] found id: "15018f9c92547a079d4127cb3d77d4cbdd1c8ab51fb731b1db97fed907c807c8"
	I0927 01:10:52.983522   51589 cri.go:89] found id: "a9182a23994890788fd815a5d96b4084212911e8020da7c535bf8659a4c9343e"
	I0927 01:10:52.983524   51589 cri.go:89] found id: "0a3e4bfb234ad9036b6eb4888da6fae5cc31b141799963a1ad4d1ca4982e70d1"
	I0927 01:10:52.983527   51589 cri.go:89] found id: ""
	I0927 01:10:52.983564   51589 ssh_runner.go:195] Run: sudo runc list -f json
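The container IDs listed above come from asking CRI-O for every kube-system container, running or exited, and the runc listing that follows cross-checks them against the low-level runtime's view. Re-running the same queries by hand (jq is an assumption here, used only to pull the id field):

    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    sudo runc list -f json | jq -r '.[].id'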
	
	
	==> CRI-O <==
	Sep 27 01:12:36 multinode-833343 crio[2712]: time="2024-09-27 01:12:36.807653595Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727399556807633220,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c1117aa2-61fa-4cbd-8260-ff6d5f644cd4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 01:12:36 multinode-833343 crio[2712]: time="2024-09-27 01:12:36.808127025Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fb7ee5a4-64fb-4fd3-a523-6899920cf0d0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:12:36 multinode-833343 crio[2712]: time="2024-09-27 01:12:36.808206955Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fb7ee5a4-64fb-4fd3-a523-6899920cf0d0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:12:36 multinode-833343 crio[2712]: time="2024-09-27 01:12:36.808710153Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:33b19bbc348c651a15977ac698195dcfd69096687843e4d5b8273a5279639f7f,PodSandboxId:745e3599198c2331e834fbb32a2305c0cbbede8f443bbc1c9fc60b7e74d32a15,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727399493536289086,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-cv7gx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 223ec194-be67-4fa7-8e79-b95dde6445d6,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dcd36b671b63d1ecfc7cb56fd9e7c9d36f92403f64729bdc762dff2d25501e1,PodSandboxId:11b08d025386ef2257bdf36d6d621817ab3bc0c25bfc28c03442e5ae0efce54c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727399460120549553,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qjx9d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2461ab02-e830-4e85-8541-651c97525d07,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9d4cfadfab2b913bab0d55b8c808e9b6ca83e86da87ab69fea8903986beb4c5,PodSandboxId:fcb963144f5921c3632172f66fdad7d3022c2ed6dba06e15b6313db1572dacb4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727399460030293837,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fxjdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc4fa771-d252-4cab-8206-4010e499b130,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d02d26889f335cc07145daf774288e41017072373a2ad2e1799e901df2c82fcf,PodSandboxId:13bf0d3baebf284ab4b93605349470c86fba288f521ecd9883c40f487d6f7b33,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727399459869060310,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2eaefd3-2123-42a2-ad32-13c6a93282cc,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a11073b6bcce8287092dff3277aa628398dc7c379a9fe1009f7d7896aa33dc6,PodSandboxId:6a1e90b4cf8b9c3f9eb60a331b1f14d7026cef1bff75bf9443781b2ddd99bee7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727399459791291283,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5kxx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 547aba0f-3d4d-4cf6-91d8-0c929d89d590,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e03bbbc7bc9d0eca3cd9c95295ea0e21c133323a0285390ad17665c61c0997a3,PodSandboxId:503fa1d74e0911b18f909e527708d2ebca4eb6cbd437b5a9456e3d5701dd9cea,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727399455046471842,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 870164161d7a14e3a59d5796b0f3f3db,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:972e11adbd7e1ca185941afc2f00d0ae997a0871ffcf9d5d5136782549973278,PodSandboxId:b531bae36a1bf8578a8bc5d7206ca96587b484af6b938aaace3e1d42a2347547,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727399455015600489,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d098f70decb9e39093456e1084cfef79,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d
79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de2f589dec797e5330936e8eb4a6bc9564a1554caebee16fd82ea27360e177e7,PodSandboxId:4d647e0feb8091518237b7fbe48b78a21b0cd13a5b1a7cf1069f1b7b60b3db0f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727399454993265849,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d0474e4d2378d81a79219e607059f81,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:343fd95487e49d0179ba9887b4859a8f1c0b02d052e7c35f6871683bced37038,PodSandboxId:f94e47bc342395a3efff38cdee5be15737927cb3ffa2d53bfbeb869e4f9570c0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727399454900199216,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ade46804942c83724776b400c5c92f0,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b77f554f46627203d94de73d5fdd23e95e65d9575aa1bd5519baff9e6ce63163,PodSandboxId:9cf826370b3a62bfb1f2e720365061e000e6656ef7f80ea3d1161d74b3122f08,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727399129173910196,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-cv7gx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 223ec194-be67-4fa7-8e79-b95dde6445d6,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3379d1c82431bb6880da5f7d200fd5033e3cfb0d51aad66dc910808404d154e7,PodSandboxId:933c6a5d0e0388d49e25f3cf62abf4ef43afa20d2285fb62668d93107c6f6faa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727399069733351203,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fxjdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc4fa771-d252-4cab-8206-4010e499b130,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02c5e4faf57e0e9a5ccc48f45ab304011b04405d198dc5bc85a74269b04fcdc0,PodSandboxId:308a304457bf517293ceab47f4bd7001ee4f43b6d659149425018a1b18310ec4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727399069660452995,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: a2eaefd3-2123-42a2-ad32-13c6a93282cc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9de6deb0a88fa5b3b6dd6eafc2ab9fb4555f20b3bdd03fcbd26ae4f4a22c9a06,PodSandboxId:85632f08d6b3c9b74bb27d1fc77410097bd5b34f4b1fee03ba4b4c3f91c8470d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727399057687956401,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qjx9d,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 2461ab02-e830-4e85-8541-651c97525d07,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51a77d274b9ce56df8fc9514cbf7cb259a438f500da3503cfdc2d9764caa2abe,PodSandboxId:26219f2ebd932a110874989db718a664ea2ca1866409268e37559be013de7263,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727399057312381457,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5kxx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 547aba0f-3d4d-4cf6-91d8
-0c929d89d590,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15018f9c92547a079d4127cb3d77d4cbdd1c8ab51fb731b1db97fed907c807c8,PodSandboxId:3fea8ffb6c67f72050256db27271f91a7e3b52acbd079953c21bc1a466c73b32,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727399046126300072,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ade46804942c83724776b400c5c92f0,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8d19f9308bbcc24c8affd654a48314a6d36cf341176d734d49d3c07f2765ebf,PodSandboxId:4edf991954d2ab25760e2e4de26cd96ec31b943311304a7809681f2e843a0e5f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727399046146062363,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 870164161d7a14e3a59d5796b0f3f3db,},Annotations:map[string]string{io.kubernetes.
container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9182a23994890788fd815a5d96b4084212911e8020da7c535bf8659a4c9343e,PodSandboxId:68566156da1f6ed4467890f036b59837188299cc35020976640e2ae229a6aa72,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727399046059128947,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d0474e4d2378d81a79219e607059f81,},Annotations:map[string]string{io.kubernetes.container.hash:
7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a3e4bfb234ad9036b6eb4888da6fae5cc31b141799963a1ad4d1ca4982e70d1,PodSandboxId:aa8bd46889272035697cb27cebe4c5613b2321e8f1ea5a12167c9b73cab70d34,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727399046013425347,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d098f70decb9e39093456e1084cfef79,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fb7ee5a4-64fb-4fd3-a523-6899920cf0d0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:12:36 multinode-833343 crio[2712]: time="2024-09-27 01:12:36.851328648Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9eda9405-5db5-4564-820b-e2a10f694f77 name=/runtime.v1.RuntimeService/Version
	Sep 27 01:12:36 multinode-833343 crio[2712]: time="2024-09-27 01:12:36.851420483Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9eda9405-5db5-4564-820b-e2a10f694f77 name=/runtime.v1.RuntimeService/Version
	Sep 27 01:12:36 multinode-833343 crio[2712]: time="2024-09-27 01:12:36.852304055Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c003f290-eb69-4753-bce5-6b05b241ec5e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 01:12:36 multinode-833343 crio[2712]: time="2024-09-27 01:12:36.852681405Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727399556852661651,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c003f290-eb69-4753-bce5-6b05b241ec5e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 01:12:36 multinode-833343 crio[2712]: time="2024-09-27 01:12:36.853227229Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b967edde-b202-4965-a0d6-3f562f9cec6f name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:12:36 multinode-833343 crio[2712]: time="2024-09-27 01:12:36.853295303Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b967edde-b202-4965-a0d6-3f562f9cec6f name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:12:36 multinode-833343 crio[2712]: time="2024-09-27 01:12:36.853658837Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:33b19bbc348c651a15977ac698195dcfd69096687843e4d5b8273a5279639f7f,PodSandboxId:745e3599198c2331e834fbb32a2305c0cbbede8f443bbc1c9fc60b7e74d32a15,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727399493536289086,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-cv7gx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 223ec194-be67-4fa7-8e79-b95dde6445d6,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dcd36b671b63d1ecfc7cb56fd9e7c9d36f92403f64729bdc762dff2d25501e1,PodSandboxId:11b08d025386ef2257bdf36d6d621817ab3bc0c25bfc28c03442e5ae0efce54c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727399460120549553,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qjx9d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2461ab02-e830-4e85-8541-651c97525d07,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9d4cfadfab2b913bab0d55b8c808e9b6ca83e86da87ab69fea8903986beb4c5,PodSandboxId:fcb963144f5921c3632172f66fdad7d3022c2ed6dba06e15b6313db1572dacb4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727399460030293837,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fxjdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc4fa771-d252-4cab-8206-4010e499b130,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d02d26889f335cc07145daf774288e41017072373a2ad2e1799e901df2c82fcf,PodSandboxId:13bf0d3baebf284ab4b93605349470c86fba288f521ecd9883c40f487d6f7b33,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727399459869060310,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2eaefd3-2123-42a2-ad32-13c6a93282cc,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a11073b6bcce8287092dff3277aa628398dc7c379a9fe1009f7d7896aa33dc6,PodSandboxId:6a1e90b4cf8b9c3f9eb60a331b1f14d7026cef1bff75bf9443781b2ddd99bee7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727399459791291283,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5kxx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 547aba0f-3d4d-4cf6-91d8-0c929d89d590,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e03bbbc7bc9d0eca3cd9c95295ea0e21c133323a0285390ad17665c61c0997a3,PodSandboxId:503fa1d74e0911b18f909e527708d2ebca4eb6cbd437b5a9456e3d5701dd9cea,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727399455046471842,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 870164161d7a14e3a59d5796b0f3f3db,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:972e11adbd7e1ca185941afc2f00d0ae997a0871ffcf9d5d5136782549973278,PodSandboxId:b531bae36a1bf8578a8bc5d7206ca96587b484af6b938aaace3e1d42a2347547,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727399455015600489,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d098f70decb9e39093456e1084cfef79,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d
79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de2f589dec797e5330936e8eb4a6bc9564a1554caebee16fd82ea27360e177e7,PodSandboxId:4d647e0feb8091518237b7fbe48b78a21b0cd13a5b1a7cf1069f1b7b60b3db0f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727399454993265849,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d0474e4d2378d81a79219e607059f81,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:343fd95487e49d0179ba9887b4859a8f1c0b02d052e7c35f6871683bced37038,PodSandboxId:f94e47bc342395a3efff38cdee5be15737927cb3ffa2d53bfbeb869e4f9570c0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727399454900199216,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ade46804942c83724776b400c5c92f0,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b77f554f46627203d94de73d5fdd23e95e65d9575aa1bd5519baff9e6ce63163,PodSandboxId:9cf826370b3a62bfb1f2e720365061e000e6656ef7f80ea3d1161d74b3122f08,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727399129173910196,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-cv7gx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 223ec194-be67-4fa7-8e79-b95dde6445d6,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3379d1c82431bb6880da5f7d200fd5033e3cfb0d51aad66dc910808404d154e7,PodSandboxId:933c6a5d0e0388d49e25f3cf62abf4ef43afa20d2285fb62668d93107c6f6faa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727399069733351203,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fxjdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc4fa771-d252-4cab-8206-4010e499b130,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02c5e4faf57e0e9a5ccc48f45ab304011b04405d198dc5bc85a74269b04fcdc0,PodSandboxId:308a304457bf517293ceab47f4bd7001ee4f43b6d659149425018a1b18310ec4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727399069660452995,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: a2eaefd3-2123-42a2-ad32-13c6a93282cc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9de6deb0a88fa5b3b6dd6eafc2ab9fb4555f20b3bdd03fcbd26ae4f4a22c9a06,PodSandboxId:85632f08d6b3c9b74bb27d1fc77410097bd5b34f4b1fee03ba4b4c3f91c8470d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727399057687956401,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qjx9d,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 2461ab02-e830-4e85-8541-651c97525d07,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51a77d274b9ce56df8fc9514cbf7cb259a438f500da3503cfdc2d9764caa2abe,PodSandboxId:26219f2ebd932a110874989db718a664ea2ca1866409268e37559be013de7263,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727399057312381457,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5kxx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 547aba0f-3d4d-4cf6-91d8
-0c929d89d590,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15018f9c92547a079d4127cb3d77d4cbdd1c8ab51fb731b1db97fed907c807c8,PodSandboxId:3fea8ffb6c67f72050256db27271f91a7e3b52acbd079953c21bc1a466c73b32,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727399046126300072,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ade46804942c83724776b400c5c92f0,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8d19f9308bbcc24c8affd654a48314a6d36cf341176d734d49d3c07f2765ebf,PodSandboxId:4edf991954d2ab25760e2e4de26cd96ec31b943311304a7809681f2e843a0e5f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727399046146062363,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 870164161d7a14e3a59d5796b0f3f3db,},Annotations:map[string]string{io.kubernetes.
container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9182a23994890788fd815a5d96b4084212911e8020da7c535bf8659a4c9343e,PodSandboxId:68566156da1f6ed4467890f036b59837188299cc35020976640e2ae229a6aa72,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727399046059128947,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d0474e4d2378d81a79219e607059f81,},Annotations:map[string]string{io.kubernetes.container.hash:
7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a3e4bfb234ad9036b6eb4888da6fae5cc31b141799963a1ad4d1ca4982e70d1,PodSandboxId:aa8bd46889272035697cb27cebe4c5613b2321e8f1ea5a12167c9b73cab70d34,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727399046013425347,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d098f70decb9e39093456e1084cfef79,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b967edde-b202-4965-a0d6-3f562f9cec6f name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:12:36 multinode-833343 crio[2712]: time="2024-09-27 01:12:36.895107533Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=67eb9c42-d9d8-4954-b0d2-b4f534d8d252 name=/runtime.v1.RuntimeService/Version
	Sep 27 01:12:36 multinode-833343 crio[2712]: time="2024-09-27 01:12:36.895198008Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=67eb9c42-d9d8-4954-b0d2-b4f534d8d252 name=/runtime.v1.RuntimeService/Version
	Sep 27 01:12:36 multinode-833343 crio[2712]: time="2024-09-27 01:12:36.896405516Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8cf7eeba-0147-4e56-98b4-7bb42ec391f3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 01:12:36 multinode-833343 crio[2712]: time="2024-09-27 01:12:36.896858717Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727399556896837358,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8cf7eeba-0147-4e56-98b4-7bb42ec391f3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 01:12:36 multinode-833343 crio[2712]: time="2024-09-27 01:12:36.897589826Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=514764bd-2b58-418d-b3c4-9edb4c45df95 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:12:36 multinode-833343 crio[2712]: time="2024-09-27 01:12:36.897656903Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=514764bd-2b58-418d-b3c4-9edb4c45df95 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:12:36 multinode-833343 crio[2712]: time="2024-09-27 01:12:36.898029089Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:33b19bbc348c651a15977ac698195dcfd69096687843e4d5b8273a5279639f7f,PodSandboxId:745e3599198c2331e834fbb32a2305c0cbbede8f443bbc1c9fc60b7e74d32a15,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727399493536289086,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-cv7gx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 223ec194-be67-4fa7-8e79-b95dde6445d6,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dcd36b671b63d1ecfc7cb56fd9e7c9d36f92403f64729bdc762dff2d25501e1,PodSandboxId:11b08d025386ef2257bdf36d6d621817ab3bc0c25bfc28c03442e5ae0efce54c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727399460120549553,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qjx9d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2461ab02-e830-4e85-8541-651c97525d07,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9d4cfadfab2b913bab0d55b8c808e9b6ca83e86da87ab69fea8903986beb4c5,PodSandboxId:fcb963144f5921c3632172f66fdad7d3022c2ed6dba06e15b6313db1572dacb4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727399460030293837,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fxjdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc4fa771-d252-4cab-8206-4010e499b130,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d02d26889f335cc07145daf774288e41017072373a2ad2e1799e901df2c82fcf,PodSandboxId:13bf0d3baebf284ab4b93605349470c86fba288f521ecd9883c40f487d6f7b33,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727399459869060310,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2eaefd3-2123-42a2-ad32-13c6a93282cc,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a11073b6bcce8287092dff3277aa628398dc7c379a9fe1009f7d7896aa33dc6,PodSandboxId:6a1e90b4cf8b9c3f9eb60a331b1f14d7026cef1bff75bf9443781b2ddd99bee7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727399459791291283,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5kxx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 547aba0f-3d4d-4cf6-91d8-0c929d89d590,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e03bbbc7bc9d0eca3cd9c95295ea0e21c133323a0285390ad17665c61c0997a3,PodSandboxId:503fa1d74e0911b18f909e527708d2ebca4eb6cbd437b5a9456e3d5701dd9cea,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727399455046471842,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 870164161d7a14e3a59d5796b0f3f3db,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:972e11adbd7e1ca185941afc2f00d0ae997a0871ffcf9d5d5136782549973278,PodSandboxId:b531bae36a1bf8578a8bc5d7206ca96587b484af6b938aaace3e1d42a2347547,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727399455015600489,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d098f70decb9e39093456e1084cfef79,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d
79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de2f589dec797e5330936e8eb4a6bc9564a1554caebee16fd82ea27360e177e7,PodSandboxId:4d647e0feb8091518237b7fbe48b78a21b0cd13a5b1a7cf1069f1b7b60b3db0f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727399454993265849,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d0474e4d2378d81a79219e607059f81,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:343fd95487e49d0179ba9887b4859a8f1c0b02d052e7c35f6871683bced37038,PodSandboxId:f94e47bc342395a3efff38cdee5be15737927cb3ffa2d53bfbeb869e4f9570c0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727399454900199216,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ade46804942c83724776b400c5c92f0,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b77f554f46627203d94de73d5fdd23e95e65d9575aa1bd5519baff9e6ce63163,PodSandboxId:9cf826370b3a62bfb1f2e720365061e000e6656ef7f80ea3d1161d74b3122f08,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727399129173910196,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-cv7gx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 223ec194-be67-4fa7-8e79-b95dde6445d6,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3379d1c82431bb6880da5f7d200fd5033e3cfb0d51aad66dc910808404d154e7,PodSandboxId:933c6a5d0e0388d49e25f3cf62abf4ef43afa20d2285fb62668d93107c6f6faa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727399069733351203,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fxjdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc4fa771-d252-4cab-8206-4010e499b130,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02c5e4faf57e0e9a5ccc48f45ab304011b04405d198dc5bc85a74269b04fcdc0,PodSandboxId:308a304457bf517293ceab47f4bd7001ee4f43b6d659149425018a1b18310ec4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727399069660452995,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: a2eaefd3-2123-42a2-ad32-13c6a93282cc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9de6deb0a88fa5b3b6dd6eafc2ab9fb4555f20b3bdd03fcbd26ae4f4a22c9a06,PodSandboxId:85632f08d6b3c9b74bb27d1fc77410097bd5b34f4b1fee03ba4b4c3f91c8470d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727399057687956401,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qjx9d,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 2461ab02-e830-4e85-8541-651c97525d07,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51a77d274b9ce56df8fc9514cbf7cb259a438f500da3503cfdc2d9764caa2abe,PodSandboxId:26219f2ebd932a110874989db718a664ea2ca1866409268e37559be013de7263,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727399057312381457,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5kxx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 547aba0f-3d4d-4cf6-91d8
-0c929d89d590,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15018f9c92547a079d4127cb3d77d4cbdd1c8ab51fb731b1db97fed907c807c8,PodSandboxId:3fea8ffb6c67f72050256db27271f91a7e3b52acbd079953c21bc1a466c73b32,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727399046126300072,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ade46804942c83724776b400c5c92f0,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8d19f9308bbcc24c8affd654a48314a6d36cf341176d734d49d3c07f2765ebf,PodSandboxId:4edf991954d2ab25760e2e4de26cd96ec31b943311304a7809681f2e843a0e5f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727399046146062363,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 870164161d7a14e3a59d5796b0f3f3db,},Annotations:map[string]string{io.kubernetes.
container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9182a23994890788fd815a5d96b4084212911e8020da7c535bf8659a4c9343e,PodSandboxId:68566156da1f6ed4467890f036b59837188299cc35020976640e2ae229a6aa72,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727399046059128947,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d0474e4d2378d81a79219e607059f81,},Annotations:map[string]string{io.kubernetes.container.hash:
7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a3e4bfb234ad9036b6eb4888da6fae5cc31b141799963a1ad4d1ca4982e70d1,PodSandboxId:aa8bd46889272035697cb27cebe4c5613b2321e8f1ea5a12167c9b73cab70d34,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727399046013425347,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d098f70decb9e39093456e1084cfef79,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=514764bd-2b58-418d-b3c4-9edb4c45df95 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:12:36 multinode-833343 crio[2712]: time="2024-09-27 01:12:36.940331843Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=35ab36a2-6a79-4756-a3d4-0b8aa1630d14 name=/runtime.v1.RuntimeService/Version
	Sep 27 01:12:36 multinode-833343 crio[2712]: time="2024-09-27 01:12:36.940423647Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=35ab36a2-6a79-4756-a3d4-0b8aa1630d14 name=/runtime.v1.RuntimeService/Version
	Sep 27 01:12:36 multinode-833343 crio[2712]: time="2024-09-27 01:12:36.941901726Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cbfaed7b-50aa-4b80-a3e3-ef2bfe0f7807 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 01:12:36 multinode-833343 crio[2712]: time="2024-09-27 01:12:36.942339868Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727399556942313972,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cbfaed7b-50aa-4b80-a3e3-ef2bfe0f7807 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 01:12:36 multinode-833343 crio[2712]: time="2024-09-27 01:12:36.942934933Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dfa1a22a-7a27-434b-9c75-e2bfca4f9a55 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:12:36 multinode-833343 crio[2712]: time="2024-09-27 01:12:36.943007752Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dfa1a22a-7a27-434b-9c75-e2bfca4f9a55 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:12:36 multinode-833343 crio[2712]: time="2024-09-27 01:12:36.943334184Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:33b19bbc348c651a15977ac698195dcfd69096687843e4d5b8273a5279639f7f,PodSandboxId:745e3599198c2331e834fbb32a2305c0cbbede8f443bbc1c9fc60b7e74d32a15,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727399493536289086,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-cv7gx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 223ec194-be67-4fa7-8e79-b95dde6445d6,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dcd36b671b63d1ecfc7cb56fd9e7c9d36f92403f64729bdc762dff2d25501e1,PodSandboxId:11b08d025386ef2257bdf36d6d621817ab3bc0c25bfc28c03442e5ae0efce54c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727399460120549553,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qjx9d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2461ab02-e830-4e85-8541-651c97525d07,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9d4cfadfab2b913bab0d55b8c808e9b6ca83e86da87ab69fea8903986beb4c5,PodSandboxId:fcb963144f5921c3632172f66fdad7d3022c2ed6dba06e15b6313db1572dacb4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727399460030293837,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fxjdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc4fa771-d252-4cab-8206-4010e499b130,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d02d26889f335cc07145daf774288e41017072373a2ad2e1799e901df2c82fcf,PodSandboxId:13bf0d3baebf284ab4b93605349470c86fba288f521ecd9883c40f487d6f7b33,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727399459869060310,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2eaefd3-2123-42a2-ad32-13c6a93282cc,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a11073b6bcce8287092dff3277aa628398dc7c379a9fe1009f7d7896aa33dc6,PodSandboxId:6a1e90b4cf8b9c3f9eb60a331b1f14d7026cef1bff75bf9443781b2ddd99bee7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727399459791291283,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5kxx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 547aba0f-3d4d-4cf6-91d8-0c929d89d590,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e03bbbc7bc9d0eca3cd9c95295ea0e21c133323a0285390ad17665c61c0997a3,PodSandboxId:503fa1d74e0911b18f909e527708d2ebca4eb6cbd437b5a9456e3d5701dd9cea,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727399455046471842,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 870164161d7a14e3a59d5796b0f3f3db,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:972e11adbd7e1ca185941afc2f00d0ae997a0871ffcf9d5d5136782549973278,PodSandboxId:b531bae36a1bf8578a8bc5d7206ca96587b484af6b938aaace3e1d42a2347547,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727399455015600489,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d098f70decb9e39093456e1084cfef79,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d
79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de2f589dec797e5330936e8eb4a6bc9564a1554caebee16fd82ea27360e177e7,PodSandboxId:4d647e0feb8091518237b7fbe48b78a21b0cd13a5b1a7cf1069f1b7b60b3db0f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727399454993265849,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d0474e4d2378d81a79219e607059f81,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:343fd95487e49d0179ba9887b4859a8f1c0b02d052e7c35f6871683bced37038,PodSandboxId:f94e47bc342395a3efff38cdee5be15737927cb3ffa2d53bfbeb869e4f9570c0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727399454900199216,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ade46804942c83724776b400c5c92f0,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b77f554f46627203d94de73d5fdd23e95e65d9575aa1bd5519baff9e6ce63163,PodSandboxId:9cf826370b3a62bfb1f2e720365061e000e6656ef7f80ea3d1161d74b3122f08,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727399129173910196,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-cv7gx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 223ec194-be67-4fa7-8e79-b95dde6445d6,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3379d1c82431bb6880da5f7d200fd5033e3cfb0d51aad66dc910808404d154e7,PodSandboxId:933c6a5d0e0388d49e25f3cf62abf4ef43afa20d2285fb62668d93107c6f6faa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727399069733351203,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fxjdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc4fa771-d252-4cab-8206-4010e499b130,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02c5e4faf57e0e9a5ccc48f45ab304011b04405d198dc5bc85a74269b04fcdc0,PodSandboxId:308a304457bf517293ceab47f4bd7001ee4f43b6d659149425018a1b18310ec4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727399069660452995,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: a2eaefd3-2123-42a2-ad32-13c6a93282cc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9de6deb0a88fa5b3b6dd6eafc2ab9fb4555f20b3bdd03fcbd26ae4f4a22c9a06,PodSandboxId:85632f08d6b3c9b74bb27d1fc77410097bd5b34f4b1fee03ba4b4c3f91c8470d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727399057687956401,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qjx9d,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 2461ab02-e830-4e85-8541-651c97525d07,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51a77d274b9ce56df8fc9514cbf7cb259a438f500da3503cfdc2d9764caa2abe,PodSandboxId:26219f2ebd932a110874989db718a664ea2ca1866409268e37559be013de7263,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727399057312381457,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5kxx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 547aba0f-3d4d-4cf6-91d8
-0c929d89d590,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15018f9c92547a079d4127cb3d77d4cbdd1c8ab51fb731b1db97fed907c807c8,PodSandboxId:3fea8ffb6c67f72050256db27271f91a7e3b52acbd079953c21bc1a466c73b32,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727399046126300072,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ade46804942c83724776b400c5c92f0,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8d19f9308bbcc24c8affd654a48314a6d36cf341176d734d49d3c07f2765ebf,PodSandboxId:4edf991954d2ab25760e2e4de26cd96ec31b943311304a7809681f2e843a0e5f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727399046146062363,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 870164161d7a14e3a59d5796b0f3f3db,},Annotations:map[string]string{io.kubernetes.
container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9182a23994890788fd815a5d96b4084212911e8020da7c535bf8659a4c9343e,PodSandboxId:68566156da1f6ed4467890f036b59837188299cc35020976640e2ae229a6aa72,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727399046059128947,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d0474e4d2378d81a79219e607059f81,},Annotations:map[string]string{io.kubernetes.container.hash:
7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a3e4bfb234ad9036b6eb4888da6fae5cc31b141799963a1ad4d1ca4982e70d1,PodSandboxId:aa8bd46889272035697cb27cebe4c5613b2321e8f1ea5a12167c9b73cab70d34,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727399046013425347,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d098f70decb9e39093456e1084cfef79,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dfa1a22a-7a27-434b-9c75-e2bfca4f9a55 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	33b19bbc348c6       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   745e3599198c2       busybox-7dff88458-cv7gx
	1dcd36b671b63       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      About a minute ago   Running             kindnet-cni               1                   11b08d025386e       kindnet-qjx9d
	b9d4cfadfab2b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      About a minute ago   Running             coredns                   1                   fcb963144f592       coredns-7c65d6cfc9-fxjdg
	d02d26889f335       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   13bf0d3baebf2       storage-provisioner
	9a11073b6bcce       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      About a minute ago   Running             kube-proxy                1                   6a1e90b4cf8b9       kube-proxy-5kxx5
	e03bbbc7bc9d0       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      About a minute ago   Running             etcd                      1                   503fa1d74e091       etcd-multinode-833343
	972e11adbd7e1       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      About a minute ago   Running             kube-controller-manager   1                   b531bae36a1bf       kube-controller-manager-multinode-833343
	de2f589dec797       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      About a minute ago   Running             kube-apiserver            1                   4d647e0feb809       kube-apiserver-multinode-833343
	343fd95487e49       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      About a minute ago   Running             kube-scheduler            1                   f94e47bc34239       kube-scheduler-multinode-833343
	b77f554f46627       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   9cf826370b3a6       busybox-7dff88458-cv7gx
	3379d1c82431b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      8 minutes ago        Exited              coredns                   0                   933c6a5d0e038       coredns-7c65d6cfc9-fxjdg
	02c5e4faf57e0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago        Exited              storage-provisioner       0                   308a304457bf5       storage-provisioner
	9de6deb0a88fa       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      8 minutes ago        Exited              kindnet-cni               0                   85632f08d6b3c       kindnet-qjx9d
	51a77d274b9ce       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      8 minutes ago        Exited              kube-proxy                0                   26219f2ebd932       kube-proxy-5kxx5
	e8d19f9308bbc       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      8 minutes ago        Exited              etcd                      0                   4edf991954d2a       etcd-multinode-833343
	15018f9c92547       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      8 minutes ago        Exited              kube-scheduler            0                   3fea8ffb6c67f       kube-scheduler-multinode-833343
	a9182a2399489       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      8 minutes ago        Exited              kube-apiserver            0                   68566156da1f6       kube-apiserver-multinode-833343
	0a3e4bfb234ad       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      8 minutes ago        Exited              kube-controller-manager   0                   aa8bd46889272       kube-controller-manager-multinode-833343
	
	
	==> coredns [3379d1c82431bb6880da5f7d200fd5033e3cfb0d51aad66dc910808404d154e7] <==
	[INFO] 10.244.0.3:44200 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001609341s
	[INFO] 10.244.0.3:46637 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000795s
	[INFO] 10.244.0.3:56671 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000057136s
	[INFO] 10.244.0.3:58596 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.000941758s
	[INFO] 10.244.0.3:46449 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000072682s
	[INFO] 10.244.0.3:33359 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000062244s
	[INFO] 10.244.0.3:51304 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000056318s
	[INFO] 10.244.1.2:41603 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000185221s
	[INFO] 10.244.1.2:48232 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010224s
	[INFO] 10.244.1.2:51298 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089399s
	[INFO] 10.244.1.2:46911 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000903s
	[INFO] 10.244.0.3:57075 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000096s
	[INFO] 10.244.0.3:41622 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000078699s
	[INFO] 10.244.0.3:44116 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000052234s
	[INFO] 10.244.0.3:41626 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000041631s
	[INFO] 10.244.1.2:39521 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144383s
	[INFO] 10.244.1.2:42275 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000193497s
	[INFO] 10.244.1.2:50197 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00012607s
	[INFO] 10.244.1.2:54228 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000095643s
	[INFO] 10.244.0.3:53946 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127789s
	[INFO] 10.244.0.3:33056 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000109054s
	[INFO] 10.244.0.3:38337 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000115039s
	[INFO] 10.244.0.3:41413 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000092911s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b9d4cfadfab2b913bab0d55b8c808e9b6ca83e86da87ab69fea8903986beb4c5] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:46722 - 34863 "HINFO IN 4331348365642039683.9201062156548862189. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.026118013s
	
	
	==> describe nodes <==
	Name:               multinode-833343
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-833343
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=multinode-833343
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_27T01_04_12_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 01:04:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-833343
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 01:12:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 01:10:58 +0000   Fri, 27 Sep 2024 01:04:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 01:10:58 +0000   Fri, 27 Sep 2024 01:04:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 01:10:58 +0000   Fri, 27 Sep 2024 01:04:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 01:10:58 +0000   Fri, 27 Sep 2024 01:04:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.203
	  Hostname:    multinode-833343
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 aac41b37ee244db2a333991a2a9f4ee1
	  System UUID:                aac41b37-ee24-4db2-a333-991a2a9f4ee1
	  Boot ID:                    6577b725-ffc5-4292-b709-1f44478ec6e0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-cv7gx                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m12s
	  kube-system                 coredns-7c65d6cfc9-fxjdg                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m21s
	  kube-system                 etcd-multinode-833343                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m26s
	  kube-system                 kindnet-qjx9d                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m21s
	  kube-system                 kube-apiserver-multinode-833343             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m26s
	  kube-system                 kube-controller-manager-multinode-833343    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m26s
	  kube-system                 kube-proxy-5kxx5                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m21s
	  kube-system                 kube-scheduler-multinode-833343             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m26s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 8m19s                kube-proxy       
	  Normal  Starting                 96s                  kube-proxy       
	  Normal  NodeHasSufficientPID     8m26s                kubelet          Node multinode-833343 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m26s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m26s                kubelet          Node multinode-833343 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m26s                kubelet          Node multinode-833343 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 8m26s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m22s                node-controller  Node multinode-833343 event: Registered Node multinode-833343 in Controller
	  Normal  NodeReady                8m8s                 kubelet          Node multinode-833343 status is now: NodeReady
	  Normal  Starting                 103s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  103s (x8 over 103s)  kubelet          Node multinode-833343 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    103s (x8 over 103s)  kubelet          Node multinode-833343 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     103s (x7 over 103s)  kubelet          Node multinode-833343 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  103s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           96s                  node-controller  Node multinode-833343 event: Registered Node multinode-833343 in Controller
	
	
	Name:               multinode-833343-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-833343-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=multinode-833343
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_27T01_11_37_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 01:11:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-833343-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 01:12:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 01:12:07 +0000   Fri, 27 Sep 2024 01:11:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 01:12:07 +0000   Fri, 27 Sep 2024 01:11:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 01:12:07 +0000   Fri, 27 Sep 2024 01:11:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 01:12:07 +0000   Fri, 27 Sep 2024 01:11:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.123
	  Hostname:    multinode-833343-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 03999963ab6d474d86d48ee8404b9f12
	  System UUID:                03999963-ab6d-474d-86d4-8ee8404b9f12
	  Boot ID:                    b3976b1c-595a-4e7e-b7db-5ec396aed414
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-p7vbc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kindnet-mvzbh              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m35s
	  kube-system                 kube-proxy-97gll           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m29s                  kube-proxy  
	  Normal  Starting                 56s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  7m35s (x2 over 7m35s)  kubelet     Node multinode-833343-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m35s (x2 over 7m35s)  kubelet     Node multinode-833343-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m35s (x2 over 7m35s)  kubelet     Node multinode-833343-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m35s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m14s                  kubelet     Node multinode-833343-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  61s (x2 over 61s)      kubelet     Node multinode-833343-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x2 over 61s)      kubelet     Node multinode-833343-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x2 over 61s)      kubelet     Node multinode-833343-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  61s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                41s                    kubelet     Node multinode-833343-m02 status is now: NodeReady
	
	
	Name:               multinode-833343-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-833343-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=multinode-833343
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_27T01_12_16_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 01:12:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-833343-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 01:12:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 01:12:34 +0000   Fri, 27 Sep 2024 01:12:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 01:12:34 +0000   Fri, 27 Sep 2024 01:12:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 01:12:34 +0000   Fri, 27 Sep 2024 01:12:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 01:12:34 +0000   Fri, 27 Sep 2024 01:12:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.88
	  Hostname:    multinode-833343-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4270e5e402674bd8a4c0074a1097463a
	  System UUID:                4270e5e4-0267-4bd8-a4c0-074a1097463a
	  Boot ID:                    12403182-3dd1-4c67-bc01-86674fcf7dcd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-7r2gm       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m39s
	  kube-system                 kube-proxy-6khw5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m33s                  kube-proxy  
	  Normal  Starting                 18s                    kube-proxy  
	  Normal  Starting                 5m42s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  6m39s (x2 over 6m39s)  kubelet     Node multinode-833343-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m39s (x2 over 6m39s)  kubelet     Node multinode-833343-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m39s (x2 over 6m39s)  kubelet     Node multinode-833343-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m39s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m18s                  kubelet     Node multinode-833343-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m48s (x2 over 5m48s)  kubelet     Node multinode-833343-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m48s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m48s (x2 over 5m48s)  kubelet     Node multinode-833343-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m48s (x2 over 5m48s)  kubelet     Node multinode-833343-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m28s                  kubelet     Node multinode-833343-m03 status is now: NodeReady
	  Normal  Starting                 22s                    kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x2 over 22s)      kubelet     Node multinode-833343-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x2 over 22s)      kubelet     Node multinode-833343-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x2 over 22s)      kubelet     Node multinode-833343-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s                     kubelet     Node multinode-833343-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.060082] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.168791] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.140370] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.280641] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +3.940663] systemd-fstab-generator[742]: Ignoring "noauto" option for root device
	[Sep27 01:04] systemd-fstab-generator[878]: Ignoring "noauto" option for root device
	[  +0.062136] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.509098] systemd-fstab-generator[1209]: Ignoring "noauto" option for root device
	[  +0.090756] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.582584] systemd-fstab-generator[1306]: Ignoring "noauto" option for root device
	[  +1.162317] kauditd_printk_skb: 43 callbacks suppressed
	[ +12.305304] kauditd_printk_skb: 38 callbacks suppressed
	[Sep27 01:05] kauditd_printk_skb: 14 callbacks suppressed
	[Sep27 01:10] systemd-fstab-generator[2635]: Ignoring "noauto" option for root device
	[  +0.141473] systemd-fstab-generator[2647]: Ignoring "noauto" option for root device
	[  +0.169742] systemd-fstab-generator[2662]: Ignoring "noauto" option for root device
	[  +0.157394] systemd-fstab-generator[2674]: Ignoring "noauto" option for root device
	[  +0.285752] systemd-fstab-generator[2702]: Ignoring "noauto" option for root device
	[  +4.975176] systemd-fstab-generator[2797]: Ignoring "noauto" option for root device
	[  +0.085291] kauditd_printk_skb: 100 callbacks suppressed
	[  +1.539849] systemd-fstab-generator[2920]: Ignoring "noauto" option for root device
	[  +5.683796] kauditd_printk_skb: 74 callbacks suppressed
	[Sep27 01:11] kauditd_printk_skb: 34 callbacks suppressed
	[  +6.357912] systemd-fstab-generator[3757]: Ignoring "noauto" option for root device
	[ +21.391013] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [e03bbbc7bc9d0eca3cd9c95295ea0e21c133323a0285390ad17665c61c0997a3] <==
	{"level":"info","ts":"2024-09-27T01:10:55.718533Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28dd8e6bbca035f5 switched to configuration voters=(2944666324747433461)"}
	{"level":"info","ts":"2024-09-27T01:10:55.718614Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3b4a61fb6ca7242f","local-member-id":"28dd8e6bbca035f5","added-peer-id":"28dd8e6bbca035f5","added-peer-peer-urls":["https://192.168.39.203:2380"]}
	{"level":"info","ts":"2024-09-27T01:10:55.718729Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3b4a61fb6ca7242f","local-member-id":"28dd8e6bbca035f5","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T01:10:55.718826Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T01:10:55.725053Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-27T01:10:55.725335Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"28dd8e6bbca035f5","initial-advertise-peer-urls":["https://192.168.39.203:2380"],"listen-peer-urls":["https://192.168.39.203:2380"],"advertise-client-urls":["https://192.168.39.203:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.203:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-27T01:10:55.725376Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-27T01:10:55.725516Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.203:2380"}
	{"level":"info","ts":"2024-09-27T01:10:55.725541Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.203:2380"}
	{"level":"info","ts":"2024-09-27T01:10:56.964378Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28dd8e6bbca035f5 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-27T01:10:56.964428Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28dd8e6bbca035f5 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-27T01:10:56.964476Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28dd8e6bbca035f5 received MsgPreVoteResp from 28dd8e6bbca035f5 at term 2"}
	{"level":"info","ts":"2024-09-27T01:10:56.964495Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28dd8e6bbca035f5 became candidate at term 3"}
	{"level":"info","ts":"2024-09-27T01:10:56.964501Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28dd8e6bbca035f5 received MsgVoteResp from 28dd8e6bbca035f5 at term 3"}
	{"level":"info","ts":"2024-09-27T01:10:56.964515Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28dd8e6bbca035f5 became leader at term 3"}
	{"level":"info","ts":"2024-09-27T01:10:56.964523Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 28dd8e6bbca035f5 elected leader 28dd8e6bbca035f5 at term 3"}
	{"level":"info","ts":"2024-09-27T01:10:56.970057Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"28dd8e6bbca035f5","local-member-attributes":"{Name:multinode-833343 ClientURLs:[https://192.168.39.203:2379]}","request-path":"/0/members/28dd8e6bbca035f5/attributes","cluster-id":"3b4a61fb6ca7242f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-27T01:10:56.970133Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-27T01:10:56.970388Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-27T01:10:56.970866Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-27T01:10:56.970949Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-27T01:10:56.971849Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-27T01:10:56.972371Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-27T01:10:56.972823Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.203:2379"}
	{"level":"info","ts":"2024-09-27T01:10:56.974049Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [e8d19f9308bbcc24c8affd654a48314a6d36cf341176d734d49d3c07f2765ebf] <==
	{"level":"info","ts":"2024-09-27T01:04:07.359364Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-27T01:04:07.361592Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-09-27T01:05:06.180644Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"254.349248ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-27T01:05:06.181531Z","caller":"traceutil/trace.go:171","msg":"trace[1417266234] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:463; }","duration":"255.353084ms","start":"2024-09-27T01:05:05.926170Z","end":"2024-09-27T01:05:06.181523Z","steps":["trace[1417266234] 'range keys from in-memory index tree'  (duration: 239.218259ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T01:05:06.180573Z","caller":"traceutil/trace.go:171","msg":"trace[261896963] linearizableReadLoop","detail":"{readStateIndex:484; appliedIndex:483; }","duration":"150.197465ms","start":"2024-09-27T01:05:06.030342Z","end":"2024-09-27T01:05:06.180539Z","steps":["trace[261896963] 'read index received'  (duration: 149.994136ms)","trace[261896963] 'applied index is now lower than readState.Index'  (duration: 202.747µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-27T01:05:06.181475Z","caller":"traceutil/trace.go:171","msg":"trace[1196070712] transaction","detail":"{read_only:false; response_revision:464; number_of_response:1; }","duration":"250.300115ms","start":"2024-09-27T01:05:05.931152Z","end":"2024-09-27T01:05:06.181452Z","steps":["trace[1196070712] 'process raft request'  (duration: 249.231928ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T01:05:06.261451Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"211.976739ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1116"}
	{"level":"info","ts":"2024-09-27T01:05:06.261640Z","caller":"traceutil/trace.go:171","msg":"trace[1794335798] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:464; }","duration":"212.183711ms","start":"2024-09-27T01:05:06.049437Z","end":"2024-09-27T01:05:06.261621Z","steps":["trace[1794335798] 'agreement among raft nodes before linearized reading'  (duration: 211.921091ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T01:05:06.261452Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"165.035468ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-833343-m02\" ","response":"range_response_count:1 size:2894"}
	{"level":"info","ts":"2024-09-27T01:05:06.261845Z","caller":"traceutil/trace.go:171","msg":"trace[412796641] range","detail":"{range_begin:/registry/minions/multinode-833343-m02; range_end:; response_count:1; response_revision:464; }","duration":"165.438773ms","start":"2024-09-27T01:05:06.096393Z","end":"2024-09-27T01:05:06.261831Z","steps":["trace[412796641] 'agreement among raft nodes before linearized reading'  (duration: 165.010487ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T01:05:58.742742Z","caller":"traceutil/trace.go:171","msg":"trace[572252345] linearizableReadLoop","detail":"{readStateIndex:601; appliedIndex:599; }","duration":"163.646673ms","start":"2024-09-27T01:05:58.579071Z","end":"2024-09-27T01:05:58.742718Z","steps":["trace[572252345] 'read index received'  (duration: 86.459301ms)","trace[572252345] 'applied index is now lower than readState.Index'  (duration: 77.186854ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-27T01:05:58.742917Z","caller":"traceutil/trace.go:171","msg":"trace[1696719071] transaction","detail":"{read_only:false; response_revision:568; number_of_response:1; }","duration":"202.617876ms","start":"2024-09-27T01:05:58.540291Z","end":"2024-09-27T01:05:58.742909Z","steps":["trace[1696719071] 'process raft request'  (duration: 125.231042ms)","trace[1696719071] 'compare'  (duration: 77.063264ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-27T01:05:58.743169Z","caller":"traceutil/trace.go:171","msg":"trace[1870578884] transaction","detail":"{read_only:false; response_revision:569; number_of_response:1; }","duration":"200.116959ms","start":"2024-09-27T01:05:58.543045Z","end":"2024-09-27T01:05:58.743162Z","steps":["trace[1870578884] 'process raft request'  (duration: 199.635893ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T01:05:58.743328Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"164.233806ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-27T01:05:58.743376Z","caller":"traceutil/trace.go:171","msg":"trace[1599516607] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:569; }","duration":"164.302101ms","start":"2024-09-27T01:05:58.579066Z","end":"2024-09-27T01:05:58.743368Z","steps":["trace[1599516607] 'agreement among raft nodes before linearized reading'  (duration: 164.220097ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T01:09:15.331531Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-27T01:09:15.331636Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-833343","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.203:2380"],"advertise-client-urls":["https://192.168.39.203:2379"]}
	{"level":"warn","ts":"2024-09-27T01:09:15.337072Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-27T01:09:15.337183Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-27T01:09:15.388176Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.203:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-27T01:09:15.388254Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.203:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-27T01:09:15.388334Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"28dd8e6bbca035f5","current-leader-member-id":"28dd8e6bbca035f5"}
	{"level":"info","ts":"2024-09-27T01:09:15.390612Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.203:2380"}
	{"level":"info","ts":"2024-09-27T01:09:15.390719Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.203:2380"}
	{"level":"info","ts":"2024-09-27T01:09:15.390747Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-833343","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.203:2380"],"advertise-client-urls":["https://192.168.39.203:2379"]}
	
	
	==> kernel <==
	 01:12:37 up 9 min,  0 users,  load average: 0.22, 0.15, 0.09
	Linux multinode-833343 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [1dcd36b671b63d1ecfc7cb56fd9e7c9d36f92403f64729bdc762dff2d25501e1] <==
	I0927 01:11:51.165003       1 main.go:322] Node multinode-833343-m03 has CIDR [10.244.3.0/24] 
	I0927 01:12:01.162366       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0927 01:12:01.162421       1 main.go:322] Node multinode-833343-m03 has CIDR [10.244.3.0/24] 
	I0927 01:12:01.162573       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0927 01:12:01.162580       1 main.go:299] handling current node
	I0927 01:12:01.162590       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0927 01:12:01.162595       1 main.go:322] Node multinode-833343-m02 has CIDR [10.244.1.0/24] 
	I0927 01:12:11.162890       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0927 01:12:11.163029       1 main.go:299] handling current node
	I0927 01:12:11.163067       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0927 01:12:11.163096       1 main.go:322] Node multinode-833343-m02 has CIDR [10.244.1.0/24] 
	I0927 01:12:11.163272       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0927 01:12:11.163312       1 main.go:322] Node multinode-833343-m03 has CIDR [10.244.3.0/24] 
	I0927 01:12:21.161581       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0927 01:12:21.161737       1 main.go:299] handling current node
	I0927 01:12:21.161849       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0927 01:12:21.161875       1 main.go:322] Node multinode-833343-m02 has CIDR [10.244.1.0/24] 
	I0927 01:12:21.162051       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0927 01:12:21.162076       1 main.go:322] Node multinode-833343-m03 has CIDR [10.244.2.0/24] 
	I0927 01:12:31.166701       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0927 01:12:31.166935       1 main.go:299] handling current node
	I0927 01:12:31.167050       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0927 01:12:31.167078       1 main.go:322] Node multinode-833343-m02 has CIDR [10.244.1.0/24] 
	I0927 01:12:31.167463       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0927 01:12:31.167544       1 main.go:322] Node multinode-833343-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [9de6deb0a88fa5b3b6dd6eafc2ab9fb4555f20b3bdd03fcbd26ae4f4a22c9a06] <==
	I0927 01:08:28.663281       1 main.go:322] Node multinode-833343-m03 has CIDR [10.244.3.0/24] 
	I0927 01:08:38.654361       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0927 01:08:38.654503       1 main.go:299] handling current node
	I0927 01:08:38.654533       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0927 01:08:38.654551       1 main.go:322] Node multinode-833343-m02 has CIDR [10.244.1.0/24] 
	I0927 01:08:38.654686       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0927 01:08:38.654706       1 main.go:322] Node multinode-833343-m03 has CIDR [10.244.3.0/24] 
	I0927 01:08:48.658862       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0927 01:08:48.658910       1 main.go:299] handling current node
	I0927 01:08:48.658942       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0927 01:08:48.658948       1 main.go:322] Node multinode-833343-m02 has CIDR [10.244.1.0/24] 
	I0927 01:08:48.659080       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0927 01:08:48.659104       1 main.go:322] Node multinode-833343-m03 has CIDR [10.244.3.0/24] 
	I0927 01:08:58.656818       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0927 01:08:58.656930       1 main.go:299] handling current node
	I0927 01:08:58.656963       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0927 01:08:58.656971       1 main.go:322] Node multinode-833343-m02 has CIDR [10.244.1.0/24] 
	I0927 01:08:58.657179       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0927 01:08:58.657213       1 main.go:322] Node multinode-833343-m03 has CIDR [10.244.3.0/24] 
	I0927 01:09:08.654310       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0927 01:09:08.654444       1 main.go:322] Node multinode-833343-m03 has CIDR [10.244.3.0/24] 
	I0927 01:09:08.654632       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0927 01:09:08.654660       1 main.go:299] handling current node
	I0927 01:09:08.654801       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0927 01:09:08.654828       1 main.go:322] Node multinode-833343-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [a9182a23994890788fd815a5d96b4084212911e8020da7c535bf8659a4c9343e] <==
	W0927 01:09:15.354815       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:09:15.354875       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:09:15.354989       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:09:15.355132       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:09:15.355190       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:09:15.355323       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:09:15.355417       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:09:15.355525       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0927 01:09:15.355668       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	W0927 01:09:15.355880       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:09:15.355992       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:09:15.356090       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:09:15.356148       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:09:15.356280       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:09:15.356378       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:09:15.356486       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:09:15.356621       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:09:15.356733       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0927 01:09:15.356870       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0927 01:09:15.357254       1 controller.go:157] Shutting down quota evaluator
	I0927 01:09:15.357283       1 controller.go:176] quota evaluator worker shutdown
	I0927 01:09:15.357709       1 controller.go:176] quota evaluator worker shutdown
	I0927 01:09:15.357742       1 controller.go:176] quota evaluator worker shutdown
	I0927 01:09:15.358989       1 controller.go:176] quota evaluator worker shutdown
	I0927 01:09:15.359029       1 controller.go:176] quota evaluator worker shutdown
	
	
	==> kube-apiserver [de2f589dec797e5330936e8eb4a6bc9564a1554caebee16fd82ea27360e177e7] <==
	I0927 01:10:58.354899       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0927 01:10:58.355085       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0927 01:10:58.366076       1 shared_informer.go:320] Caches are synced for configmaps
	I0927 01:10:58.368457       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0927 01:10:58.368639       1 aggregator.go:171] initial CRD sync complete...
	I0927 01:10:58.368672       1 autoregister_controller.go:144] Starting autoregister controller
	I0927 01:10:58.368695       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0927 01:10:58.374161       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0927 01:10:58.409256       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0927 01:10:58.418809       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0927 01:10:58.423109       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0927 01:10:58.423150       1 policy_source.go:224] refreshing policies
	E0927 01:10:58.423706       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0927 01:10:58.456138       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0927 01:10:58.456913       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0927 01:10:58.469503       1 cache.go:39] Caches are synced for autoregister controller
	I0927 01:10:58.469830       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0927 01:10:59.269484       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0927 01:11:00.785429       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0927 01:11:00.909039       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0927 01:11:00.919097       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0927 01:11:01.011659       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0927 01:11:01.019825       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0927 01:11:01.779136       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0927 01:11:02.024579       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [0a3e4bfb234ad9036b6eb4888da6fae5cc31b141799963a1ad4d1ca4982e70d1] <==
	I0927 01:06:48.622568       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m03"
	I0927 01:06:48.623048       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-833343-m02"
	I0927 01:06:49.979172       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-833343-m03\" does not exist"
	I0927 01:06:49.979252       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-833343-m02"
	I0927 01:06:49.986176       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-833343-m03" podCIDRs=["10.244.3.0/24"]
	I0927 01:06:49.986221       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m03"
	I0927 01:06:49.986484       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m03"
	I0927 01:06:49.997720       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m03"
	I0927 01:06:50.159554       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m03"
	I0927 01:06:50.512715       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m03"
	I0927 01:06:50.986434       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m03"
	I0927 01:07:00.154482       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m03"
	I0927 01:07:09.428533       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-833343-m02"
	I0927 01:07:09.428687       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m03"
	I0927 01:07:09.437840       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m03"
	I0927 01:07:10.955085       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m03"
	I0927 01:07:45.971173       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m02"
	I0927 01:07:45.972210       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-833343-m03"
	I0927 01:07:45.988481       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m02"
	I0927 01:07:46.019127       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="9.319929ms"
	I0927 01:07:46.019256       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="38.435µs"
	I0927 01:07:51.022623       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m03"
	I0927 01:07:51.048287       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m03"
	I0927 01:07:51.089507       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m02"
	I0927 01:08:01.168060       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m03"
	
	
	==> kube-controller-manager [972e11adbd7e1ca185941afc2f00d0ae997a0871ffcf9d5d5136782549973278] <==
	I0927 01:11:56.358738       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m02"
	I0927 01:11:56.369842       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m02"
	I0927 01:11:56.379657       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="64.675µs"
	I0927 01:11:56.393927       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="50.395µs"
	I0927 01:11:56.725520       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m02"
	I0927 01:11:59.949413       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="9.183225ms"
	I0927 01:11:59.949500       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="34.733µs"
	I0927 01:12:07.295817       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m02"
	I0927 01:12:13.997011       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m03"
	I0927 01:12:14.017911       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m03"
	I0927 01:12:14.248378       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m03"
	I0927 01:12:14.248469       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-833343-m02"
	I0927 01:12:15.491793       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-833343-m03\" does not exist"
	I0927 01:12:15.493680       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-833343-m02"
	I0927 01:12:15.511931       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-833343-m03" podCIDRs=["10.244.2.0/24"]
	I0927 01:12:15.512148       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m03"
	I0927 01:12:15.512266       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m03"
	I0927 01:12:15.778704       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m03"
	I0927 01:12:16.114716       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m03"
	I0927 01:12:16.763621       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m03"
	I0927 01:12:25.523414       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m03"
	I0927 01:12:34.079978       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m03"
	I0927 01:12:34.080284       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-833343-m02"
	I0927 01:12:34.093545       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m03"
	I0927 01:12:36.745336       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m03"
	
	
	==> kube-proxy [51a77d274b9ce56df8fc9514cbf7cb259a438f500da3503cfdc2d9764caa2abe] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0927 01:04:17.487929       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0927 01:04:17.496838       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.203"]
	E0927 01:04:17.496990       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0927 01:04:17.531212       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0927 01:04:17.531263       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0927 01:04:17.531287       1 server_linux.go:169] "Using iptables Proxier"
	I0927 01:04:17.533715       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0927 01:04:17.534070       1 server.go:483] "Version info" version="v1.31.1"
	I0927 01:04:17.534099       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 01:04:17.535621       1 config.go:199] "Starting service config controller"
	I0927 01:04:17.535661       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0927 01:04:17.535693       1 config.go:105] "Starting endpoint slice config controller"
	I0927 01:04:17.535696       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0927 01:04:17.536291       1 config.go:328] "Starting node config controller"
	I0927 01:04:17.536319       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0927 01:04:17.636159       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0927 01:04:17.636176       1 shared_informer.go:320] Caches are synced for service config
	I0927 01:04:17.636488       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [9a11073b6bcce8287092dff3277aa628398dc7c379a9fe1009f7d7896aa33dc6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0927 01:11:00.191870       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0927 01:11:00.245266       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.203"]
	E0927 01:11:00.246014       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0927 01:11:00.415823       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0927 01:11:00.415871       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0927 01:11:00.415928       1 server_linux.go:169] "Using iptables Proxier"
	I0927 01:11:00.422469       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0927 01:11:00.422729       1 server.go:483] "Version info" version="v1.31.1"
	I0927 01:11:00.422740       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 01:11:00.428745       1 config.go:199] "Starting service config controller"
	I0927 01:11:00.443848       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0927 01:11:00.443365       1 config.go:105] "Starting endpoint slice config controller"
	I0927 01:11:00.443920       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0927 01:11:00.444263       1 config.go:328] "Starting node config controller"
	I0927 01:11:00.444287       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0927 01:11:00.544827       1 shared_informer.go:320] Caches are synced for node config
	I0927 01:11:00.544875       1 shared_informer.go:320] Caches are synced for service config
	I0927 01:11:00.544914       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [15018f9c92547a079d4127cb3d77d4cbdd1c8ab51fb731b1db97fed907c807c8] <==
	W0927 01:04:08.935544       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0927 01:04:08.935718       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	E0927 01:04:08.935740       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 01:04:09.738277       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0927 01:04:09.738387       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 01:04:09.767255       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0927 01:04:09.767360       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 01:04:09.869364       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0927 01:04:09.869488       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0927 01:04:09.870702       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0927 01:04:09.870746       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0927 01:04:09.976735       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0927 01:04:09.976848       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0927 01:04:10.031665       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0927 01:04:10.031718       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 01:04:10.114879       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0927 01:04:10.114939       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 01:04:10.228979       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0927 01:04:10.229029       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 01:04:10.235543       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0927 01:04:10.235600       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 01:04:10.460293       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0927 01:04:10.460388       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0927 01:04:13.328886       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0927 01:09:15.330270       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [343fd95487e49d0179ba9887b4859a8f1c0b02d052e7c35f6871683bced37038] <==
	I0927 01:10:56.223424       1 serving.go:386] Generated self-signed cert in-memory
	W0927 01:10:58.323187       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0927 01:10:58.323233       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0927 01:10:58.323245       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0927 01:10:58.323256       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0927 01:10:58.421874       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0927 01:10:58.422440       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 01:10:58.430312       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0927 01:10:58.430539       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0927 01:10:58.430599       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0927 01:10:58.430640       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0927 01:10:58.535079       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 27 01:11:04 multinode-833343 kubelet[2927]: E0927 01:11:04.306430    2927 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727399464306220725,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:11:05 multinode-833343 kubelet[2927]: I0927 01:11:05.685003    2927 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 27 01:11:14 multinode-833343 kubelet[2927]: E0927 01:11:14.308554    2927 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727399474308090806,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:11:14 multinode-833343 kubelet[2927]: E0927 01:11:14.309831    2927 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727399474308090806,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:11:24 multinode-833343 kubelet[2927]: E0927 01:11:24.313898    2927 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727399484313495405,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:11:24 multinode-833343 kubelet[2927]: E0927 01:11:24.314300    2927 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727399484313495405,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:11:34 multinode-833343 kubelet[2927]: E0927 01:11:34.316315    2927 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727399494315806920,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:11:34 multinode-833343 kubelet[2927]: E0927 01:11:34.316360    2927 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727399494315806920,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:11:44 multinode-833343 kubelet[2927]: E0927 01:11:44.321169    2927 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727399504320572690,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:11:44 multinode-833343 kubelet[2927]: E0927 01:11:44.321532    2927 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727399504320572690,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:11:54 multinode-833343 kubelet[2927]: E0927 01:11:54.324180    2927 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727399514323818629,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:11:54 multinode-833343 kubelet[2927]: E0927 01:11:54.324221    2927 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727399514323818629,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:11:54 multinode-833343 kubelet[2927]: E0927 01:11:54.338864    2927 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 27 01:11:54 multinode-833343 kubelet[2927]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 27 01:11:54 multinode-833343 kubelet[2927]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 27 01:11:54 multinode-833343 kubelet[2927]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 27 01:11:54 multinode-833343 kubelet[2927]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 27 01:12:04 multinode-833343 kubelet[2927]: E0927 01:12:04.327333    2927 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727399524326434555,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:12:04 multinode-833343 kubelet[2927]: E0927 01:12:04.327381    2927 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727399524326434555,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:12:14 multinode-833343 kubelet[2927]: E0927 01:12:14.329214    2927 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727399534328524771,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:12:14 multinode-833343 kubelet[2927]: E0927 01:12:14.329239    2927 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727399534328524771,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:12:24 multinode-833343 kubelet[2927]: E0927 01:12:24.330947    2927 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727399544330370620,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:12:24 multinode-833343 kubelet[2927]: E0927 01:12:24.330971    2927 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727399544330370620,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:12:34 multinode-833343 kubelet[2927]: E0927 01:12:34.332238    2927 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727399554331944811,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:12:34 multinode-833343 kubelet[2927]: E0927 01:12:34.332261    2927 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727399554331944811,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0927 01:12:36.531799   52704 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19711-14935/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
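The "bufio.Scanner: token too long" error above comes from Go's default 64 KiB per-line scanner limit: lastStart.txt contains single log lines (such as the cluster-config dump reproduced further down) that exceed it. A minimal illustrative sketch of reading such a file with an enlarged buffer, assuming the same path; this is not minikube's own logs code:

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	f, err := os.Open("/home/jenkins/minikube-integration/19711-14935/.minikube/logs/lastStart.txt")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// The default max token size is 64 KiB; very long single log lines
	// exceed it and produce "bufio.Scanner: token too long".
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // allow lines up to 1 MiB
	for sc.Scan() {
		_ = sc.Text() // process each line of the last start log
	}
	if err := sc.Err(); err != nil {
		fmt.Println("scan error:", err) // would report "token too long" without Buffer
	}
}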
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-833343 -n multinode-833343
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-833343 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (326.24s)
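Both kube-proxy blocks in the dump above open with a truncated "Error cleaning up nftables rules" message: at startup kube-proxy tries to clean up any leftover nftables rules by feeding a small ruleset to nft over stdin, the guest kernel rejects it with "Operation not supported", and kube-proxy continues with the iptables backend ("Using iptables Proxier"). A hypothetical Go reproduction of that probe, assuming nft is on PATH inside the VM; this is not kube-proxy's actual cleanup code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Feed the same one-line ruleset seen in the log to nft via stdin,
	// which is where the "/dev/stdin:1:1-25" location in the error comes from.
	cmd := exec.Command("nft", "-f", "/dev/stdin")
	cmd.Stdin = strings.NewReader("add table ip kube-proxy\n")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("nft failed: %v\n%s", err, out) // "Operation not supported" on this guest kernel
		return
	}
	fmt.Println("nftables backend is available")
}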

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (144.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-833343 stop
E0927 01:13:01.244666   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:13:13.555539   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/functional-774677/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-833343 stop: exit status 82 (2m0.466656818s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-833343-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-833343 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-833343 status
multinode_test.go:351: (dbg) Done: out/minikube-linux-amd64 -p multinode-833343 status: (18.805473567s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-833343 status --alsologtostderr
multinode_test.go:358: (dbg) Done: out/minikube-linux-amd64 -p multinode-833343 status --alsologtostderr: (3.390838645s)
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-linux-amd64 -p multinode-833343 status --alsologtostderr": 
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-linux-amd64 -p multinode-833343 status --alsologtostderr": 
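The two assertions above expect every remaining node's host and kubelet to be reported as Stopped; because `minikube stop` timed out on multinode-833343-m02 (exit status 82, GUEST_STOP_TIMEOUT), the counts come up short. A minimal sketch of that kind of check, not the actual multinode_test.go code, assuming the "host: Stopped" / "kubelet: Stopped" wording used by minikube status:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// statusOut stands in for the captured output of
	// `out/minikube-linux-amd64 -p multinode-833343 status --alsologtostderr`.
	statusOut := "host: Stopped\nkubelet: Stopped\nhost: Running\nkubelet: Running\n"

	wantStopped := 2 // two nodes remain after `node delete m03`
	if got := strings.Count(statusOut, "host: Stopped"); got != wantStopped {
		fmt.Printf("incorrect number of stopped hosts: got %d, want %d\n", got, wantStopped)
	}
	if got := strings.Count(statusOut, "kubelet: Stopped"); got != wantStopped {
		fmt.Printf("incorrect number of stopped kubelets: got %d, want %d\n", got, wantStopped)
	}
}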
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-833343 -n multinode-833343
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-833343 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-833343 logs -n 25: (1.489855738s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-833343 ssh -n                                                                 | multinode-833343 | jenkins | v1.34.0 | 27 Sep 24 01:06 UTC | 27 Sep 24 01:06 UTC |
	|         | multinode-833343-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-833343 cp multinode-833343-m02:/home/docker/cp-test.txt                       | multinode-833343 | jenkins | v1.34.0 | 27 Sep 24 01:06 UTC | 27 Sep 24 01:06 UTC |
	|         | multinode-833343:/home/docker/cp-test_multinode-833343-m02_multinode-833343.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-833343 ssh -n                                                                 | multinode-833343 | jenkins | v1.34.0 | 27 Sep 24 01:06 UTC | 27 Sep 24 01:06 UTC |
	|         | multinode-833343-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-833343 ssh -n multinode-833343 sudo cat                                       | multinode-833343 | jenkins | v1.34.0 | 27 Sep 24 01:06 UTC | 27 Sep 24 01:06 UTC |
	|         | /home/docker/cp-test_multinode-833343-m02_multinode-833343.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-833343 cp multinode-833343-m02:/home/docker/cp-test.txt                       | multinode-833343 | jenkins | v1.34.0 | 27 Sep 24 01:06 UTC | 27 Sep 24 01:06 UTC |
	|         | multinode-833343-m03:/home/docker/cp-test_multinode-833343-m02_multinode-833343-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-833343 ssh -n                                                                 | multinode-833343 | jenkins | v1.34.0 | 27 Sep 24 01:06 UTC | 27 Sep 24 01:06 UTC |
	|         | multinode-833343-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-833343 ssh -n multinode-833343-m03 sudo cat                                   | multinode-833343 | jenkins | v1.34.0 | 27 Sep 24 01:06 UTC | 27 Sep 24 01:06 UTC |
	|         | /home/docker/cp-test_multinode-833343-m02_multinode-833343-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-833343 cp testdata/cp-test.txt                                                | multinode-833343 | jenkins | v1.34.0 | 27 Sep 24 01:06 UTC | 27 Sep 24 01:06 UTC |
	|         | multinode-833343-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-833343 ssh -n                                                                 | multinode-833343 | jenkins | v1.34.0 | 27 Sep 24 01:06 UTC | 27 Sep 24 01:06 UTC |
	|         | multinode-833343-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-833343 cp multinode-833343-m03:/home/docker/cp-test.txt                       | multinode-833343 | jenkins | v1.34.0 | 27 Sep 24 01:06 UTC | 27 Sep 24 01:06 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3824164229/001/cp-test_multinode-833343-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-833343 ssh -n                                                                 | multinode-833343 | jenkins | v1.34.0 | 27 Sep 24 01:06 UTC | 27 Sep 24 01:06 UTC |
	|         | multinode-833343-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-833343 cp multinode-833343-m03:/home/docker/cp-test.txt                       | multinode-833343 | jenkins | v1.34.0 | 27 Sep 24 01:06 UTC | 27 Sep 24 01:06 UTC |
	|         | multinode-833343:/home/docker/cp-test_multinode-833343-m03_multinode-833343.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-833343 ssh -n                                                                 | multinode-833343 | jenkins | v1.34.0 | 27 Sep 24 01:06 UTC | 27 Sep 24 01:06 UTC |
	|         | multinode-833343-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-833343 ssh -n multinode-833343 sudo cat                                       | multinode-833343 | jenkins | v1.34.0 | 27 Sep 24 01:06 UTC | 27 Sep 24 01:06 UTC |
	|         | /home/docker/cp-test_multinode-833343-m03_multinode-833343.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-833343 cp multinode-833343-m03:/home/docker/cp-test.txt                       | multinode-833343 | jenkins | v1.34.0 | 27 Sep 24 01:06 UTC | 27 Sep 24 01:06 UTC |
	|         | multinode-833343-m02:/home/docker/cp-test_multinode-833343-m03_multinode-833343-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-833343 ssh -n                                                                 | multinode-833343 | jenkins | v1.34.0 | 27 Sep 24 01:06 UTC | 27 Sep 24 01:06 UTC |
	|         | multinode-833343-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-833343 ssh -n multinode-833343-m02 sudo cat                                   | multinode-833343 | jenkins | v1.34.0 | 27 Sep 24 01:06 UTC | 27 Sep 24 01:06 UTC |
	|         | /home/docker/cp-test_multinode-833343-m03_multinode-833343-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-833343 node stop m03                                                          | multinode-833343 | jenkins | v1.34.0 | 27 Sep 24 01:06 UTC | 27 Sep 24 01:06 UTC |
	| node    | multinode-833343 node start                                                             | multinode-833343 | jenkins | v1.34.0 | 27 Sep 24 01:06 UTC | 27 Sep 24 01:07 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-833343                                                                | multinode-833343 | jenkins | v1.34.0 | 27 Sep 24 01:07 UTC |                     |
	| stop    | -p multinode-833343                                                                     | multinode-833343 | jenkins | v1.34.0 | 27 Sep 24 01:07 UTC |                     |
	| start   | -p multinode-833343                                                                     | multinode-833343 | jenkins | v1.34.0 | 27 Sep 24 01:09 UTC | 27 Sep 24 01:12 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-833343                                                                | multinode-833343 | jenkins | v1.34.0 | 27 Sep 24 01:12 UTC |                     |
	| node    | multinode-833343 node delete                                                            | multinode-833343 | jenkins | v1.34.0 | 27 Sep 24 01:12 UTC | 27 Sep 24 01:12 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-833343 stop                                                                   | multinode-833343 | jenkins | v1.34.0 | 27 Sep 24 01:12 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 01:09:14
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 01:09:14.103789   51589 out.go:345] Setting OutFile to fd 1 ...
	I0927 01:09:14.104044   51589 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 01:09:14.104052   51589 out.go:358] Setting ErrFile to fd 2...
	I0927 01:09:14.104057   51589 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 01:09:14.104225   51589 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-14935/.minikube/bin
	I0927 01:09:14.104738   51589 out.go:352] Setting JSON to false
	I0927 01:09:14.105666   51589 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6699,"bootTime":1727392655,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 01:09:14.105759   51589 start.go:139] virtualization: kvm guest
	I0927 01:09:14.107888   51589 out.go:177] * [multinode-833343] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0927 01:09:14.109198   51589 notify.go:220] Checking for updates...
	I0927 01:09:14.109221   51589 out.go:177]   - MINIKUBE_LOCATION=19711
	I0927 01:09:14.110695   51589 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 01:09:14.112076   51589 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 01:09:14.113399   51589 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-14935/.minikube
	I0927 01:09:14.114658   51589 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0927 01:09:14.116092   51589 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 01:09:14.117709   51589 config.go:182] Loaded profile config "multinode-833343": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 01:09:14.117799   51589 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 01:09:14.118269   51589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 01:09:14.118304   51589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:09:14.133432   51589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36475
	I0927 01:09:14.133889   51589 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:09:14.134397   51589 main.go:141] libmachine: Using API Version  1
	I0927 01:09:14.134437   51589 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:09:14.134771   51589 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:09:14.134916   51589 main.go:141] libmachine: (multinode-833343) Calling .DriverName
	I0927 01:09:14.169608   51589 out.go:177] * Using the kvm2 driver based on existing profile
	I0927 01:09:14.170906   51589 start.go:297] selected driver: kvm2
	I0927 01:09:14.170918   51589 start.go:901] validating driver "kvm2" against &{Name:multinode-833343 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.1 ClusterName:multinode-833343 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.123 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.88 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false insp
ektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptim
izations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 01:09:14.171041   51589 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 01:09:14.171388   51589 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 01:09:14.171461   51589 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19711-14935/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0927 01:09:14.186412   51589 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0927 01:09:14.187093   51589 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 01:09:14.187138   51589 cni.go:84] Creating CNI manager for ""
	I0927 01:09:14.187200   51589 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0927 01:09:14.187276   51589 start.go:340] cluster config:
	{Name:multinode-833343 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-833343 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.123 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.88 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow
:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetCli
entPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 01:09:14.187472   51589 iso.go:125] acquiring lock: {Name:mkc202a14fbe20838e31e7efc444c4f65351f9ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 01:09:14.189325   51589 out.go:177] * Starting "multinode-833343" primary control-plane node in "multinode-833343" cluster
	I0927 01:09:14.190624   51589 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 01:09:14.190657   51589 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0927 01:09:14.190665   51589 cache.go:56] Caching tarball of preloaded images
	I0927 01:09:14.190747   51589 preload.go:172] Found /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0927 01:09:14.190760   51589 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0927 01:09:14.190863   51589 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/multinode-833343/config.json ...
	I0927 01:09:14.191046   51589 start.go:360] acquireMachinesLock for multinode-833343: {Name:mkef866a3f2eb57063758ef06140cca3a2e40b18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 01:09:14.191082   51589 start.go:364] duration metric: took 21.238µs to acquireMachinesLock for "multinode-833343"
	I0927 01:09:14.191095   51589 start.go:96] Skipping create...Using existing machine configuration
	I0927 01:09:14.191102   51589 fix.go:54] fixHost starting: 
	I0927 01:09:14.191387   51589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 01:09:14.191419   51589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:09:14.205487   51589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35521
	I0927 01:09:14.205977   51589 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:09:14.206470   51589 main.go:141] libmachine: Using API Version  1
	I0927 01:09:14.206489   51589 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:09:14.206775   51589 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:09:14.206943   51589 main.go:141] libmachine: (multinode-833343) Calling .DriverName
	I0927 01:09:14.207094   51589 main.go:141] libmachine: (multinode-833343) Calling .GetState
	I0927 01:09:14.208547   51589 fix.go:112] recreateIfNeeded on multinode-833343: state=Running err=<nil>
	W0927 01:09:14.208565   51589 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 01:09:14.210515   51589 out.go:177] * Updating the running kvm2 "multinode-833343" VM ...
	I0927 01:09:14.211758   51589 machine.go:93] provisionDockerMachine start ...
	I0927 01:09:14.211774   51589 main.go:141] libmachine: (multinode-833343) Calling .DriverName
	I0927 01:09:14.211940   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHHostname
	I0927 01:09:14.214354   51589 main.go:141] libmachine: (multinode-833343) DBG | domain multinode-833343 has defined MAC address 52:54:00:d6:02:23 in network mk-multinode-833343
	I0927 01:09:14.214794   51589 main.go:141] libmachine: (multinode-833343) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:02:23", ip: ""} in network mk-multinode-833343: {Iface:virbr1 ExpiryTime:2024-09-27 02:03:44 +0000 UTC Type:0 Mac:52:54:00:d6:02:23 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-833343 Clientid:01:52:54:00:d6:02:23}
	I0927 01:09:14.214820   51589 main.go:141] libmachine: (multinode-833343) DBG | domain multinode-833343 has defined IP address 192.168.39.203 and MAC address 52:54:00:d6:02:23 in network mk-multinode-833343
	I0927 01:09:14.214977   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHPort
	I0927 01:09:14.215121   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHKeyPath
	I0927 01:09:14.215276   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHKeyPath
	I0927 01:09:14.215412   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHUsername
	I0927 01:09:14.215579   51589 main.go:141] libmachine: Using SSH client type: native
	I0927 01:09:14.215759   51589 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0927 01:09:14.215772   51589 main.go:141] libmachine: About to run SSH command:
	hostname
	I0927 01:09:14.316191   51589 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-833343
	
	I0927 01:09:14.316228   51589 main.go:141] libmachine: (multinode-833343) Calling .GetMachineName
	I0927 01:09:14.316479   51589 buildroot.go:166] provisioning hostname "multinode-833343"
	I0927 01:09:14.316506   51589 main.go:141] libmachine: (multinode-833343) Calling .GetMachineName
	I0927 01:09:14.316713   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHHostname
	I0927 01:09:14.319579   51589 main.go:141] libmachine: (multinode-833343) DBG | domain multinode-833343 has defined MAC address 52:54:00:d6:02:23 in network mk-multinode-833343
	I0927 01:09:14.319971   51589 main.go:141] libmachine: (multinode-833343) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:02:23", ip: ""} in network mk-multinode-833343: {Iface:virbr1 ExpiryTime:2024-09-27 02:03:44 +0000 UTC Type:0 Mac:52:54:00:d6:02:23 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-833343 Clientid:01:52:54:00:d6:02:23}
	I0927 01:09:14.319999   51589 main.go:141] libmachine: (multinode-833343) DBG | domain multinode-833343 has defined IP address 192.168.39.203 and MAC address 52:54:00:d6:02:23 in network mk-multinode-833343
	I0927 01:09:14.320134   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHPort
	I0927 01:09:14.320289   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHKeyPath
	I0927 01:09:14.320406   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHKeyPath
	I0927 01:09:14.320501   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHUsername
	I0927 01:09:14.320643   51589 main.go:141] libmachine: Using SSH client type: native
	I0927 01:09:14.320803   51589 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0927 01:09:14.320815   51589 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-833343 && echo "multinode-833343" | sudo tee /etc/hostname
	I0927 01:09:14.439298   51589 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-833343
	
	I0927 01:09:14.439342   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHHostname
	I0927 01:09:14.442448   51589 main.go:141] libmachine: (multinode-833343) DBG | domain multinode-833343 has defined MAC address 52:54:00:d6:02:23 in network mk-multinode-833343
	I0927 01:09:14.442875   51589 main.go:141] libmachine: (multinode-833343) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:02:23", ip: ""} in network mk-multinode-833343: {Iface:virbr1 ExpiryTime:2024-09-27 02:03:44 +0000 UTC Type:0 Mac:52:54:00:d6:02:23 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-833343 Clientid:01:52:54:00:d6:02:23}
	I0927 01:09:14.442911   51589 main.go:141] libmachine: (multinode-833343) DBG | domain multinode-833343 has defined IP address 192.168.39.203 and MAC address 52:54:00:d6:02:23 in network mk-multinode-833343
	I0927 01:09:14.443053   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHPort
	I0927 01:09:14.443265   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHKeyPath
	I0927 01:09:14.443499   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHKeyPath
	I0927 01:09:14.443655   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHUsername
	I0927 01:09:14.443821   51589 main.go:141] libmachine: Using SSH client type: native
	I0927 01:09:14.444012   51589 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0927 01:09:14.444029   51589 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-833343' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-833343/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-833343' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 01:09:14.544038   51589 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 01:09:14.544066   51589 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19711-14935/.minikube CaCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19711-14935/.minikube}
	I0927 01:09:14.544100   51589 buildroot.go:174] setting up certificates
	I0927 01:09:14.544112   51589 provision.go:84] configureAuth start
	I0927 01:09:14.544129   51589 main.go:141] libmachine: (multinode-833343) Calling .GetMachineName
	I0927 01:09:14.544395   51589 main.go:141] libmachine: (multinode-833343) Calling .GetIP
	I0927 01:09:14.546954   51589 main.go:141] libmachine: (multinode-833343) DBG | domain multinode-833343 has defined MAC address 52:54:00:d6:02:23 in network mk-multinode-833343
	I0927 01:09:14.547333   51589 main.go:141] libmachine: (multinode-833343) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:02:23", ip: ""} in network mk-multinode-833343: {Iface:virbr1 ExpiryTime:2024-09-27 02:03:44 +0000 UTC Type:0 Mac:52:54:00:d6:02:23 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-833343 Clientid:01:52:54:00:d6:02:23}
	I0927 01:09:14.547370   51589 main.go:141] libmachine: (multinode-833343) DBG | domain multinode-833343 has defined IP address 192.168.39.203 and MAC address 52:54:00:d6:02:23 in network mk-multinode-833343
	I0927 01:09:14.547515   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHHostname
	I0927 01:09:14.549535   51589 main.go:141] libmachine: (multinode-833343) DBG | domain multinode-833343 has defined MAC address 52:54:00:d6:02:23 in network mk-multinode-833343
	I0927 01:09:14.549861   51589 main.go:141] libmachine: (multinode-833343) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:02:23", ip: ""} in network mk-multinode-833343: {Iface:virbr1 ExpiryTime:2024-09-27 02:03:44 +0000 UTC Type:0 Mac:52:54:00:d6:02:23 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-833343 Clientid:01:52:54:00:d6:02:23}
	I0927 01:09:14.549892   51589 main.go:141] libmachine: (multinode-833343) DBG | domain multinode-833343 has defined IP address 192.168.39.203 and MAC address 52:54:00:d6:02:23 in network mk-multinode-833343
	I0927 01:09:14.550019   51589 provision.go:143] copyHostCerts
	I0927 01:09:14.550047   51589 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem
	I0927 01:09:14.550082   51589 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem, removing ...
	I0927 01:09:14.550093   51589 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem
	I0927 01:09:14.550161   51589 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem (1123 bytes)
	I0927 01:09:14.550249   51589 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem
	I0927 01:09:14.550270   51589 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem, removing ...
	I0927 01:09:14.550277   51589 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem
	I0927 01:09:14.550301   51589 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem (1675 bytes)
	I0927 01:09:14.550361   51589 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem
	I0927 01:09:14.550382   51589 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem, removing ...
	I0927 01:09:14.550385   51589 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem
	I0927 01:09:14.550405   51589 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem (1078 bytes)
	I0927 01:09:14.550467   51589 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem org=jenkins.multinode-833343 san=[127.0.0.1 192.168.39.203 localhost minikube multinode-833343]
	I0927 01:09:15.047235   51589 provision.go:177] copyRemoteCerts
	I0927 01:09:15.047300   51589 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 01:09:15.047347   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHHostname
	I0927 01:09:15.050166   51589 main.go:141] libmachine: (multinode-833343) DBG | domain multinode-833343 has defined MAC address 52:54:00:d6:02:23 in network mk-multinode-833343
	I0927 01:09:15.050592   51589 main.go:141] libmachine: (multinode-833343) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:02:23", ip: ""} in network mk-multinode-833343: {Iface:virbr1 ExpiryTime:2024-09-27 02:03:44 +0000 UTC Type:0 Mac:52:54:00:d6:02:23 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-833343 Clientid:01:52:54:00:d6:02:23}
	I0927 01:09:15.050623   51589 main.go:141] libmachine: (multinode-833343) DBG | domain multinode-833343 has defined IP address 192.168.39.203 and MAC address 52:54:00:d6:02:23 in network mk-multinode-833343
	I0927 01:09:15.050796   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHPort
	I0927 01:09:15.050983   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHKeyPath
	I0927 01:09:15.051142   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHUsername
	I0927 01:09:15.051334   51589 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/multinode-833343/id_rsa Username:docker}
	I0927 01:09:15.134876   51589 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0927 01:09:15.134957   51589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0927 01:09:15.161902   51589 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0927 01:09:15.161981   51589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0927 01:09:15.188979   51589 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0927 01:09:15.189048   51589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0927 01:09:15.216539   51589 provision.go:87] duration metric: took 672.41659ms to configureAuth
	I0927 01:09:15.216565   51589 buildroot.go:189] setting minikube options for container-runtime
	I0927 01:09:15.216794   51589 config.go:182] Loaded profile config "multinode-833343": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 01:09:15.216863   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHHostname
	I0927 01:09:15.219355   51589 main.go:141] libmachine: (multinode-833343) DBG | domain multinode-833343 has defined MAC address 52:54:00:d6:02:23 in network mk-multinode-833343
	I0927 01:09:15.219707   51589 main.go:141] libmachine: (multinode-833343) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:02:23", ip: ""} in network mk-multinode-833343: {Iface:virbr1 ExpiryTime:2024-09-27 02:03:44 +0000 UTC Type:0 Mac:52:54:00:d6:02:23 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-833343 Clientid:01:52:54:00:d6:02:23}
	I0927 01:09:15.219734   51589 main.go:141] libmachine: (multinode-833343) DBG | domain multinode-833343 has defined IP address 192.168.39.203 and MAC address 52:54:00:d6:02:23 in network mk-multinode-833343
	I0927 01:09:15.219849   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHPort
	I0927 01:09:15.220032   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHKeyPath
	I0927 01:09:15.220163   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHKeyPath
	I0927 01:09:15.220301   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHUsername
	I0927 01:09:15.220426   51589 main.go:141] libmachine: Using SSH client type: native
	I0927 01:09:15.220588   51589 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0927 01:09:15.220602   51589 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 01:10:46.035695   51589 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 01:10:46.035719   51589 machine.go:96] duration metric: took 1m31.823949403s to provisionDockerMachine
	I0927 01:10:46.035731   51589 start.go:293] postStartSetup for "multinode-833343" (driver="kvm2")
	I0927 01:10:46.035741   51589 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 01:10:46.035776   51589 main.go:141] libmachine: (multinode-833343) Calling .DriverName
	I0927 01:10:46.036051   51589 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 01:10:46.036073   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHHostname
	I0927 01:10:46.039286   51589 main.go:141] libmachine: (multinode-833343) DBG | domain multinode-833343 has defined MAC address 52:54:00:d6:02:23 in network mk-multinode-833343
	I0927 01:10:46.039705   51589 main.go:141] libmachine: (multinode-833343) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:02:23", ip: ""} in network mk-multinode-833343: {Iface:virbr1 ExpiryTime:2024-09-27 02:03:44 +0000 UTC Type:0 Mac:52:54:00:d6:02:23 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-833343 Clientid:01:52:54:00:d6:02:23}
	I0927 01:10:46.039736   51589 main.go:141] libmachine: (multinode-833343) DBG | domain multinode-833343 has defined IP address 192.168.39.203 and MAC address 52:54:00:d6:02:23 in network mk-multinode-833343
	I0927 01:10:46.039891   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHPort
	I0927 01:10:46.040065   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHKeyPath
	I0927 01:10:46.040215   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHUsername
	I0927 01:10:46.040321   51589 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/multinode-833343/id_rsa Username:docker}
	I0927 01:10:46.122717   51589 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 01:10:46.126977   51589 command_runner.go:130] > NAME=Buildroot
	I0927 01:10:46.127000   51589 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0927 01:10:46.127006   51589 command_runner.go:130] > ID=buildroot
	I0927 01:10:46.127018   51589 command_runner.go:130] > VERSION_ID=2023.02.9
	I0927 01:10:46.127025   51589 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0927 01:10:46.127051   51589 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 01:10:46.127064   51589 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/addons for local assets ...
	I0927 01:10:46.127132   51589 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/files for local assets ...
	I0927 01:10:46.127216   51589 filesync.go:149] local asset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> 221382.pem in /etc/ssl/certs
	I0927 01:10:46.127228   51589 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> /etc/ssl/certs/221382.pem
	I0927 01:10:46.127321   51589 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 01:10:46.136691   51589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /etc/ssl/certs/221382.pem (1708 bytes)
	I0927 01:10:46.160928   51589 start.go:296] duration metric: took 125.185595ms for postStartSetup
	I0927 01:10:46.160975   51589 fix.go:56] duration metric: took 1m31.969870867s for fixHost
	I0927 01:10:46.161043   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHHostname
	I0927 01:10:46.163678   51589 main.go:141] libmachine: (multinode-833343) DBG | domain multinode-833343 has defined MAC address 52:54:00:d6:02:23 in network mk-multinode-833343
	I0927 01:10:46.164050   51589 main.go:141] libmachine: (multinode-833343) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:02:23", ip: ""} in network mk-multinode-833343: {Iface:virbr1 ExpiryTime:2024-09-27 02:03:44 +0000 UTC Type:0 Mac:52:54:00:d6:02:23 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-833343 Clientid:01:52:54:00:d6:02:23}
	I0927 01:10:46.164090   51589 main.go:141] libmachine: (multinode-833343) DBG | domain multinode-833343 has defined IP address 192.168.39.203 and MAC address 52:54:00:d6:02:23 in network mk-multinode-833343
	I0927 01:10:46.164188   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHPort
	I0927 01:10:46.164386   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHKeyPath
	I0927 01:10:46.164541   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHKeyPath
	I0927 01:10:46.164690   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHUsername
	I0927 01:10:46.164922   51589 main.go:141] libmachine: Using SSH client type: native
	I0927 01:10:46.165140   51589 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0927 01:10:46.165151   51589 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 01:10:46.268031   51589 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727399446.239841222
	
	I0927 01:10:46.268054   51589 fix.go:216] guest clock: 1727399446.239841222
	I0927 01:10:46.268061   51589 fix.go:229] Guest: 2024-09-27 01:10:46.239841222 +0000 UTC Remote: 2024-09-27 01:10:46.160981439 +0000 UTC m=+92.093833083 (delta=78.859783ms)
	I0927 01:10:46.268105   51589 fix.go:200] guest clock delta is within tolerance: 78.859783ms
	I0927 01:10:46.268112   51589 start.go:83] releasing machines lock for "multinode-833343", held for 1m32.077021339s
	I0927 01:10:46.268198   51589 main.go:141] libmachine: (multinode-833343) Calling .DriverName
	I0927 01:10:46.268442   51589 main.go:141] libmachine: (multinode-833343) Calling .GetIP
	I0927 01:10:46.270854   51589 main.go:141] libmachine: (multinode-833343) DBG | domain multinode-833343 has defined MAC address 52:54:00:d6:02:23 in network mk-multinode-833343
	I0927 01:10:46.271232   51589 main.go:141] libmachine: (multinode-833343) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:02:23", ip: ""} in network mk-multinode-833343: {Iface:virbr1 ExpiryTime:2024-09-27 02:03:44 +0000 UTC Type:0 Mac:52:54:00:d6:02:23 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-833343 Clientid:01:52:54:00:d6:02:23}
	I0927 01:10:46.271266   51589 main.go:141] libmachine: (multinode-833343) DBG | domain multinode-833343 has defined IP address 192.168.39.203 and MAC address 52:54:00:d6:02:23 in network mk-multinode-833343
	I0927 01:10:46.271449   51589 main.go:141] libmachine: (multinode-833343) Calling .DriverName
	I0927 01:10:46.271966   51589 main.go:141] libmachine: (multinode-833343) Calling .DriverName
	I0927 01:10:46.272133   51589 main.go:141] libmachine: (multinode-833343) Calling .DriverName
	I0927 01:10:46.272212   51589 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 01:10:46.272267   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHHostname
	I0927 01:10:46.272346   51589 ssh_runner.go:195] Run: cat /version.json
	I0927 01:10:46.272364   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHHostname
	I0927 01:10:46.274758   51589 main.go:141] libmachine: (multinode-833343) DBG | domain multinode-833343 has defined MAC address 52:54:00:d6:02:23 in network mk-multinode-833343
	I0927 01:10:46.274898   51589 main.go:141] libmachine: (multinode-833343) DBG | domain multinode-833343 has defined MAC address 52:54:00:d6:02:23 in network mk-multinode-833343
	I0927 01:10:46.275098   51589 main.go:141] libmachine: (multinode-833343) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:02:23", ip: ""} in network mk-multinode-833343: {Iface:virbr1 ExpiryTime:2024-09-27 02:03:44 +0000 UTC Type:0 Mac:52:54:00:d6:02:23 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-833343 Clientid:01:52:54:00:d6:02:23}
	I0927 01:10:46.275123   51589 main.go:141] libmachine: (multinode-833343) DBG | domain multinode-833343 has defined IP address 192.168.39.203 and MAC address 52:54:00:d6:02:23 in network mk-multinode-833343
	I0927 01:10:46.275258   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHPort
	I0927 01:10:46.275387   51589 main.go:141] libmachine: (multinode-833343) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:02:23", ip: ""} in network mk-multinode-833343: {Iface:virbr1 ExpiryTime:2024-09-27 02:03:44 +0000 UTC Type:0 Mac:52:54:00:d6:02:23 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-833343 Clientid:01:52:54:00:d6:02:23}
	I0927 01:10:46.275417   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHKeyPath
	I0927 01:10:46.275423   51589 main.go:141] libmachine: (multinode-833343) DBG | domain multinode-833343 has defined IP address 192.168.39.203 and MAC address 52:54:00:d6:02:23 in network mk-multinode-833343
	I0927 01:10:46.275601   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHUsername
	I0927 01:10:46.275602   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHPort
	I0927 01:10:46.275787   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHKeyPath
	I0927 01:10:46.275783   51589 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/multinode-833343/id_rsa Username:docker}
	I0927 01:10:46.275934   51589 main.go:141] libmachine: (multinode-833343) Calling .GetSSHUsername
	I0927 01:10:46.276026   51589 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/multinode-833343/id_rsa Username:docker}
	I0927 01:10:46.356240   51589 command_runner.go:130] > {"iso_version": "v1.34.0-1727108440-19696", "kicbase_version": "v0.0.45-1726784731-19672", "minikube_version": "v1.34.0", "commit": "09d18ff16db81cf1cb24cd6e95f197b54c5f843c"}
	I0927 01:10:46.356498   51589 ssh_runner.go:195] Run: systemctl --version
	I0927 01:10:46.382379   51589 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0927 01:10:46.382448   51589 command_runner.go:130] > systemd 252 (252)
	I0927 01:10:46.382483   51589 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0927 01:10:46.382546   51589 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 01:10:46.546015   51589 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0927 01:10:46.551950   51589 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0927 01:10:46.552004   51589 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 01:10:46.552068   51589 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 01:10:46.561646   51589 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0927 01:10:46.561672   51589 start.go:495] detecting cgroup driver to use...
	I0927 01:10:46.561751   51589 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 01:10:46.578557   51589 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 01:10:46.592689   51589 docker.go:217] disabling cri-docker service (if available) ...
	I0927 01:10:46.592757   51589 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 01:10:46.608326   51589 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 01:10:46.622571   51589 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 01:10:46.770861   51589 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 01:10:46.910731   51589 docker.go:233] disabling docker service ...
	I0927 01:10:46.910802   51589 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 01:10:46.929477   51589 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 01:10:46.943801   51589 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 01:10:47.093250   51589 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 01:10:47.238800   51589 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 01:10:47.253368   51589 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 01:10:47.272692   51589 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0927 01:10:47.273188   51589 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0927 01:10:47.273243   51589 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:10:47.284075   51589 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 01:10:47.284126   51589 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:10:47.294596   51589 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:10:47.305247   51589 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:10:47.316646   51589 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 01:10:47.327448   51589 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:10:47.337983   51589 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:10:47.349380   51589 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:10:47.360282   51589 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 01:10:47.370207   51589 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0927 01:10:47.370271   51589 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 01:10:47.380068   51589 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:10:47.520848   51589 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0927 01:10:52.028818   51589 ssh_runner.go:235] Completed: sudo systemctl restart crio: (4.507941659s)
	I0927 01:10:52.028843   51589 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 01:10:52.028894   51589 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 01:10:52.035356   51589 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0927 01:10:52.035393   51589 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0927 01:10:52.035420   51589 command_runner.go:130] > Device: 0,22	Inode: 1311        Links: 1
	I0927 01:10:52.035432   51589 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0927 01:10:52.035441   51589 command_runner.go:130] > Access: 2024-09-27 01:10:51.889971927 +0000
	I0927 01:10:52.035450   51589 command_runner.go:130] > Modify: 2024-09-27 01:10:51.889971927 +0000
	I0927 01:10:52.035455   51589 command_runner.go:130] > Change: 2024-09-27 01:10:51.889971927 +0000
	I0927 01:10:52.035461   51589 command_runner.go:130] >  Birth: -
	I0927 01:10:52.035479   51589 start.go:563] Will wait 60s for crictl version
	I0927 01:10:52.035520   51589 ssh_runner.go:195] Run: which crictl
	I0927 01:10:52.039372   51589 command_runner.go:130] > /usr/bin/crictl
	I0927 01:10:52.039445   51589 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 01:10:52.081223   51589 command_runner.go:130] > Version:  0.1.0
	I0927 01:10:52.081248   51589 command_runner.go:130] > RuntimeName:  cri-o
	I0927 01:10:52.081253   51589 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0927 01:10:52.081258   51589 command_runner.go:130] > RuntimeApiVersion:  v1
	I0927 01:10:52.081386   51589 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0927 01:10:52.081530   51589 ssh_runner.go:195] Run: crio --version
	I0927 01:10:52.114443   51589 command_runner.go:130] > crio version 1.29.1
	I0927 01:10:52.114470   51589 command_runner.go:130] > Version:        1.29.1
	I0927 01:10:52.114478   51589 command_runner.go:130] > GitCommit:      unknown
	I0927 01:10:52.114484   51589 command_runner.go:130] > GitCommitDate:  unknown
	I0927 01:10:52.114496   51589 command_runner.go:130] > GitTreeState:   clean
	I0927 01:10:52.114505   51589 command_runner.go:130] > BuildDate:      2024-09-23T21:42:27Z
	I0927 01:10:52.114511   51589 command_runner.go:130] > GoVersion:      go1.21.6
	I0927 01:10:52.114516   51589 command_runner.go:130] > Compiler:       gc
	I0927 01:10:52.114523   51589 command_runner.go:130] > Platform:       linux/amd64
	I0927 01:10:52.114528   51589 command_runner.go:130] > Linkmode:       dynamic
	I0927 01:10:52.114535   51589 command_runner.go:130] > BuildTags:      
	I0927 01:10:52.114542   51589 command_runner.go:130] >   containers_image_ostree_stub
	I0927 01:10:52.114548   51589 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0927 01:10:52.114553   51589 command_runner.go:130] >   btrfs_noversion
	I0927 01:10:52.114560   51589 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0927 01:10:52.114568   51589 command_runner.go:130] >   libdm_no_deferred_remove
	I0927 01:10:52.114589   51589 command_runner.go:130] >   seccomp
	I0927 01:10:52.114598   51589 command_runner.go:130] > LDFlags:          unknown
	I0927 01:10:52.114602   51589 command_runner.go:130] > SeccompEnabled:   true
	I0927 01:10:52.114606   51589 command_runner.go:130] > AppArmorEnabled:  false
	I0927 01:10:52.114666   51589 ssh_runner.go:195] Run: crio --version
	I0927 01:10:52.144486   51589 command_runner.go:130] > crio version 1.29.1
	I0927 01:10:52.144515   51589 command_runner.go:130] > Version:        1.29.1
	I0927 01:10:52.144523   51589 command_runner.go:130] > GitCommit:      unknown
	I0927 01:10:52.144528   51589 command_runner.go:130] > GitCommitDate:  unknown
	I0927 01:10:52.144534   51589 command_runner.go:130] > GitTreeState:   clean
	I0927 01:10:52.144542   51589 command_runner.go:130] > BuildDate:      2024-09-23T21:42:27Z
	I0927 01:10:52.144547   51589 command_runner.go:130] > GoVersion:      go1.21.6
	I0927 01:10:52.144553   51589 command_runner.go:130] > Compiler:       gc
	I0927 01:10:52.144559   51589 command_runner.go:130] > Platform:       linux/amd64
	I0927 01:10:52.144564   51589 command_runner.go:130] > Linkmode:       dynamic
	I0927 01:10:52.144571   51589 command_runner.go:130] > BuildTags:      
	I0927 01:10:52.144578   51589 command_runner.go:130] >   containers_image_ostree_stub
	I0927 01:10:52.144584   51589 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0927 01:10:52.144590   51589 command_runner.go:130] >   btrfs_noversion
	I0927 01:10:52.144599   51589 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0927 01:10:52.144607   51589 command_runner.go:130] >   libdm_no_deferred_remove
	I0927 01:10:52.144619   51589 command_runner.go:130] >   seccomp
	I0927 01:10:52.144627   51589 command_runner.go:130] > LDFlags:          unknown
	I0927 01:10:52.144635   51589 command_runner.go:130] > SeccompEnabled:   true
	I0927 01:10:52.144644   51589 command_runner.go:130] > AppArmorEnabled:  false
	I0927 01:10:52.146784   51589 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0927 01:10:52.148124   51589 main.go:141] libmachine: (multinode-833343) Calling .GetIP
	I0927 01:10:52.150915   51589 main.go:141] libmachine: (multinode-833343) DBG | domain multinode-833343 has defined MAC address 52:54:00:d6:02:23 in network mk-multinode-833343
	I0927 01:10:52.151254   51589 main.go:141] libmachine: (multinode-833343) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:02:23", ip: ""} in network mk-multinode-833343: {Iface:virbr1 ExpiryTime:2024-09-27 02:03:44 +0000 UTC Type:0 Mac:52:54:00:d6:02:23 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-833343 Clientid:01:52:54:00:d6:02:23}
	I0927 01:10:52.151283   51589 main.go:141] libmachine: (multinode-833343) DBG | domain multinode-833343 has defined IP address 192.168.39.203 and MAC address 52:54:00:d6:02:23 in network mk-multinode-833343
	I0927 01:10:52.151523   51589 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0927 01:10:52.156238   51589 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0927 01:10:52.156417   51589 kubeadm.go:883] updating cluster {Name:multinode-833343 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-833343 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.123 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.88 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 01:10:52.156607   51589 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 01:10:52.156667   51589 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 01:10:52.198528   51589 command_runner.go:130] > {
	I0927 01:10:52.198554   51589 command_runner.go:130] >   "images": [
	I0927 01:10:52.198564   51589 command_runner.go:130] >     {
	I0927 01:10:52.198575   51589 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0927 01:10:52.198581   51589 command_runner.go:130] >       "repoTags": [
	I0927 01:10:52.198589   51589 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0927 01:10:52.198593   51589 command_runner.go:130] >       ],
	I0927 01:10:52.198599   51589 command_runner.go:130] >       "repoDigests": [
	I0927 01:10:52.198621   51589 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0927 01:10:52.198635   51589 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0927 01:10:52.198641   51589 command_runner.go:130] >       ],
	I0927 01:10:52.198647   51589 command_runner.go:130] >       "size": "87190579",
	I0927 01:10:52.198657   51589 command_runner.go:130] >       "uid": null,
	I0927 01:10:52.198664   51589 command_runner.go:130] >       "username": "",
	I0927 01:10:52.198676   51589 command_runner.go:130] >       "spec": null,
	I0927 01:10:52.198685   51589 command_runner.go:130] >       "pinned": false
	I0927 01:10:52.198694   51589 command_runner.go:130] >     },
	I0927 01:10:52.198703   51589 command_runner.go:130] >     {
	I0927 01:10:52.198713   51589 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0927 01:10:52.198722   51589 command_runner.go:130] >       "repoTags": [
	I0927 01:10:52.198728   51589 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0927 01:10:52.198734   51589 command_runner.go:130] >       ],
	I0927 01:10:52.198738   51589 command_runner.go:130] >       "repoDigests": [
	I0927 01:10:52.198746   51589 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0927 01:10:52.198757   51589 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0927 01:10:52.198765   51589 command_runner.go:130] >       ],
	I0927 01:10:52.198774   51589 command_runner.go:130] >       "size": "1363676",
	I0927 01:10:52.198783   51589 command_runner.go:130] >       "uid": null,
	I0927 01:10:52.198794   51589 command_runner.go:130] >       "username": "",
	I0927 01:10:52.198804   51589 command_runner.go:130] >       "spec": null,
	I0927 01:10:52.198813   51589 command_runner.go:130] >       "pinned": false
	I0927 01:10:52.198821   51589 command_runner.go:130] >     },
	I0927 01:10:52.198830   51589 command_runner.go:130] >     {
	I0927 01:10:52.198843   51589 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0927 01:10:52.198852   51589 command_runner.go:130] >       "repoTags": [
	I0927 01:10:52.198864   51589 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0927 01:10:52.198872   51589 command_runner.go:130] >       ],
	I0927 01:10:52.198881   51589 command_runner.go:130] >       "repoDigests": [
	I0927 01:10:52.198892   51589 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0927 01:10:52.198907   51589 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0927 01:10:52.198916   51589 command_runner.go:130] >       ],
	I0927 01:10:52.198926   51589 command_runner.go:130] >       "size": "31470524",
	I0927 01:10:52.198934   51589 command_runner.go:130] >       "uid": null,
	I0927 01:10:52.198941   51589 command_runner.go:130] >       "username": "",
	I0927 01:10:52.198945   51589 command_runner.go:130] >       "spec": null,
	I0927 01:10:52.198951   51589 command_runner.go:130] >       "pinned": false
	I0927 01:10:52.198955   51589 command_runner.go:130] >     },
	I0927 01:10:52.198959   51589 command_runner.go:130] >     {
	I0927 01:10:52.198965   51589 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0927 01:10:52.198971   51589 command_runner.go:130] >       "repoTags": [
	I0927 01:10:52.198975   51589 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0927 01:10:52.198981   51589 command_runner.go:130] >       ],
	I0927 01:10:52.198985   51589 command_runner.go:130] >       "repoDigests": [
	I0927 01:10:52.198995   51589 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0927 01:10:52.199007   51589 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0927 01:10:52.199012   51589 command_runner.go:130] >       ],
	I0927 01:10:52.199016   51589 command_runner.go:130] >       "size": "63273227",
	I0927 01:10:52.199023   51589 command_runner.go:130] >       "uid": null,
	I0927 01:10:52.199027   51589 command_runner.go:130] >       "username": "nonroot",
	I0927 01:10:52.199033   51589 command_runner.go:130] >       "spec": null,
	I0927 01:10:52.199038   51589 command_runner.go:130] >       "pinned": false
	I0927 01:10:52.199044   51589 command_runner.go:130] >     },
	I0927 01:10:52.199048   51589 command_runner.go:130] >     {
	I0927 01:10:52.199056   51589 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0927 01:10:52.199062   51589 command_runner.go:130] >       "repoTags": [
	I0927 01:10:52.199066   51589 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0927 01:10:52.199072   51589 command_runner.go:130] >       ],
	I0927 01:10:52.199076   51589 command_runner.go:130] >       "repoDigests": [
	I0927 01:10:52.199085   51589 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0927 01:10:52.199094   51589 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0927 01:10:52.199099   51589 command_runner.go:130] >       ],
	I0927 01:10:52.199104   51589 command_runner.go:130] >       "size": "149009664",
	I0927 01:10:52.199109   51589 command_runner.go:130] >       "uid": {
	I0927 01:10:52.199113   51589 command_runner.go:130] >         "value": "0"
	I0927 01:10:52.199118   51589 command_runner.go:130] >       },
	I0927 01:10:52.199121   51589 command_runner.go:130] >       "username": "",
	I0927 01:10:52.199125   51589 command_runner.go:130] >       "spec": null,
	I0927 01:10:52.199139   51589 command_runner.go:130] >       "pinned": false
	I0927 01:10:52.199143   51589 command_runner.go:130] >     },
	I0927 01:10:52.199146   51589 command_runner.go:130] >     {
	I0927 01:10:52.199152   51589 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0927 01:10:52.199158   51589 command_runner.go:130] >       "repoTags": [
	I0927 01:10:52.199164   51589 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0927 01:10:52.199170   51589 command_runner.go:130] >       ],
	I0927 01:10:52.199174   51589 command_runner.go:130] >       "repoDigests": [
	I0927 01:10:52.199183   51589 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0927 01:10:52.199192   51589 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0927 01:10:52.199197   51589 command_runner.go:130] >       ],
	I0927 01:10:52.199202   51589 command_runner.go:130] >       "size": "95237600",
	I0927 01:10:52.199207   51589 command_runner.go:130] >       "uid": {
	I0927 01:10:52.199212   51589 command_runner.go:130] >         "value": "0"
	I0927 01:10:52.199218   51589 command_runner.go:130] >       },
	I0927 01:10:52.199222   51589 command_runner.go:130] >       "username": "",
	I0927 01:10:52.199229   51589 command_runner.go:130] >       "spec": null,
	I0927 01:10:52.199233   51589 command_runner.go:130] >       "pinned": false
	I0927 01:10:52.199239   51589 command_runner.go:130] >     },
	I0927 01:10:52.199242   51589 command_runner.go:130] >     {
	I0927 01:10:52.199248   51589 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0927 01:10:52.199254   51589 command_runner.go:130] >       "repoTags": [
	I0927 01:10:52.199259   51589 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0927 01:10:52.199262   51589 command_runner.go:130] >       ],
	I0927 01:10:52.199266   51589 command_runner.go:130] >       "repoDigests": [
	I0927 01:10:52.199278   51589 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0927 01:10:52.199288   51589 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0927 01:10:52.199293   51589 command_runner.go:130] >       ],
	I0927 01:10:52.199298   51589 command_runner.go:130] >       "size": "89437508",
	I0927 01:10:52.199314   51589 command_runner.go:130] >       "uid": {
	I0927 01:10:52.199323   51589 command_runner.go:130] >         "value": "0"
	I0927 01:10:52.199332   51589 command_runner.go:130] >       },
	I0927 01:10:52.199339   51589 command_runner.go:130] >       "username": "",
	I0927 01:10:52.199343   51589 command_runner.go:130] >       "spec": null,
	I0927 01:10:52.199348   51589 command_runner.go:130] >       "pinned": false
	I0927 01:10:52.199351   51589 command_runner.go:130] >     },
	I0927 01:10:52.199356   51589 command_runner.go:130] >     {
	I0927 01:10:52.199362   51589 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0927 01:10:52.199368   51589 command_runner.go:130] >       "repoTags": [
	I0927 01:10:52.199373   51589 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0927 01:10:52.199377   51589 command_runner.go:130] >       ],
	I0927 01:10:52.199384   51589 command_runner.go:130] >       "repoDigests": [
	I0927 01:10:52.199399   51589 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0927 01:10:52.199408   51589 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0927 01:10:52.199412   51589 command_runner.go:130] >       ],
	I0927 01:10:52.199418   51589 command_runner.go:130] >       "size": "92733849",
	I0927 01:10:52.199422   51589 command_runner.go:130] >       "uid": null,
	I0927 01:10:52.199429   51589 command_runner.go:130] >       "username": "",
	I0927 01:10:52.199433   51589 command_runner.go:130] >       "spec": null,
	I0927 01:10:52.199437   51589 command_runner.go:130] >       "pinned": false
	I0927 01:10:52.199440   51589 command_runner.go:130] >     },
	I0927 01:10:52.199442   51589 command_runner.go:130] >     {
	I0927 01:10:52.199448   51589 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0927 01:10:52.199452   51589 command_runner.go:130] >       "repoTags": [
	I0927 01:10:52.199457   51589 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0927 01:10:52.199460   51589 command_runner.go:130] >       ],
	I0927 01:10:52.199465   51589 command_runner.go:130] >       "repoDigests": [
	I0927 01:10:52.199476   51589 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0927 01:10:52.199490   51589 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0927 01:10:52.199501   51589 command_runner.go:130] >       ],
	I0927 01:10:52.199510   51589 command_runner.go:130] >       "size": "68420934",
	I0927 01:10:52.199519   51589 command_runner.go:130] >       "uid": {
	I0927 01:10:52.199528   51589 command_runner.go:130] >         "value": "0"
	I0927 01:10:52.199535   51589 command_runner.go:130] >       },
	I0927 01:10:52.199540   51589 command_runner.go:130] >       "username": "",
	I0927 01:10:52.199549   51589 command_runner.go:130] >       "spec": null,
	I0927 01:10:52.199558   51589 command_runner.go:130] >       "pinned": false
	I0927 01:10:52.199566   51589 command_runner.go:130] >     },
	I0927 01:10:52.199573   51589 command_runner.go:130] >     {
	I0927 01:10:52.199586   51589 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0927 01:10:52.199593   51589 command_runner.go:130] >       "repoTags": [
	I0927 01:10:52.199598   51589 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0927 01:10:52.199604   51589 command_runner.go:130] >       ],
	I0927 01:10:52.199609   51589 command_runner.go:130] >       "repoDigests": [
	I0927 01:10:52.199617   51589 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0927 01:10:52.199627   51589 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0927 01:10:52.199632   51589 command_runner.go:130] >       ],
	I0927 01:10:52.199636   51589 command_runner.go:130] >       "size": "742080",
	I0927 01:10:52.199642   51589 command_runner.go:130] >       "uid": {
	I0927 01:10:52.199647   51589 command_runner.go:130] >         "value": "65535"
	I0927 01:10:52.199653   51589 command_runner.go:130] >       },
	I0927 01:10:52.199657   51589 command_runner.go:130] >       "username": "",
	I0927 01:10:52.199663   51589 command_runner.go:130] >       "spec": null,
	I0927 01:10:52.199666   51589 command_runner.go:130] >       "pinned": true
	I0927 01:10:52.199670   51589 command_runner.go:130] >     }
	I0927 01:10:52.199675   51589 command_runner.go:130] >   ]
	I0927 01:10:52.199678   51589 command_runner.go:130] > }
	I0927 01:10:52.199893   51589 crio.go:514] all images are preloaded for cri-o runtime.
	I0927 01:10:52.199909   51589 crio.go:433] Images already preloaded, skipping extraction
	I0927 01:10:52.199959   51589 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 01:10:52.241536   51589 command_runner.go:130] > {
	I0927 01:10:52.241556   51589 command_runner.go:130] >   "images": [
	I0927 01:10:52.241560   51589 command_runner.go:130] >     {
	I0927 01:10:52.241570   51589 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0927 01:10:52.241577   51589 command_runner.go:130] >       "repoTags": [
	I0927 01:10:52.241586   51589 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0927 01:10:52.241591   51589 command_runner.go:130] >       ],
	I0927 01:10:52.241598   51589 command_runner.go:130] >       "repoDigests": [
	I0927 01:10:52.241610   51589 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0927 01:10:52.241621   51589 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0927 01:10:52.241626   51589 command_runner.go:130] >       ],
	I0927 01:10:52.241632   51589 command_runner.go:130] >       "size": "87190579",
	I0927 01:10:52.241639   51589 command_runner.go:130] >       "uid": null,
	I0927 01:10:52.241649   51589 command_runner.go:130] >       "username": "",
	I0927 01:10:52.241659   51589 command_runner.go:130] >       "spec": null,
	I0927 01:10:52.241669   51589 command_runner.go:130] >       "pinned": false
	I0927 01:10:52.241678   51589 command_runner.go:130] >     },
	I0927 01:10:52.241683   51589 command_runner.go:130] >     {
	I0927 01:10:52.241694   51589 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0927 01:10:52.241701   51589 command_runner.go:130] >       "repoTags": [
	I0927 01:10:52.241707   51589 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0927 01:10:52.241716   51589 command_runner.go:130] >       ],
	I0927 01:10:52.241722   51589 command_runner.go:130] >       "repoDigests": [
	I0927 01:10:52.241736   51589 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0927 01:10:52.241751   51589 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0927 01:10:52.241760   51589 command_runner.go:130] >       ],
	I0927 01:10:52.241767   51589 command_runner.go:130] >       "size": "1363676",
	I0927 01:10:52.241776   51589 command_runner.go:130] >       "uid": null,
	I0927 01:10:52.241785   51589 command_runner.go:130] >       "username": "",
	I0927 01:10:52.241792   51589 command_runner.go:130] >       "spec": null,
	I0927 01:10:52.241799   51589 command_runner.go:130] >       "pinned": false
	I0927 01:10:52.241804   51589 command_runner.go:130] >     },
	I0927 01:10:52.241807   51589 command_runner.go:130] >     {
	I0927 01:10:52.241813   51589 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0927 01:10:52.241819   51589 command_runner.go:130] >       "repoTags": [
	I0927 01:10:52.241824   51589 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0927 01:10:52.241828   51589 command_runner.go:130] >       ],
	I0927 01:10:52.241834   51589 command_runner.go:130] >       "repoDigests": [
	I0927 01:10:52.241846   51589 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0927 01:10:52.241860   51589 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0927 01:10:52.241865   51589 command_runner.go:130] >       ],
	I0927 01:10:52.241874   51589 command_runner.go:130] >       "size": "31470524",
	I0927 01:10:52.241881   51589 command_runner.go:130] >       "uid": null,
	I0927 01:10:52.241890   51589 command_runner.go:130] >       "username": "",
	I0927 01:10:52.241897   51589 command_runner.go:130] >       "spec": null,
	I0927 01:10:52.241907   51589 command_runner.go:130] >       "pinned": false
	I0927 01:10:52.241912   51589 command_runner.go:130] >     },
	I0927 01:10:52.241921   51589 command_runner.go:130] >     {
	I0927 01:10:52.241930   51589 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0927 01:10:52.241936   51589 command_runner.go:130] >       "repoTags": [
	I0927 01:10:52.241941   51589 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0927 01:10:52.241945   51589 command_runner.go:130] >       ],
	I0927 01:10:52.241949   51589 command_runner.go:130] >       "repoDigests": [
	I0927 01:10:52.241958   51589 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0927 01:10:52.241969   51589 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0927 01:10:52.241975   51589 command_runner.go:130] >       ],
	I0927 01:10:52.241978   51589 command_runner.go:130] >       "size": "63273227",
	I0927 01:10:52.241982   51589 command_runner.go:130] >       "uid": null,
	I0927 01:10:52.241986   51589 command_runner.go:130] >       "username": "nonroot",
	I0927 01:10:52.241991   51589 command_runner.go:130] >       "spec": null,
	I0927 01:10:52.241994   51589 command_runner.go:130] >       "pinned": false
	I0927 01:10:52.241998   51589 command_runner.go:130] >     },
	I0927 01:10:52.242004   51589 command_runner.go:130] >     {
	I0927 01:10:52.242012   51589 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0927 01:10:52.242016   51589 command_runner.go:130] >       "repoTags": [
	I0927 01:10:52.242023   51589 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0927 01:10:52.242027   51589 command_runner.go:130] >       ],
	I0927 01:10:52.242031   51589 command_runner.go:130] >       "repoDigests": [
	I0927 01:10:52.242037   51589 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0927 01:10:52.242046   51589 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0927 01:10:52.242051   51589 command_runner.go:130] >       ],
	I0927 01:10:52.242055   51589 command_runner.go:130] >       "size": "149009664",
	I0927 01:10:52.242061   51589 command_runner.go:130] >       "uid": {
	I0927 01:10:52.242065   51589 command_runner.go:130] >         "value": "0"
	I0927 01:10:52.242071   51589 command_runner.go:130] >       },
	I0927 01:10:52.242075   51589 command_runner.go:130] >       "username": "",
	I0927 01:10:52.242081   51589 command_runner.go:130] >       "spec": null,
	I0927 01:10:52.242085   51589 command_runner.go:130] >       "pinned": false
	I0927 01:10:52.242088   51589 command_runner.go:130] >     },
	I0927 01:10:52.242091   51589 command_runner.go:130] >     {
	I0927 01:10:52.242097   51589 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0927 01:10:52.242103   51589 command_runner.go:130] >       "repoTags": [
	I0927 01:10:52.242108   51589 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0927 01:10:52.242113   51589 command_runner.go:130] >       ],
	I0927 01:10:52.242117   51589 command_runner.go:130] >       "repoDigests": [
	I0927 01:10:52.242130   51589 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0927 01:10:52.242139   51589 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0927 01:10:52.242145   51589 command_runner.go:130] >       ],
	I0927 01:10:52.242150   51589 command_runner.go:130] >       "size": "95237600",
	I0927 01:10:52.242156   51589 command_runner.go:130] >       "uid": {
	I0927 01:10:52.242160   51589 command_runner.go:130] >         "value": "0"
	I0927 01:10:52.242166   51589 command_runner.go:130] >       },
	I0927 01:10:52.242170   51589 command_runner.go:130] >       "username": "",
	I0927 01:10:52.242175   51589 command_runner.go:130] >       "spec": null,
	I0927 01:10:52.242179   51589 command_runner.go:130] >       "pinned": false
	I0927 01:10:52.242185   51589 command_runner.go:130] >     },
	I0927 01:10:52.242189   51589 command_runner.go:130] >     {
	I0927 01:10:52.242197   51589 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0927 01:10:52.242203   51589 command_runner.go:130] >       "repoTags": [
	I0927 01:10:52.242209   51589 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0927 01:10:52.242214   51589 command_runner.go:130] >       ],
	I0927 01:10:52.242217   51589 command_runner.go:130] >       "repoDigests": [
	I0927 01:10:52.242227   51589 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0927 01:10:52.242236   51589 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0927 01:10:52.242242   51589 command_runner.go:130] >       ],
	I0927 01:10:52.242246   51589 command_runner.go:130] >       "size": "89437508",
	I0927 01:10:52.242251   51589 command_runner.go:130] >       "uid": {
	I0927 01:10:52.242255   51589 command_runner.go:130] >         "value": "0"
	I0927 01:10:52.242261   51589 command_runner.go:130] >       },
	I0927 01:10:52.242265   51589 command_runner.go:130] >       "username": "",
	I0927 01:10:52.242271   51589 command_runner.go:130] >       "spec": null,
	I0927 01:10:52.242275   51589 command_runner.go:130] >       "pinned": false
	I0927 01:10:52.242280   51589 command_runner.go:130] >     },
	I0927 01:10:52.242283   51589 command_runner.go:130] >     {
	I0927 01:10:52.242291   51589 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0927 01:10:52.242297   51589 command_runner.go:130] >       "repoTags": [
	I0927 01:10:52.242301   51589 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0927 01:10:52.242307   51589 command_runner.go:130] >       ],
	I0927 01:10:52.242311   51589 command_runner.go:130] >       "repoDigests": [
	I0927 01:10:52.242326   51589 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0927 01:10:52.242334   51589 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0927 01:10:52.242340   51589 command_runner.go:130] >       ],
	I0927 01:10:52.242344   51589 command_runner.go:130] >       "size": "92733849",
	I0927 01:10:52.242349   51589 command_runner.go:130] >       "uid": null,
	I0927 01:10:52.242353   51589 command_runner.go:130] >       "username": "",
	I0927 01:10:52.242359   51589 command_runner.go:130] >       "spec": null,
	I0927 01:10:52.242363   51589 command_runner.go:130] >       "pinned": false
	I0927 01:10:52.242368   51589 command_runner.go:130] >     },
	I0927 01:10:52.242373   51589 command_runner.go:130] >     {
	I0927 01:10:52.242381   51589 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0927 01:10:52.242385   51589 command_runner.go:130] >       "repoTags": [
	I0927 01:10:52.242392   51589 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0927 01:10:52.242395   51589 command_runner.go:130] >       ],
	I0927 01:10:52.242399   51589 command_runner.go:130] >       "repoDigests": [
	I0927 01:10:52.242406   51589 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0927 01:10:52.242416   51589 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0927 01:10:52.242422   51589 command_runner.go:130] >       ],
	I0927 01:10:52.242426   51589 command_runner.go:130] >       "size": "68420934",
	I0927 01:10:52.242432   51589 command_runner.go:130] >       "uid": {
	I0927 01:10:52.242436   51589 command_runner.go:130] >         "value": "0"
	I0927 01:10:52.242441   51589 command_runner.go:130] >       },
	I0927 01:10:52.242444   51589 command_runner.go:130] >       "username": "",
	I0927 01:10:52.242450   51589 command_runner.go:130] >       "spec": null,
	I0927 01:10:52.242454   51589 command_runner.go:130] >       "pinned": false
	I0927 01:10:52.242459   51589 command_runner.go:130] >     },
	I0927 01:10:52.242463   51589 command_runner.go:130] >     {
	I0927 01:10:52.242471   51589 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0927 01:10:52.242477   51589 command_runner.go:130] >       "repoTags": [
	I0927 01:10:52.242482   51589 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0927 01:10:52.242488   51589 command_runner.go:130] >       ],
	I0927 01:10:52.242491   51589 command_runner.go:130] >       "repoDigests": [
	I0927 01:10:52.242500   51589 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0927 01:10:52.242509   51589 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0927 01:10:52.242515   51589 command_runner.go:130] >       ],
	I0927 01:10:52.242519   51589 command_runner.go:130] >       "size": "742080",
	I0927 01:10:52.242525   51589 command_runner.go:130] >       "uid": {
	I0927 01:10:52.242529   51589 command_runner.go:130] >         "value": "65535"
	I0927 01:10:52.242534   51589 command_runner.go:130] >       },
	I0927 01:10:52.242538   51589 command_runner.go:130] >       "username": "",
	I0927 01:10:52.242544   51589 command_runner.go:130] >       "spec": null,
	I0927 01:10:52.242547   51589 command_runner.go:130] >       "pinned": true
	I0927 01:10:52.242562   51589 command_runner.go:130] >     }
	I0927 01:10:52.242567   51589 command_runner.go:130] >   ]
	I0927 01:10:52.242571   51589 command_runner.go:130] > }
	I0927 01:10:52.242678   51589 crio.go:514] all images are preloaded for cri-o runtime.
	I0927 01:10:52.242688   51589 cache_images.go:84] Images are preloaded, skipping loading
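	The JSON inventory that closes above is CRI-O's view of the images already present on the node, which is why minikube decides it can skip loading cached images. A minimal way to reproduce the same listing by hand, a sketch assuming SSH access to the multinode-833343 node and that crictl is on the PATH, is:

		$ minikube ssh -p multinode-833343
		$ sudo crictl images -o json

	crictl prints the same id/repoTags/repoDigests/size fields that minikube parses before concluding "Images are preloaded, skipping loading".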
	I0927 01:10:52.242695   51589 kubeadm.go:934] updating node { 192.168.39.203 8443 v1.31.1 crio true true} ...
	I0927 01:10:52.242796   51589 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-833343 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.203
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-833343 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
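	The kubelet [Unit]/[Service] snippet above is the systemd override minikube generates for this node, pinning the bootstrap kubeconfig, the kubelet config file, the hostname override and the node IP. To see what actually landed on the machine, a sketch under the same profile assumption:

		$ minikube ssh -p multinode-833343
		$ sudo systemctl cat kubelet

	systemctl cat shows the base kubelet.service together with any drop-in carrying the ExecStart line logged here.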
	I0927 01:10:52.242858   51589 ssh_runner.go:195] Run: crio config
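	The TOML that follows is the effective CRI-O configuration as printed by crio config. Two of the values visible further down are cgroup_manager = "cgroupfs" and pause_image = "registry.k8s.io/pause:3.10"; a quick spot-check on the node (same SSH session as above) would be:

		$ sudo crio config | grep -E 'cgroup_manager|pause_image'

	The uncommented keys in the dump are the ones CRI-O will actually apply; the commented lines show built-in defaults.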
	I0927 01:10:52.286670   51589 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0927 01:10:52.286695   51589 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0927 01:10:52.286702   51589 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0927 01:10:52.286708   51589 command_runner.go:130] > #
	I0927 01:10:52.286715   51589 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0927 01:10:52.286724   51589 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0927 01:10:52.286732   51589 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0927 01:10:52.286748   51589 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0927 01:10:52.286753   51589 command_runner.go:130] > # reload'.
	I0927 01:10:52.286761   51589 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0927 01:10:52.286770   51589 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0927 01:10:52.286777   51589 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0927 01:10:52.286783   51589 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0927 01:10:52.286788   51589 command_runner.go:130] > [crio]
	I0927 01:10:52.286802   51589 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0927 01:10:52.286816   51589 command_runner.go:130] > # containers images, in this directory.
	I0927 01:10:52.286823   51589 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0927 01:10:52.286838   51589 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0927 01:10:52.286847   51589 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0927 01:10:52.286858   51589 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0927 01:10:52.286870   51589 command_runner.go:130] > # imagestore = ""
	I0927 01:10:52.286880   51589 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0927 01:10:52.286890   51589 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0927 01:10:52.286899   51589 command_runner.go:130] > storage_driver = "overlay"
	I0927 01:10:52.286906   51589 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0927 01:10:52.286912   51589 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0927 01:10:52.286919   51589 command_runner.go:130] > storage_option = [
	I0927 01:10:52.286930   51589 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0927 01:10:52.286936   51589 command_runner.go:130] > ]
	I0927 01:10:52.286946   51589 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0927 01:10:52.286956   51589 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0927 01:10:52.286964   51589 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0927 01:10:52.286975   51589 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0927 01:10:52.286985   51589 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0927 01:10:52.286993   51589 command_runner.go:130] > # always happen on a node reboot
	I0927 01:10:52.286998   51589 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0927 01:10:52.287016   51589 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0927 01:10:52.287028   51589 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0927 01:10:52.287036   51589 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0927 01:10:52.287047   51589 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0927 01:10:52.287059   51589 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0927 01:10:52.287073   51589 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0927 01:10:52.287080   51589 command_runner.go:130] > # internal_wipe = true
	I0927 01:10:52.287097   51589 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0927 01:10:52.287108   51589 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0927 01:10:52.287115   51589 command_runner.go:130] > # internal_repair = false
	I0927 01:10:52.287137   51589 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0927 01:10:52.287150   51589 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0927 01:10:52.287162   51589 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0927 01:10:52.287171   51589 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0927 01:10:52.287183   51589 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0927 01:10:52.287192   51589 command_runner.go:130] > [crio.api]
	I0927 01:10:52.287201   51589 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0927 01:10:52.287212   51589 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0927 01:10:52.287221   51589 command_runner.go:130] > # IP address on which the stream server will listen.
	I0927 01:10:52.287231   51589 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0927 01:10:52.287242   51589 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0927 01:10:52.287253   51589 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0927 01:10:52.287263   51589 command_runner.go:130] > # stream_port = "0"
	I0927 01:10:52.287272   51589 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0927 01:10:52.287280   51589 command_runner.go:130] > # stream_enable_tls = false
	I0927 01:10:52.287286   51589 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0927 01:10:52.287294   51589 command_runner.go:130] > # stream_idle_timeout = ""
	I0927 01:10:52.287313   51589 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0927 01:10:52.287327   51589 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0927 01:10:52.287333   51589 command_runner.go:130] > # minutes.
	I0927 01:10:52.287340   51589 command_runner.go:130] > # stream_tls_cert = ""
	I0927 01:10:52.287352   51589 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0927 01:10:52.287364   51589 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0927 01:10:52.287372   51589 command_runner.go:130] > # stream_tls_key = ""
	I0927 01:10:52.287378   51589 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0927 01:10:52.287386   51589 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0927 01:10:52.287404   51589 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0927 01:10:52.287410   51589 command_runner.go:130] > # stream_tls_ca = ""
	I0927 01:10:52.287420   51589 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0927 01:10:52.287432   51589 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0927 01:10:52.287442   51589 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0927 01:10:52.287451   51589 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0927 01:10:52.287458   51589 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0927 01:10:52.287469   51589 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0927 01:10:52.287474   51589 command_runner.go:130] > [crio.runtime]
	I0927 01:10:52.287485   51589 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0927 01:10:52.287496   51589 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0927 01:10:52.287503   51589 command_runner.go:130] > # "nofile=1024:2048"
	I0927 01:10:52.287512   51589 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0927 01:10:52.287522   51589 command_runner.go:130] > # default_ulimits = [
	I0927 01:10:52.287528   51589 command_runner.go:130] > # ]
	I0927 01:10:52.287540   51589 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0927 01:10:52.287549   51589 command_runner.go:130] > # no_pivot = false
	I0927 01:10:52.287560   51589 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0927 01:10:52.287573   51589 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0927 01:10:52.287583   51589 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0927 01:10:52.287593   51589 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0927 01:10:52.287604   51589 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0927 01:10:52.287618   51589 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0927 01:10:52.287630   51589 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0927 01:10:52.287638   51589 command_runner.go:130] > # Cgroup setting for conmon
	I0927 01:10:52.287650   51589 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0927 01:10:52.287659   51589 command_runner.go:130] > conmon_cgroup = "pod"
	I0927 01:10:52.287669   51589 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0927 01:10:52.287680   51589 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0927 01:10:52.287694   51589 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0927 01:10:52.287703   51589 command_runner.go:130] > conmon_env = [
	I0927 01:10:52.287715   51589 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0927 01:10:52.287723   51589 command_runner.go:130] > ]
	I0927 01:10:52.287730   51589 command_runner.go:130] > # Additional environment variables to set for all the
	I0927 01:10:52.287741   51589 command_runner.go:130] > # containers. These are overridden if set in the
	I0927 01:10:52.287751   51589 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0927 01:10:52.287759   51589 command_runner.go:130] > # default_env = [
	I0927 01:10:52.287765   51589 command_runner.go:130] > # ]
	I0927 01:10:52.287777   51589 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0927 01:10:52.287791   51589 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0927 01:10:52.287799   51589 command_runner.go:130] > # selinux = false
	I0927 01:10:52.287808   51589 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0927 01:10:52.287821   51589 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0927 01:10:52.287832   51589 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0927 01:10:52.287841   51589 command_runner.go:130] > # seccomp_profile = ""
	I0927 01:10:52.287850   51589 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0927 01:10:52.287862   51589 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0927 01:10:52.287875   51589 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0927 01:10:52.287884   51589 command_runner.go:130] > # which might increase security.
	I0927 01:10:52.287892   51589 command_runner.go:130] > # This option is currently deprecated,
	I0927 01:10:52.287903   51589 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0927 01:10:52.287911   51589 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0927 01:10:52.287930   51589 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0927 01:10:52.287945   51589 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0927 01:10:52.287956   51589 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0927 01:10:52.287968   51589 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0927 01:10:52.287980   51589 command_runner.go:130] > # This option supports live configuration reload.
	I0927 01:10:52.287991   51589 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0927 01:10:52.288000   51589 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0927 01:10:52.288010   51589 command_runner.go:130] > # the cgroup blockio controller.
	I0927 01:10:52.288016   51589 command_runner.go:130] > # blockio_config_file = ""
	I0927 01:10:52.288029   51589 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0927 01:10:52.288038   51589 command_runner.go:130] > # blockio parameters.
	I0927 01:10:52.288045   51589 command_runner.go:130] > # blockio_reload = false
	I0927 01:10:52.288058   51589 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0927 01:10:52.288067   51589 command_runner.go:130] > # irqbalance daemon.
	I0927 01:10:52.288075   51589 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0927 01:10:52.288087   51589 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0927 01:10:52.288097   51589 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0927 01:10:52.288110   51589 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0927 01:10:52.288122   51589 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0927 01:10:52.288141   51589 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0927 01:10:52.288151   51589 command_runner.go:130] > # This option supports live configuration reload.
	I0927 01:10:52.288158   51589 command_runner.go:130] > # rdt_config_file = ""
	I0927 01:10:52.288169   51589 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0927 01:10:52.288181   51589 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0927 01:10:52.288206   51589 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0927 01:10:52.288216   51589 command_runner.go:130] > # separate_pull_cgroup = ""
	I0927 01:10:52.288226   51589 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0927 01:10:52.288238   51589 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0927 01:10:52.288247   51589 command_runner.go:130] > # will be added.
	I0927 01:10:52.288255   51589 command_runner.go:130] > # default_capabilities = [
	I0927 01:10:52.288264   51589 command_runner.go:130] > # 	"CHOWN",
	I0927 01:10:52.288271   51589 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0927 01:10:52.288279   51589 command_runner.go:130] > # 	"FSETID",
	I0927 01:10:52.288285   51589 command_runner.go:130] > # 	"FOWNER",
	I0927 01:10:52.288293   51589 command_runner.go:130] > # 	"SETGID",
	I0927 01:10:52.288300   51589 command_runner.go:130] > # 	"SETUID",
	I0927 01:10:52.288309   51589 command_runner.go:130] > # 	"SETPCAP",
	I0927 01:10:52.288316   51589 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0927 01:10:52.288324   51589 command_runner.go:130] > # 	"KILL",
	I0927 01:10:52.288330   51589 command_runner.go:130] > # ]
	I0927 01:10:52.288342   51589 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0927 01:10:52.288355   51589 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0927 01:10:52.288363   51589 command_runner.go:130] > # add_inheritable_capabilities = false
	I0927 01:10:52.288376   51589 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0927 01:10:52.288388   51589 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0927 01:10:52.288394   51589 command_runner.go:130] > default_sysctls = [
	I0927 01:10:52.288405   51589 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0927 01:10:52.288412   51589 command_runner.go:130] > ]
	I0927 01:10:52.288420   51589 command_runner.go:130] > # List of devices on the host that a
	I0927 01:10:52.288431   51589 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0927 01:10:52.288441   51589 command_runner.go:130] > # allowed_devices = [
	I0927 01:10:52.288447   51589 command_runner.go:130] > # 	"/dev/fuse",
	I0927 01:10:52.288454   51589 command_runner.go:130] > # ]
	I0927 01:10:52.288462   51589 command_runner.go:130] > # List of additional devices. specified as
	I0927 01:10:52.288475   51589 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0927 01:10:52.288487   51589 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0927 01:10:52.288506   51589 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0927 01:10:52.288515   51589 command_runner.go:130] > # additional_devices = [
	I0927 01:10:52.288521   51589 command_runner.go:130] > # ]
	I0927 01:10:52.288531   51589 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0927 01:10:52.288538   51589 command_runner.go:130] > # cdi_spec_dirs = [
	I0927 01:10:52.288544   51589 command_runner.go:130] > # 	"/etc/cdi",
	I0927 01:10:52.288555   51589 command_runner.go:130] > # 	"/var/run/cdi",
	I0927 01:10:52.288561   51589 command_runner.go:130] > # ]
	I0927 01:10:52.288571   51589 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0927 01:10:52.288584   51589 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0927 01:10:52.288593   51589 command_runner.go:130] > # Defaults to false.
	I0927 01:10:52.288601   51589 command_runner.go:130] > # device_ownership_from_security_context = false
	I0927 01:10:52.288613   51589 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0927 01:10:52.288625   51589 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0927 01:10:52.288632   51589 command_runner.go:130] > # hooks_dir = [
	I0927 01:10:52.288642   51589 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0927 01:10:52.288648   51589 command_runner.go:130] > # ]
	I0927 01:10:52.288658   51589 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0927 01:10:52.288670   51589 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0927 01:10:52.288681   51589 command_runner.go:130] > # its default mounts from the following two files:
	I0927 01:10:52.288689   51589 command_runner.go:130] > #
	I0927 01:10:52.288698   51589 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0927 01:10:52.288713   51589 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0927 01:10:52.288725   51589 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0927 01:10:52.288731   51589 command_runner.go:130] > #
	I0927 01:10:52.288742   51589 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0927 01:10:52.288755   51589 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0927 01:10:52.288768   51589 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0927 01:10:52.288780   51589 command_runner.go:130] > #      only add mounts it finds in this file.
	I0927 01:10:52.288786   51589 command_runner.go:130] > #
	I0927 01:10:52.288793   51589 command_runner.go:130] > # default_mounts_file = ""
	I0927 01:10:52.288804   51589 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0927 01:10:52.288817   51589 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0927 01:10:52.288828   51589 command_runner.go:130] > pids_limit = 1024
	I0927 01:10:52.288837   51589 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0927 01:10:52.288846   51589 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0927 01:10:52.288860   51589 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0927 01:10:52.288877   51589 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0927 01:10:52.288885   51589 command_runner.go:130] > # log_size_max = -1
	I0927 01:10:52.288896   51589 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0927 01:10:52.288906   51589 command_runner.go:130] > # log_to_journald = false
	I0927 01:10:52.288916   51589 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0927 01:10:52.288925   51589 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0927 01:10:52.288933   51589 command_runner.go:130] > # Path to directory for container attach sockets.
	I0927 01:10:52.288943   51589 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0927 01:10:52.288952   51589 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0927 01:10:52.288961   51589 command_runner.go:130] > # bind_mount_prefix = ""
	I0927 01:10:52.288971   51589 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0927 01:10:52.288980   51589 command_runner.go:130] > # read_only = false
	I0927 01:10:52.288990   51589 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0927 01:10:52.289002   51589 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0927 01:10:52.289009   51589 command_runner.go:130] > # live configuration reload.
	I0927 01:10:52.289019   51589 command_runner.go:130] > # log_level = "info"
	I0927 01:10:52.289027   51589 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0927 01:10:52.289038   51589 command_runner.go:130] > # This option supports live configuration reload.
	I0927 01:10:52.289048   51589 command_runner.go:130] > # log_filter = ""
	I0927 01:10:52.289056   51589 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0927 01:10:52.289068   51589 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0927 01:10:52.289079   51589 command_runner.go:130] > # separated by comma.
	I0927 01:10:52.289090   51589 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0927 01:10:52.289099   51589 command_runner.go:130] > # uid_mappings = ""
	I0927 01:10:52.289108   51589 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0927 01:10:52.289121   51589 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0927 01:10:52.289134   51589 command_runner.go:130] > # separated by comma.
	I0927 01:10:52.289147   51589 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0927 01:10:52.289157   51589 command_runner.go:130] > # gid_mappings = ""
	I0927 01:10:52.289168   51589 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0927 01:10:52.289176   51589 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0927 01:10:52.289186   51589 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0927 01:10:52.289201   51589 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0927 01:10:52.289211   51589 command_runner.go:130] > # minimum_mappable_uid = -1
	I0927 01:10:52.289224   51589 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0927 01:10:52.289236   51589 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0927 01:10:52.289248   51589 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0927 01:10:52.289261   51589 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0927 01:10:52.289271   51589 command_runner.go:130] > # minimum_mappable_gid = -1
	I0927 01:10:52.289281   51589 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0927 01:10:52.289293   51589 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0927 01:10:52.289305   51589 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0927 01:10:52.289315   51589 command_runner.go:130] > # ctr_stop_timeout = 30
	I0927 01:10:52.289325   51589 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0927 01:10:52.289337   51589 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0927 01:10:52.289348   51589 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0927 01:10:52.289359   51589 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0927 01:10:52.289367   51589 command_runner.go:130] > drop_infra_ctr = false
	I0927 01:10:52.289377   51589 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0927 01:10:52.289388   51589 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0927 01:10:52.289402   51589 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0927 01:10:52.289412   51589 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0927 01:10:52.289423   51589 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0927 01:10:52.289434   51589 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0927 01:10:52.289446   51589 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0927 01:10:52.289454   51589 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0927 01:10:52.289459   51589 command_runner.go:130] > # shared_cpuset = ""
	I0927 01:10:52.289472   51589 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0927 01:10:52.289483   51589 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0927 01:10:52.289489   51589 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0927 01:10:52.289503   51589 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0927 01:10:52.289513   51589 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0927 01:10:52.289523   51589 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0927 01:10:52.289535   51589 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0927 01:10:52.289545   51589 command_runner.go:130] > # enable_criu_support = false
	I0927 01:10:52.289551   51589 command_runner.go:130] > # Enable/disable the generation of the container,
	I0927 01:10:52.289563   51589 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0927 01:10:52.289571   51589 command_runner.go:130] > # enable_pod_events = false
	I0927 01:10:52.289583   51589 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0927 01:10:52.289607   51589 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0927 01:10:52.289616   51589 command_runner.go:130] > # default_runtime = "runc"
	I0927 01:10:52.289624   51589 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0927 01:10:52.289637   51589 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0927 01:10:52.289649   51589 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0927 01:10:52.289660   51589 command_runner.go:130] > # creation as a file is not desired either.
	I0927 01:10:52.289675   51589 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0927 01:10:52.289686   51589 command_runner.go:130] > # the hostname is being managed dynamically.
	I0927 01:10:52.289695   51589 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0927 01:10:52.289704   51589 command_runner.go:130] > # ]
	I0927 01:10:52.289714   51589 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0927 01:10:52.289724   51589 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0927 01:10:52.289730   51589 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0927 01:10:52.289741   51589 command_runner.go:130] > # Each entry in the table should follow the format:
	I0927 01:10:52.289748   51589 command_runner.go:130] > #
	I0927 01:10:52.289756   51589 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0927 01:10:52.289767   51589 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0927 01:10:52.289800   51589 command_runner.go:130] > # runtime_type = "oci"
	I0927 01:10:52.289810   51589 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0927 01:10:52.289817   51589 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0927 01:10:52.289824   51589 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0927 01:10:52.289829   51589 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0927 01:10:52.289837   51589 command_runner.go:130] > # monitor_env = []
	I0927 01:10:52.289845   51589 command_runner.go:130] > # privileged_without_host_devices = false
	I0927 01:10:52.289855   51589 command_runner.go:130] > # allowed_annotations = []
	I0927 01:10:52.289865   51589 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0927 01:10:52.289874   51589 command_runner.go:130] > # Where:
	I0927 01:10:52.289882   51589 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0927 01:10:52.289895   51589 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0927 01:10:52.289906   51589 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0927 01:10:52.289913   51589 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0927 01:10:52.289919   51589 command_runner.go:130] > #   in $PATH.
	I0927 01:10:52.289932   51589 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0927 01:10:52.289941   51589 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0927 01:10:52.289953   51589 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0927 01:10:52.289961   51589 command_runner.go:130] > #   state.
	I0927 01:10:52.289971   51589 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0927 01:10:52.289983   51589 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0927 01:10:52.289991   51589 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0927 01:10:52.289999   51589 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0927 01:10:52.290012   51589 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0927 01:10:52.290025   51589 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0927 01:10:52.290033   51589 command_runner.go:130] > #   The currently recognized values are:
	I0927 01:10:52.290046   51589 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0927 01:10:52.290061   51589 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0927 01:10:52.290073   51589 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0927 01:10:52.290084   51589 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0927 01:10:52.290095   51589 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0927 01:10:52.290102   51589 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0927 01:10:52.290115   51589 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0927 01:10:52.290132   51589 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0927 01:10:52.290144   51589 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0927 01:10:52.290157   51589 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0927 01:10:52.290167   51589 command_runner.go:130] > #   deprecated option "conmon".
	I0927 01:10:52.290180   51589 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0927 01:10:52.290190   51589 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0927 01:10:52.290200   51589 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0927 01:10:52.290206   51589 command_runner.go:130] > #   should be moved to the container's cgroup
	I0927 01:10:52.290222   51589 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0927 01:10:52.290234   51589 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0927 01:10:52.290246   51589 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0927 01:10:52.290258   51589 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0927 01:10:52.290266   51589 command_runner.go:130] > #
	I0927 01:10:52.290274   51589 command_runner.go:130] > # Using the seccomp notifier feature:
	I0927 01:10:52.290280   51589 command_runner.go:130] > #
	I0927 01:10:52.290286   51589 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0927 01:10:52.290297   51589 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0927 01:10:52.290306   51589 command_runner.go:130] > #
	I0927 01:10:52.290316   51589 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0927 01:10:52.290328   51589 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0927 01:10:52.290333   51589 command_runner.go:130] > #
	I0927 01:10:52.290346   51589 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0927 01:10:52.290354   51589 command_runner.go:130] > # feature.
	I0927 01:10:52.290359   51589 command_runner.go:130] > #
	I0927 01:10:52.290369   51589 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0927 01:10:52.290379   51589 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0927 01:10:52.290391   51589 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0927 01:10:52.290403   51589 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0927 01:10:52.290416   51589 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0927 01:10:52.290424   51589 command_runner.go:130] > #
	I0927 01:10:52.290438   51589 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0927 01:10:52.290450   51589 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0927 01:10:52.290457   51589 command_runner.go:130] > #
	I0927 01:10:52.290464   51589 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0927 01:10:52.290473   51589 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0927 01:10:52.290479   51589 command_runner.go:130] > #
	I0927 01:10:52.290492   51589 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0927 01:10:52.290502   51589 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0927 01:10:52.290511   51589 command_runner.go:130] > # limitation.
	I0927 01:10:52.290518   51589 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0927 01:10:52.290528   51589 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0927 01:10:52.290536   51589 command_runner.go:130] > runtime_type = "oci"
	I0927 01:10:52.290546   51589 command_runner.go:130] > runtime_root = "/run/runc"
	I0927 01:10:52.290555   51589 command_runner.go:130] > runtime_config_path = ""
	I0927 01:10:52.290562   51589 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0927 01:10:52.290568   51589 command_runner.go:130] > monitor_cgroup = "pod"
	I0927 01:10:52.290576   51589 command_runner.go:130] > monitor_exec_cgroup = ""
	I0927 01:10:52.290583   51589 command_runner.go:130] > monitor_env = [
	I0927 01:10:52.290595   51589 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0927 01:10:52.290600   51589 command_runner.go:130] > ]
	I0927 01:10:52.290611   51589 command_runner.go:130] > privileged_without_host_devices = false
	I0927 01:10:52.290622   51589 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0927 01:10:52.290634   51589 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0927 01:10:52.290646   51589 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0927 01:10:52.290659   51589 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0927 01:10:52.290669   51589 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0927 01:10:52.290677   51589 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0927 01:10:52.290695   51589 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0927 01:10:52.290710   51589 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0927 01:10:52.290721   51589 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0927 01:10:52.290736   51589 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0927 01:10:52.290744   51589 command_runner.go:130] > # Example:
	I0927 01:10:52.290751   51589 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0927 01:10:52.290758   51589 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0927 01:10:52.290766   51589 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0927 01:10:52.290777   51589 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0927 01:10:52.290786   51589 command_runner.go:130] > # cpuset = 0
	I0927 01:10:52.290793   51589 command_runner.go:130] > # cpushares = "0-1"
	I0927 01:10:52.290801   51589 command_runner.go:130] > # Where:
	I0927 01:10:52.290809   51589 command_runner.go:130] > # The workload name is workload-type.
	I0927 01:10:52.290823   51589 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0927 01:10:52.290834   51589 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0927 01:10:52.290842   51589 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0927 01:10:52.290851   51589 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0927 01:10:52.290864   51589 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0927 01:10:52.290875   51589 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0927 01:10:52.290888   51589 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0927 01:10:52.290898   51589 command_runner.go:130] > # Default value is set to true
	I0927 01:10:52.290905   51589 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0927 01:10:52.290917   51589 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0927 01:10:52.290925   51589 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0927 01:10:52.290930   51589 command_runner.go:130] > # Default value is set to 'false'
	I0927 01:10:52.290938   51589 command_runner.go:130] > # disable_hostport_mapping = false
	I0927 01:10:52.290950   51589 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0927 01:10:52.290956   51589 command_runner.go:130] > #
	I0927 01:10:52.290968   51589 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0927 01:10:52.290980   51589 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0927 01:10:52.290990   51589 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0927 01:10:52.291000   51589 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0927 01:10:52.291009   51589 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0927 01:10:52.291014   51589 command_runner.go:130] > [crio.image]
	I0927 01:10:52.291020   51589 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0927 01:10:52.291024   51589 command_runner.go:130] > # default_transport = "docker://"
	I0927 01:10:52.291033   51589 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0927 01:10:52.291042   51589 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0927 01:10:52.291049   51589 command_runner.go:130] > # global_auth_file = ""
	I0927 01:10:52.291059   51589 command_runner.go:130] > # The image used to instantiate infra containers.
	I0927 01:10:52.291067   51589 command_runner.go:130] > # This option supports live configuration reload.
	I0927 01:10:52.291074   51589 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0927 01:10:52.291083   51589 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0927 01:10:52.291091   51589 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0927 01:10:52.291097   51589 command_runner.go:130] > # This option supports live configuration reload.
	I0927 01:10:52.291104   51589 command_runner.go:130] > # pause_image_auth_file = ""
	I0927 01:10:52.291113   51589 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0927 01:10:52.291123   51589 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0927 01:10:52.291136   51589 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0927 01:10:52.291146   51589 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0927 01:10:52.291154   51589 command_runner.go:130] > # pause_command = "/pause"
	I0927 01:10:52.291163   51589 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0927 01:10:52.291173   51589 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0927 01:10:52.291182   51589 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0927 01:10:52.291188   51589 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0927 01:10:52.291195   51589 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0927 01:10:52.291204   51589 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0927 01:10:52.291212   51589 command_runner.go:130] > # pinned_images = [
	I0927 01:10:52.291218   51589 command_runner.go:130] > # ]
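To illustrate the three match styles described above, a hedged sketch of an uncommented pinned_images entry could look like the following (only the pause image is taken from this run; the other names are illustrative):

	pinned_images = [
	  "registry.k8s.io/pause:3.10",   # exact match (must match the entire name)
	  "registry.k8s.io/kube-*",       # glob match (wildcard only at the end)
	  "*busybox*",                    # keyword match (wildcards on both ends)
	]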
	I0927 01:10:52.291228   51589 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0927 01:10:52.291241   51589 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0927 01:10:52.291251   51589 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0927 01:10:52.291265   51589 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0927 01:10:52.291275   51589 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0927 01:10:52.291282   51589 command_runner.go:130] > # signature_policy = ""
	I0927 01:10:52.291294   51589 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0927 01:10:52.291319   51589 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0927 01:10:52.291330   51589 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0927 01:10:52.291341   51589 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0927 01:10:52.291353   51589 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0927 01:10:52.291363   51589 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0927 01:10:52.291376   51589 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0927 01:10:52.291389   51589 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0927 01:10:52.291399   51589 command_runner.go:130] > # changing them here.
	I0927 01:10:52.291408   51589 command_runner.go:130] > # insecure_registries = [
	I0927 01:10:52.291416   51589 command_runner.go:130] > # ]
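Rather than uncommenting insecure_registries here, the comments above recommend configuring registries system-wide via containers-registries.conf(5); a minimal sketch of such a file (the registry address is hypothetical) might be:

	# /etc/containers/registries.conf
	unqualified-search-registries = ["docker.io"]

	[[registry]]
	location = "192.168.39.1:5000"   # hypothetical local registry
	insecure = true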
	I0927 01:10:52.291426   51589 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0927 01:10:52.291436   51589 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0927 01:10:52.291443   51589 command_runner.go:130] > # image_volumes = "mkdir"
	I0927 01:10:52.291453   51589 command_runner.go:130] > # Temporary directory to use for storing big files
	I0927 01:10:52.291466   51589 command_runner.go:130] > # big_files_temporary_dir = ""
	I0927 01:10:52.291475   51589 command_runner.go:130] > # The crio.network table containers settings pertaining to the management of
	I0927 01:10:52.291479   51589 command_runner.go:130] > # CNI plugins.
	I0927 01:10:52.291485   51589 command_runner.go:130] > [crio.network]
	I0927 01:10:52.291497   51589 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0927 01:10:52.291511   51589 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0927 01:10:52.291520   51589 command_runner.go:130] > # cni_default_network = ""
	I0927 01:10:52.291529   51589 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0927 01:10:52.291539   51589 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0927 01:10:52.291548   51589 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0927 01:10:52.291556   51589 command_runner.go:130] > # plugin_dirs = [
	I0927 01:10:52.291561   51589 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0927 01:10:52.291566   51589 command_runner.go:130] > # ]
	I0927 01:10:52.291575   51589 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0927 01:10:52.291584   51589 command_runner.go:130] > [crio.metrics]
	I0927 01:10:52.291591   51589 command_runner.go:130] > # Globally enable or disable metrics support.
	I0927 01:10:52.291601   51589 command_runner.go:130] > enable_metrics = true
	I0927 01:10:52.291612   51589 command_runner.go:130] > # Specify enabled metrics collectors.
	I0927 01:10:52.291622   51589 command_runner.go:130] > # Per default all metrics are enabled.
	I0927 01:10:52.291634   51589 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0927 01:10:52.291646   51589 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0927 01:10:52.291655   51589 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0927 01:10:52.291664   51589 command_runner.go:130] > # metrics_collectors = [
	I0927 01:10:52.291673   51589 command_runner.go:130] > # 	"operations",
	I0927 01:10:52.291684   51589 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0927 01:10:52.291694   51589 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0927 01:10:52.291703   51589 command_runner.go:130] > # 	"operations_errors",
	I0927 01:10:52.291712   51589 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0927 01:10:52.291721   51589 command_runner.go:130] > # 	"image_pulls_by_name",
	I0927 01:10:52.291731   51589 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0927 01:10:52.291738   51589 command_runner.go:130] > # 	"image_pulls_failures",
	I0927 01:10:52.291742   51589 command_runner.go:130] > # 	"image_pulls_successes",
	I0927 01:10:52.291752   51589 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0927 01:10:52.291761   51589 command_runner.go:130] > # 	"image_layer_reuse",
	I0927 01:10:52.291769   51589 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0927 01:10:52.291779   51589 command_runner.go:130] > # 	"containers_oom_total",
	I0927 01:10:52.291786   51589 command_runner.go:130] > # 	"containers_oom",
	I0927 01:10:52.291795   51589 command_runner.go:130] > # 	"processes_defunct",
	I0927 01:10:52.291803   51589 command_runner.go:130] > # 	"operations_total",
	I0927 01:10:52.291812   51589 command_runner.go:130] > # 	"operations_latency_seconds",
	I0927 01:10:52.291819   51589 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0927 01:10:52.291827   51589 command_runner.go:130] > # 	"operations_errors_total",
	I0927 01:10:52.291832   51589 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0927 01:10:52.291840   51589 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0927 01:10:52.291847   51589 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0927 01:10:52.291858   51589 command_runner.go:130] > # 	"image_pulls_success_total",
	I0927 01:10:52.291866   51589 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0927 01:10:52.291876   51589 command_runner.go:130] > # 	"containers_oom_count_total",
	I0927 01:10:52.291884   51589 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0927 01:10:52.291895   51589 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0927 01:10:52.291902   51589 command_runner.go:130] > # ]
	I0927 01:10:52.291911   51589 command_runner.go:130] > # The port on which the metrics server will listen.
	I0927 01:10:52.291920   51589 command_runner.go:130] > # metrics_port = 9090
	I0927 01:10:52.291928   51589 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0927 01:10:52.291936   51589 command_runner.go:130] > # metrics_socket = ""
	I0927 01:10:52.291943   51589 command_runner.go:130] > # The certificate for the secure metrics server.
	I0927 01:10:52.291955   51589 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0927 01:10:52.291968   51589 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0927 01:10:52.291978   51589 command_runner.go:130] > # certificate on any modification event.
	I0927 01:10:52.291987   51589 command_runner.go:130] > # metrics_cert = ""
	I0927 01:10:52.291997   51589 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0927 01:10:52.292008   51589 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0927 01:10:52.292017   51589 command_runner.go:130] > # metrics_key = ""
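Since enable_metrics is set to true in this run and the default metrics_port is 9090, a Prometheus scrape job for this node could be sketched as follows (the job name is illustrative; 192.168.39.203 is the node IP used elsewhere in this log):

	scrape_configs:
	  - job_name: crio
	    static_configs:
	      - targets: ["192.168.39.203:9090"]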
	I0927 01:10:52.292026   51589 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0927 01:10:52.292031   51589 command_runner.go:130] > [crio.tracing]
	I0927 01:10:52.292043   51589 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0927 01:10:52.292053   51589 command_runner.go:130] > # enable_tracing = false
	I0927 01:10:52.292061   51589 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0927 01:10:52.292071   51589 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0927 01:10:52.292085   51589 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0927 01:10:52.292094   51589 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
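For reference, enabling the tracing options commented out above would amount to a crio.conf fragment like this (the endpoint and sampling rate are illustrative; an OTLP gRPC collector must be listening at the endpoint):

	[crio.tracing]
	enable_tracing = true
	tracing_endpoint = "127.0.0.1:4317"
	tracing_sampling_rate_per_million = 1000000   # always sample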
	I0927 01:10:52.292104   51589 command_runner.go:130] > # CRI-O NRI configuration.
	I0927 01:10:52.292113   51589 command_runner.go:130] > [crio.nri]
	I0927 01:10:52.292120   51589 command_runner.go:130] > # Globally enable or disable NRI.
	I0927 01:10:52.292124   51589 command_runner.go:130] > # enable_nri = false
	I0927 01:10:52.292132   51589 command_runner.go:130] > # NRI socket to listen on.
	I0927 01:10:52.292143   51589 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0927 01:10:52.292153   51589 command_runner.go:130] > # NRI plugin directory to use.
	I0927 01:10:52.292161   51589 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0927 01:10:52.292172   51589 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0927 01:10:52.292183   51589 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0927 01:10:52.292194   51589 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0927 01:10:52.292203   51589 command_runner.go:130] > # nri_disable_connections = false
	I0927 01:10:52.292214   51589 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0927 01:10:52.292223   51589 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0927 01:10:52.292231   51589 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0927 01:10:52.292242   51589 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0927 01:10:52.292256   51589 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0927 01:10:52.292265   51589 command_runner.go:130] > [crio.stats]
	I0927 01:10:52.292278   51589 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0927 01:10:52.292293   51589 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0927 01:10:52.292303   51589 command_runner.go:130] > # stats_collection_period = 0
	I0927 01:10:52.292330   51589 command_runner.go:130] ! time="2024-09-27 01:10:52.249317077Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0927 01:10:52.292350   51589 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0927 01:10:52.292435   51589 cni.go:84] Creating CNI manager for ""
	I0927 01:10:52.292450   51589 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0927 01:10:52.292460   51589 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 01:10:52.292489   51589 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.203 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-833343 NodeName:multinode-833343 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.203"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.203 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0927 01:10:52.292645   51589 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.203
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-833343"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.203
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.203"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
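	The generated config above only covers the control-plane node (multinode-833343); as a hedged sketch, a worker such as m02 would join with a kubeadm JoinConfiguration along these lines (the token and CA hash are placeholders; 192.168.39.123 is the m02 IP recorded later in this run):

	apiVersion: kubeadm.k8s.io/v1beta3
	kind: JoinConfiguration
	discovery:
	  bootstrapToken:
	    apiServerEndpoint: control-plane.minikube.internal:8443
	    token: abcdef.0123456789abcdef       # placeholder token
	    caCertHashes: ["sha256:<hash>"]      # placeholder CA cert hash
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-833343-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.123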
	I0927 01:10:52.292712   51589 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 01:10:52.302664   51589 command_runner.go:130] > kubeadm
	I0927 01:10:52.302685   51589 command_runner.go:130] > kubectl
	I0927 01:10:52.302693   51589 command_runner.go:130] > kubelet
	I0927 01:10:52.302712   51589 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 01:10:52.302761   51589 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0927 01:10:52.311906   51589 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0927 01:10:52.328741   51589 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 01:10:52.345113   51589 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0927 01:10:52.362036   51589 ssh_runner.go:195] Run: grep 192.168.39.203	control-plane.minikube.internal$ /etc/hosts
	I0927 01:10:52.365903   51589 command_runner.go:130] > 192.168.39.203	control-plane.minikube.internal
	I0927 01:10:52.366127   51589 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:10:52.502117   51589 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 01:10:52.516551   51589 certs.go:68] Setting up /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/multinode-833343 for IP: 192.168.39.203
	I0927 01:10:52.516574   51589 certs.go:194] generating shared ca certs ...
	I0927 01:10:52.516593   51589 certs.go:226] acquiring lock for ca certs: {Name:mkdfc5b7e93f77f5ae72cc653545624244421aa5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:10:52.516735   51589 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key
	I0927 01:10:52.516787   51589 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key
	I0927 01:10:52.516799   51589 certs.go:256] generating profile certs ...
	I0927 01:10:52.516894   51589 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/multinode-833343/client.key
	I0927 01:10:52.516981   51589 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/multinode-833343/apiserver.key.9a165d03
	I0927 01:10:52.517026   51589 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/multinode-833343/proxy-client.key
	I0927 01:10:52.517042   51589 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0927 01:10:52.517062   51589 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0927 01:10:52.517079   51589 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0927 01:10:52.517096   51589 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0927 01:10:52.517113   51589 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/multinode-833343/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0927 01:10:52.517146   51589 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/multinode-833343/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0927 01:10:52.517164   51589 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/multinode-833343/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0927 01:10:52.517178   51589 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/multinode-833343/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0927 01:10:52.517244   51589 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem (1338 bytes)
	W0927 01:10:52.517288   51589 certs.go:480] ignoring /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138_empty.pem, impossibly tiny 0 bytes
	I0927 01:10:52.517301   51589 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem (1671 bytes)
	I0927 01:10:52.517335   51589 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem (1078 bytes)
	I0927 01:10:52.517367   51589 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem (1123 bytes)
	I0927 01:10:52.517398   51589 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem (1675 bytes)
	I0927 01:10:52.517453   51589 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem (1708 bytes)
	I0927 01:10:52.517490   51589 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> /usr/share/ca-certificates/221382.pem
	I0927 01:10:52.517516   51589 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:10:52.517534   51589 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem -> /usr/share/ca-certificates/22138.pem
	I0927 01:10:52.518144   51589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 01:10:52.543500   51589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0927 01:10:52.569028   51589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 01:10:52.593566   51589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0927 01:10:52.618361   51589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/multinode-833343/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0927 01:10:52.642366   51589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/multinode-833343/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0927 01:10:52.668271   51589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/multinode-833343/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 01:10:52.692436   51589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/multinode-833343/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0927 01:10:52.717822   51589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /usr/share/ca-certificates/221382.pem (1708 bytes)
	I0927 01:10:52.742674   51589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 01:10:52.767220   51589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem --> /usr/share/ca-certificates/22138.pem (1338 bytes)
	I0927 01:10:52.792283   51589 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 01:10:52.808906   51589 ssh_runner.go:195] Run: openssl version
	I0927 01:10:52.814744   51589 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0927 01:10:52.814813   51589 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221382.pem && ln -fs /usr/share/ca-certificates/221382.pem /etc/ssl/certs/221382.pem"
	I0927 01:10:52.825656   51589 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221382.pem
	I0927 01:10:52.830091   51589 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 27 00:32 /usr/share/ca-certificates/221382.pem
	I0927 01:10:52.830186   51589 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 00:32 /usr/share/ca-certificates/221382.pem
	I0927 01:10:52.830236   51589 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221382.pem
	I0927 01:10:52.835608   51589 command_runner.go:130] > 3ec20f2e
	I0927 01:10:52.835830   51589 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221382.pem /etc/ssl/certs/3ec20f2e.0"
	I0927 01:10:52.844943   51589 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 01:10:52.855761   51589 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:10:52.860340   51589 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 27 00:16 /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:10:52.860375   51589 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:16 /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:10:52.860428   51589 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:10:52.866130   51589 command_runner.go:130] > b5213941
	I0927 01:10:52.866259   51589 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 01:10:52.875803   51589 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22138.pem && ln -fs /usr/share/ca-certificates/22138.pem /etc/ssl/certs/22138.pem"
	I0927 01:10:52.886686   51589 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22138.pem
	I0927 01:10:52.891042   51589 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 27 00:32 /usr/share/ca-certificates/22138.pem
	I0927 01:10:52.891135   51589 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 00:32 /usr/share/ca-certificates/22138.pem
	I0927 01:10:52.891169   51589 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22138.pem
	I0927 01:10:52.896630   51589 command_runner.go:130] > 51391683
	I0927 01:10:52.896803   51589 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22138.pem /etc/ssl/certs/51391683.0"
	I0927 01:10:52.905897   51589 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 01:10:52.910324   51589 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 01:10:52.910347   51589 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0927 01:10:52.910355   51589 command_runner.go:130] > Device: 253,1	Inode: 2101800     Links: 1
	I0927 01:10:52.910367   51589 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0927 01:10:52.910376   51589 command_runner.go:130] > Access: 2024-09-27 01:04:01.612690520 +0000
	I0927 01:10:52.910387   51589 command_runner.go:130] > Modify: 2024-09-27 01:04:01.612690520 +0000
	I0927 01:10:52.910395   51589 command_runner.go:130] > Change: 2024-09-27 01:04:01.612690520 +0000
	I0927 01:10:52.910406   51589 command_runner.go:130] >  Birth: 2024-09-27 01:04:01.612690520 +0000
	I0927 01:10:52.910467   51589 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0927 01:10:52.916133   51589 command_runner.go:130] > Certificate will not expire
	I0927 01:10:52.916200   51589 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0927 01:10:52.921767   51589 command_runner.go:130] > Certificate will not expire
	I0927 01:10:52.921951   51589 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0927 01:10:52.927572   51589 command_runner.go:130] > Certificate will not expire
	I0927 01:10:52.927828   51589 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0927 01:10:52.933455   51589 command_runner.go:130] > Certificate will not expire
	I0927 01:10:52.933661   51589 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0927 01:10:52.939375   51589 command_runner.go:130] > Certificate will not expire
	I0927 01:10:52.939436   51589 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0927 01:10:52.945102   51589 command_runner.go:130] > Certificate will not expire
	I0927 01:10:52.945159   51589 kubeadm.go:392] StartCluster: {Name:multinode-833343 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:multinode-833343 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.123 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.88 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:f
alse istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 01:10:52.945269   51589 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0927 01:10:52.945328   51589 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 01:10:52.981948   51589 command_runner.go:130] > 3379d1c82431bb6880da5f7d200fd5033e3cfb0d51aad66dc910808404d154e7
	I0927 01:10:52.981969   51589 command_runner.go:130] > 02c5e4faf57e0e9a5ccc48f45ab304011b04405d198dc5bc85a74269b04fcdc0
	I0927 01:10:52.981975   51589 command_runner.go:130] > 9de6deb0a88fa5b3b6dd6eafc2ab9fb4555f20b3bdd03fcbd26ae4f4a22c9a06
	I0927 01:10:52.981983   51589 command_runner.go:130] > 51a77d274b9ce56df8fc9514cbf7cb259a438f500da3503cfdc2d9764caa2abe
	I0927 01:10:52.981990   51589 command_runner.go:130] > e8d19f9308bbcc24c8affd654a48314a6d36cf341176d734d49d3c07f2765ebf
	I0927 01:10:52.981995   51589 command_runner.go:130] > 15018f9c92547a079d4127cb3d77d4cbdd1c8ab51fb731b1db97fed907c807c8
	I0927 01:10:52.982000   51589 command_runner.go:130] > a9182a23994890788fd815a5d96b4084212911e8020da7c535bf8659a4c9343e
	I0927 01:10:52.982007   51589 command_runner.go:130] > 0a3e4bfb234ad9036b6eb4888da6fae5cc31b141799963a1ad4d1ca4982e70d1
	I0927 01:10:52.983491   51589 cri.go:89] found id: "3379d1c82431bb6880da5f7d200fd5033e3cfb0d51aad66dc910808404d154e7"
	I0927 01:10:52.983506   51589 cri.go:89] found id: "02c5e4faf57e0e9a5ccc48f45ab304011b04405d198dc5bc85a74269b04fcdc0"
	I0927 01:10:52.983510   51589 cri.go:89] found id: "9de6deb0a88fa5b3b6dd6eafc2ab9fb4555f20b3bdd03fcbd26ae4f4a22c9a06"
	I0927 01:10:52.983513   51589 cri.go:89] found id: "51a77d274b9ce56df8fc9514cbf7cb259a438f500da3503cfdc2d9764caa2abe"
	I0927 01:10:52.983516   51589 cri.go:89] found id: "e8d19f9308bbcc24c8affd654a48314a6d36cf341176d734d49d3c07f2765ebf"
	I0927 01:10:52.983519   51589 cri.go:89] found id: "15018f9c92547a079d4127cb3d77d4cbdd1c8ab51fb731b1db97fed907c807c8"
	I0927 01:10:52.983522   51589 cri.go:89] found id: "a9182a23994890788fd815a5d96b4084212911e8020da7c535bf8659a4c9343e"
	I0927 01:10:52.983524   51589 cri.go:89] found id: "0a3e4bfb234ad9036b6eb4888da6fae5cc31b141799963a1ad4d1ca4982e70d1"
	I0927 01:10:52.983527   51589 cri.go:89] found id: ""
	I0927 01:10:52.983564   51589 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Sep 27 01:15:03 multinode-833343 crio[2712]: time="2024-09-27 01:15:03.865590468Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727399703865567114,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=11350792-748d-421f-9a23-88413c333f47 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 01:15:03 multinode-833343 crio[2712]: time="2024-09-27 01:15:03.866185573Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d191694b-8f2a-4346-8a13-7f6886add449 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:15:03 multinode-833343 crio[2712]: time="2024-09-27 01:15:03.866265026Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d191694b-8f2a-4346-8a13-7f6886add449 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:15:03 multinode-833343 crio[2712]: time="2024-09-27 01:15:03.866624543Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:33b19bbc348c651a15977ac698195dcfd69096687843e4d5b8273a5279639f7f,PodSandboxId:745e3599198c2331e834fbb32a2305c0cbbede8f443bbc1c9fc60b7e74d32a15,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727399493536289086,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-cv7gx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 223ec194-be67-4fa7-8e79-b95dde6445d6,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dcd36b671b63d1ecfc7cb56fd9e7c9d36f92403f64729bdc762dff2d25501e1,PodSandboxId:11b08d025386ef2257bdf36d6d621817ab3bc0c25bfc28c03442e5ae0efce54c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727399460120549553,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qjx9d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2461ab02-e830-4e85-8541-651c97525d07,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9d4cfadfab2b913bab0d55b8c808e9b6ca83e86da87ab69fea8903986beb4c5,PodSandboxId:fcb963144f5921c3632172f66fdad7d3022c2ed6dba06e15b6313db1572dacb4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727399460030293837,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fxjdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc4fa771-d252-4cab-8206-4010e499b130,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d02d26889f335cc07145daf774288e41017072373a2ad2e1799e901df2c82fcf,PodSandboxId:13bf0d3baebf284ab4b93605349470c86fba288f521ecd9883c40f487d6f7b33,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727399459869060310,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2eaefd3-2123-42a2-ad32-13c6a93282cc,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a11073b6bcce8287092dff3277aa628398dc7c379a9fe1009f7d7896aa33dc6,PodSandboxId:6a1e90b4cf8b9c3f9eb60a331b1f14d7026cef1bff75bf9443781b2ddd99bee7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727399459791291283,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5kxx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 547aba0f-3d4d-4cf6-91d8-0c929d89d590,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e03bbbc7bc9d0eca3cd9c95295ea0e21c133323a0285390ad17665c61c0997a3,PodSandboxId:503fa1d74e0911b18f909e527708d2ebca4eb6cbd437b5a9456e3d5701dd9cea,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727399455046471842,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 870164161d7a14e3a59d5796b0f3f3db,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:972e11adbd7e1ca185941afc2f00d0ae997a0871ffcf9d5d5136782549973278,PodSandboxId:b531bae36a1bf8578a8bc5d7206ca96587b484af6b938aaace3e1d42a2347547,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727399455015600489,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d098f70decb9e39093456e1084cfef79,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d
79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de2f589dec797e5330936e8eb4a6bc9564a1554caebee16fd82ea27360e177e7,PodSandboxId:4d647e0feb8091518237b7fbe48b78a21b0cd13a5b1a7cf1069f1b7b60b3db0f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727399454993265849,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d0474e4d2378d81a79219e607059f81,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:343fd95487e49d0179ba9887b4859a8f1c0b02d052e7c35f6871683bced37038,PodSandboxId:f94e47bc342395a3efff38cdee5be15737927cb3ffa2d53bfbeb869e4f9570c0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727399454900199216,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ade46804942c83724776b400c5c92f0,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b77f554f46627203d94de73d5fdd23e95e65d9575aa1bd5519baff9e6ce63163,PodSandboxId:9cf826370b3a62bfb1f2e720365061e000e6656ef7f80ea3d1161d74b3122f08,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727399129173910196,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-cv7gx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 223ec194-be67-4fa7-8e79-b95dde6445d6,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3379d1c82431bb6880da5f7d200fd5033e3cfb0d51aad66dc910808404d154e7,PodSandboxId:933c6a5d0e0388d49e25f3cf62abf4ef43afa20d2285fb62668d93107c6f6faa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727399069733351203,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fxjdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc4fa771-d252-4cab-8206-4010e499b130,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02c5e4faf57e0e9a5ccc48f45ab304011b04405d198dc5bc85a74269b04fcdc0,PodSandboxId:308a304457bf517293ceab47f4bd7001ee4f43b6d659149425018a1b18310ec4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727399069660452995,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: a2eaefd3-2123-42a2-ad32-13c6a93282cc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9de6deb0a88fa5b3b6dd6eafc2ab9fb4555f20b3bdd03fcbd26ae4f4a22c9a06,PodSandboxId:85632f08d6b3c9b74bb27d1fc77410097bd5b34f4b1fee03ba4b4c3f91c8470d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727399057687956401,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qjx9d,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 2461ab02-e830-4e85-8541-651c97525d07,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51a77d274b9ce56df8fc9514cbf7cb259a438f500da3503cfdc2d9764caa2abe,PodSandboxId:26219f2ebd932a110874989db718a664ea2ca1866409268e37559be013de7263,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727399057312381457,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5kxx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 547aba0f-3d4d-4cf6-91d8
-0c929d89d590,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15018f9c92547a079d4127cb3d77d4cbdd1c8ab51fb731b1db97fed907c807c8,PodSandboxId:3fea8ffb6c67f72050256db27271f91a7e3b52acbd079953c21bc1a466c73b32,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727399046126300072,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ade46804942c83724776b400c5c92f0,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8d19f9308bbcc24c8affd654a48314a6d36cf341176d734d49d3c07f2765ebf,PodSandboxId:4edf991954d2ab25760e2e4de26cd96ec31b943311304a7809681f2e843a0e5f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727399046146062363,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 870164161d7a14e3a59d5796b0f3f3db,},Annotations:map[string]string{io.kubernetes.
container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9182a23994890788fd815a5d96b4084212911e8020da7c535bf8659a4c9343e,PodSandboxId:68566156da1f6ed4467890f036b59837188299cc35020976640e2ae229a6aa72,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727399046059128947,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d0474e4d2378d81a79219e607059f81,},Annotations:map[string]string{io.kubernetes.container.hash:
7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a3e4bfb234ad9036b6eb4888da6fae5cc31b141799963a1ad4d1ca4982e70d1,PodSandboxId:aa8bd46889272035697cb27cebe4c5613b2321e8f1ea5a12167c9b73cab70d34,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727399046013425347,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d098f70decb9e39093456e1084cfef79,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d191694b-8f2a-4346-8a13-7f6886add449 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:15:03 multinode-833343 crio[2712]: time="2024-09-27 01:15:03.907878983Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=deeb3d74-e365-4c92-998b-fb9ddeac5881 name=/runtime.v1.RuntimeService/Version
	Sep 27 01:15:03 multinode-833343 crio[2712]: time="2024-09-27 01:15:03.907968136Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=deeb3d74-e365-4c92-998b-fb9ddeac5881 name=/runtime.v1.RuntimeService/Version
	Sep 27 01:15:03 multinode-833343 crio[2712]: time="2024-09-27 01:15:03.908878518Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4687c22c-0830-4a88-b2f0-2e3bb9419dfa name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 01:15:03 multinode-833343 crio[2712]: time="2024-09-27 01:15:03.909285113Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727399703909261345,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4687c22c-0830-4a88-b2f0-2e3bb9419dfa name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 01:15:03 multinode-833343 crio[2712]: time="2024-09-27 01:15:03.909847092Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=54db9df6-d581-4434-98f2-e6e2c3ae9738 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:15:03 multinode-833343 crio[2712]: time="2024-09-27 01:15:03.909908075Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=54db9df6-d581-4434-98f2-e6e2c3ae9738 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:15:03 multinode-833343 crio[2712]: time="2024-09-27 01:15:03.910710601Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:33b19bbc348c651a15977ac698195dcfd69096687843e4d5b8273a5279639f7f,PodSandboxId:745e3599198c2331e834fbb32a2305c0cbbede8f443bbc1c9fc60b7e74d32a15,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727399493536289086,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-cv7gx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 223ec194-be67-4fa7-8e79-b95dde6445d6,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dcd36b671b63d1ecfc7cb56fd9e7c9d36f92403f64729bdc762dff2d25501e1,PodSandboxId:11b08d025386ef2257bdf36d6d621817ab3bc0c25bfc28c03442e5ae0efce54c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727399460120549553,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qjx9d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2461ab02-e830-4e85-8541-651c97525d07,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9d4cfadfab2b913bab0d55b8c808e9b6ca83e86da87ab69fea8903986beb4c5,PodSandboxId:fcb963144f5921c3632172f66fdad7d3022c2ed6dba06e15b6313db1572dacb4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727399460030293837,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fxjdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc4fa771-d252-4cab-8206-4010e499b130,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d02d26889f335cc07145daf774288e41017072373a2ad2e1799e901df2c82fcf,PodSandboxId:13bf0d3baebf284ab4b93605349470c86fba288f521ecd9883c40f487d6f7b33,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727399459869060310,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2eaefd3-2123-42a2-ad32-13c6a93282cc,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a11073b6bcce8287092dff3277aa628398dc7c379a9fe1009f7d7896aa33dc6,PodSandboxId:6a1e90b4cf8b9c3f9eb60a331b1f14d7026cef1bff75bf9443781b2ddd99bee7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727399459791291283,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5kxx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 547aba0f-3d4d-4cf6-91d8-0c929d89d590,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e03bbbc7bc9d0eca3cd9c95295ea0e21c133323a0285390ad17665c61c0997a3,PodSandboxId:503fa1d74e0911b18f909e527708d2ebca4eb6cbd437b5a9456e3d5701dd9cea,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727399455046471842,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 870164161d7a14e3a59d5796b0f3f3db,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:972e11adbd7e1ca185941afc2f00d0ae997a0871ffcf9d5d5136782549973278,PodSandboxId:b531bae36a1bf8578a8bc5d7206ca96587b484af6b938aaace3e1d42a2347547,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727399455015600489,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d098f70decb9e39093456e1084cfef79,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d
79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de2f589dec797e5330936e8eb4a6bc9564a1554caebee16fd82ea27360e177e7,PodSandboxId:4d647e0feb8091518237b7fbe48b78a21b0cd13a5b1a7cf1069f1b7b60b3db0f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727399454993265849,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d0474e4d2378d81a79219e607059f81,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:343fd95487e49d0179ba9887b4859a8f1c0b02d052e7c35f6871683bced37038,PodSandboxId:f94e47bc342395a3efff38cdee5be15737927cb3ffa2d53bfbeb869e4f9570c0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727399454900199216,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ade46804942c83724776b400c5c92f0,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b77f554f46627203d94de73d5fdd23e95e65d9575aa1bd5519baff9e6ce63163,PodSandboxId:9cf826370b3a62bfb1f2e720365061e000e6656ef7f80ea3d1161d74b3122f08,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727399129173910196,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-cv7gx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 223ec194-be67-4fa7-8e79-b95dde6445d6,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3379d1c82431bb6880da5f7d200fd5033e3cfb0d51aad66dc910808404d154e7,PodSandboxId:933c6a5d0e0388d49e25f3cf62abf4ef43afa20d2285fb62668d93107c6f6faa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727399069733351203,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fxjdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc4fa771-d252-4cab-8206-4010e499b130,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02c5e4faf57e0e9a5ccc48f45ab304011b04405d198dc5bc85a74269b04fcdc0,PodSandboxId:308a304457bf517293ceab47f4bd7001ee4f43b6d659149425018a1b18310ec4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727399069660452995,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: a2eaefd3-2123-42a2-ad32-13c6a93282cc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9de6deb0a88fa5b3b6dd6eafc2ab9fb4555f20b3bdd03fcbd26ae4f4a22c9a06,PodSandboxId:85632f08d6b3c9b74bb27d1fc77410097bd5b34f4b1fee03ba4b4c3f91c8470d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727399057687956401,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qjx9d,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 2461ab02-e830-4e85-8541-651c97525d07,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51a77d274b9ce56df8fc9514cbf7cb259a438f500da3503cfdc2d9764caa2abe,PodSandboxId:26219f2ebd932a110874989db718a664ea2ca1866409268e37559be013de7263,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727399057312381457,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5kxx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 547aba0f-3d4d-4cf6-91d8
-0c929d89d590,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15018f9c92547a079d4127cb3d77d4cbdd1c8ab51fb731b1db97fed907c807c8,PodSandboxId:3fea8ffb6c67f72050256db27271f91a7e3b52acbd079953c21bc1a466c73b32,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727399046126300072,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ade46804942c83724776b400c5c92f0,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8d19f9308bbcc24c8affd654a48314a6d36cf341176d734d49d3c07f2765ebf,PodSandboxId:4edf991954d2ab25760e2e4de26cd96ec31b943311304a7809681f2e843a0e5f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727399046146062363,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 870164161d7a14e3a59d5796b0f3f3db,},Annotations:map[string]string{io.kubernetes.
container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9182a23994890788fd815a5d96b4084212911e8020da7c535bf8659a4c9343e,PodSandboxId:68566156da1f6ed4467890f036b59837188299cc35020976640e2ae229a6aa72,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727399046059128947,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d0474e4d2378d81a79219e607059f81,},Annotations:map[string]string{io.kubernetes.container.hash:
7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a3e4bfb234ad9036b6eb4888da6fae5cc31b141799963a1ad4d1ca4982e70d1,PodSandboxId:aa8bd46889272035697cb27cebe4c5613b2321e8f1ea5a12167c9b73cab70d34,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727399046013425347,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d098f70decb9e39093456e1084cfef79,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=54db9df6-d581-4434-98f2-e6e2c3ae9738 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:15:03 multinode-833343 crio[2712]: time="2024-09-27 01:15:03.954848969Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a5cb9193-3215-4d8b-aec8-b292710f9a46 name=/runtime.v1.RuntimeService/Version
	Sep 27 01:15:03 multinode-833343 crio[2712]: time="2024-09-27 01:15:03.954944792Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a5cb9193-3215-4d8b-aec8-b292710f9a46 name=/runtime.v1.RuntimeService/Version
	Sep 27 01:15:03 multinode-833343 crio[2712]: time="2024-09-27 01:15:03.956523673Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6de50915-863e-4b10-a8b7-ac2a6d3dea87 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 01:15:03 multinode-833343 crio[2712]: time="2024-09-27 01:15:03.957168842Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727399703957143687,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6de50915-863e-4b10-a8b7-ac2a6d3dea87 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 01:15:03 multinode-833343 crio[2712]: time="2024-09-27 01:15:03.957681794Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9bdc1336-72cd-47da-9674-f0960cac5dc3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:15:03 multinode-833343 crio[2712]: time="2024-09-27 01:15:03.957744787Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9bdc1336-72cd-47da-9674-f0960cac5dc3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:15:03 multinode-833343 crio[2712]: time="2024-09-27 01:15:03.958127528Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:33b19bbc348c651a15977ac698195dcfd69096687843e4d5b8273a5279639f7f,PodSandboxId:745e3599198c2331e834fbb32a2305c0cbbede8f443bbc1c9fc60b7e74d32a15,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727399493536289086,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-cv7gx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 223ec194-be67-4fa7-8e79-b95dde6445d6,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dcd36b671b63d1ecfc7cb56fd9e7c9d36f92403f64729bdc762dff2d25501e1,PodSandboxId:11b08d025386ef2257bdf36d6d621817ab3bc0c25bfc28c03442e5ae0efce54c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727399460120549553,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qjx9d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2461ab02-e830-4e85-8541-651c97525d07,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9d4cfadfab2b913bab0d55b8c808e9b6ca83e86da87ab69fea8903986beb4c5,PodSandboxId:fcb963144f5921c3632172f66fdad7d3022c2ed6dba06e15b6313db1572dacb4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727399460030293837,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fxjdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc4fa771-d252-4cab-8206-4010e499b130,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d02d26889f335cc07145daf774288e41017072373a2ad2e1799e901df2c82fcf,PodSandboxId:13bf0d3baebf284ab4b93605349470c86fba288f521ecd9883c40f487d6f7b33,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727399459869060310,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2eaefd3-2123-42a2-ad32-13c6a93282cc,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a11073b6bcce8287092dff3277aa628398dc7c379a9fe1009f7d7896aa33dc6,PodSandboxId:6a1e90b4cf8b9c3f9eb60a331b1f14d7026cef1bff75bf9443781b2ddd99bee7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727399459791291283,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5kxx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 547aba0f-3d4d-4cf6-91d8-0c929d89d590,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e03bbbc7bc9d0eca3cd9c95295ea0e21c133323a0285390ad17665c61c0997a3,PodSandboxId:503fa1d74e0911b18f909e527708d2ebca4eb6cbd437b5a9456e3d5701dd9cea,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727399455046471842,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 870164161d7a14e3a59d5796b0f3f3db,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:972e11adbd7e1ca185941afc2f00d0ae997a0871ffcf9d5d5136782549973278,PodSandboxId:b531bae36a1bf8578a8bc5d7206ca96587b484af6b938aaace3e1d42a2347547,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727399455015600489,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d098f70decb9e39093456e1084cfef79,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d
79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de2f589dec797e5330936e8eb4a6bc9564a1554caebee16fd82ea27360e177e7,PodSandboxId:4d647e0feb8091518237b7fbe48b78a21b0cd13a5b1a7cf1069f1b7b60b3db0f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727399454993265849,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d0474e4d2378d81a79219e607059f81,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:343fd95487e49d0179ba9887b4859a8f1c0b02d052e7c35f6871683bced37038,PodSandboxId:f94e47bc342395a3efff38cdee5be15737927cb3ffa2d53bfbeb869e4f9570c0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727399454900199216,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ade46804942c83724776b400c5c92f0,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b77f554f46627203d94de73d5fdd23e95e65d9575aa1bd5519baff9e6ce63163,PodSandboxId:9cf826370b3a62bfb1f2e720365061e000e6656ef7f80ea3d1161d74b3122f08,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727399129173910196,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-cv7gx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 223ec194-be67-4fa7-8e79-b95dde6445d6,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3379d1c82431bb6880da5f7d200fd5033e3cfb0d51aad66dc910808404d154e7,PodSandboxId:933c6a5d0e0388d49e25f3cf62abf4ef43afa20d2285fb62668d93107c6f6faa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727399069733351203,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fxjdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc4fa771-d252-4cab-8206-4010e499b130,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02c5e4faf57e0e9a5ccc48f45ab304011b04405d198dc5bc85a74269b04fcdc0,PodSandboxId:308a304457bf517293ceab47f4bd7001ee4f43b6d659149425018a1b18310ec4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727399069660452995,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: a2eaefd3-2123-42a2-ad32-13c6a93282cc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9de6deb0a88fa5b3b6dd6eafc2ab9fb4555f20b3bdd03fcbd26ae4f4a22c9a06,PodSandboxId:85632f08d6b3c9b74bb27d1fc77410097bd5b34f4b1fee03ba4b4c3f91c8470d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727399057687956401,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qjx9d,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 2461ab02-e830-4e85-8541-651c97525d07,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51a77d274b9ce56df8fc9514cbf7cb259a438f500da3503cfdc2d9764caa2abe,PodSandboxId:26219f2ebd932a110874989db718a664ea2ca1866409268e37559be013de7263,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727399057312381457,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5kxx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 547aba0f-3d4d-4cf6-91d8
-0c929d89d590,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15018f9c92547a079d4127cb3d77d4cbdd1c8ab51fb731b1db97fed907c807c8,PodSandboxId:3fea8ffb6c67f72050256db27271f91a7e3b52acbd079953c21bc1a466c73b32,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727399046126300072,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ade46804942c83724776b400c5c92f0,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8d19f9308bbcc24c8affd654a48314a6d36cf341176d734d49d3c07f2765ebf,PodSandboxId:4edf991954d2ab25760e2e4de26cd96ec31b943311304a7809681f2e843a0e5f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727399046146062363,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 870164161d7a14e3a59d5796b0f3f3db,},Annotations:map[string]string{io.kubernetes.
container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9182a23994890788fd815a5d96b4084212911e8020da7c535bf8659a4c9343e,PodSandboxId:68566156da1f6ed4467890f036b59837188299cc35020976640e2ae229a6aa72,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727399046059128947,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d0474e4d2378d81a79219e607059f81,},Annotations:map[string]string{io.kubernetes.container.hash:
7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a3e4bfb234ad9036b6eb4888da6fae5cc31b141799963a1ad4d1ca4982e70d1,PodSandboxId:aa8bd46889272035697cb27cebe4c5613b2321e8f1ea5a12167c9b73cab70d34,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727399046013425347,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d098f70decb9e39093456e1084cfef79,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9bdc1336-72cd-47da-9674-f0960cac5dc3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:15:04 multinode-833343 crio[2712]: time="2024-09-27 01:15:04.001840994Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3ea541e6-72ec-401d-8e82-dda24b1e1a93 name=/runtime.v1.RuntimeService/Version
	Sep 27 01:15:04 multinode-833343 crio[2712]: time="2024-09-27 01:15:04.001923361Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3ea541e6-72ec-401d-8e82-dda24b1e1a93 name=/runtime.v1.RuntimeService/Version
	Sep 27 01:15:04 multinode-833343 crio[2712]: time="2024-09-27 01:15:04.003209977Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4dd4fbe7-1214-408f-9b75-98c446f5b97b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 01:15:04 multinode-833343 crio[2712]: time="2024-09-27 01:15:04.003605203Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727399704003583841,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4dd4fbe7-1214-408f-9b75-98c446f5b97b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 01:15:04 multinode-833343 crio[2712]: time="2024-09-27 01:15:04.004187917Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=742ad16a-4b43-457a-a708-5bace859c97a name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:15:04 multinode-833343 crio[2712]: time="2024-09-27 01:15:04.004274183Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=742ad16a-4b43-457a-a708-5bace859c97a name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:15:04 multinode-833343 crio[2712]: time="2024-09-27 01:15:04.004627258Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:33b19bbc348c651a15977ac698195dcfd69096687843e4d5b8273a5279639f7f,PodSandboxId:745e3599198c2331e834fbb32a2305c0cbbede8f443bbc1c9fc60b7e74d32a15,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727399493536289086,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-cv7gx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 223ec194-be67-4fa7-8e79-b95dde6445d6,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dcd36b671b63d1ecfc7cb56fd9e7c9d36f92403f64729bdc762dff2d25501e1,PodSandboxId:11b08d025386ef2257bdf36d6d621817ab3bc0c25bfc28c03442e5ae0efce54c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727399460120549553,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qjx9d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2461ab02-e830-4e85-8541-651c97525d07,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9d4cfadfab2b913bab0d55b8c808e9b6ca83e86da87ab69fea8903986beb4c5,PodSandboxId:fcb963144f5921c3632172f66fdad7d3022c2ed6dba06e15b6313db1572dacb4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727399460030293837,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fxjdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc4fa771-d252-4cab-8206-4010e499b130,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d02d26889f335cc07145daf774288e41017072373a2ad2e1799e901df2c82fcf,PodSandboxId:13bf0d3baebf284ab4b93605349470c86fba288f521ecd9883c40f487d6f7b33,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727399459869060310,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2eaefd3-2123-42a2-ad32-13c6a93282cc,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a11073b6bcce8287092dff3277aa628398dc7c379a9fe1009f7d7896aa33dc6,PodSandboxId:6a1e90b4cf8b9c3f9eb60a331b1f14d7026cef1bff75bf9443781b2ddd99bee7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727399459791291283,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5kxx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 547aba0f-3d4d-4cf6-91d8-0c929d89d590,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e03bbbc7bc9d0eca3cd9c95295ea0e21c133323a0285390ad17665c61c0997a3,PodSandboxId:503fa1d74e0911b18f909e527708d2ebca4eb6cbd437b5a9456e3d5701dd9cea,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727399455046471842,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 870164161d7a14e3a59d5796b0f3f3db,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:972e11adbd7e1ca185941afc2f00d0ae997a0871ffcf9d5d5136782549973278,PodSandboxId:b531bae36a1bf8578a8bc5d7206ca96587b484af6b938aaace3e1d42a2347547,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727399455015600489,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d098f70decb9e39093456e1084cfef79,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d
79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de2f589dec797e5330936e8eb4a6bc9564a1554caebee16fd82ea27360e177e7,PodSandboxId:4d647e0feb8091518237b7fbe48b78a21b0cd13a5b1a7cf1069f1b7b60b3db0f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727399454993265849,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d0474e4d2378d81a79219e607059f81,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:343fd95487e49d0179ba9887b4859a8f1c0b02d052e7c35f6871683bced37038,PodSandboxId:f94e47bc342395a3efff38cdee5be15737927cb3ffa2d53bfbeb869e4f9570c0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727399454900199216,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ade46804942c83724776b400c5c92f0,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b77f554f46627203d94de73d5fdd23e95e65d9575aa1bd5519baff9e6ce63163,PodSandboxId:9cf826370b3a62bfb1f2e720365061e000e6656ef7f80ea3d1161d74b3122f08,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727399129173910196,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-cv7gx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 223ec194-be67-4fa7-8e79-b95dde6445d6,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3379d1c82431bb6880da5f7d200fd5033e3cfb0d51aad66dc910808404d154e7,PodSandboxId:933c6a5d0e0388d49e25f3cf62abf4ef43afa20d2285fb62668d93107c6f6faa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727399069733351203,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fxjdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc4fa771-d252-4cab-8206-4010e499b130,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02c5e4faf57e0e9a5ccc48f45ab304011b04405d198dc5bc85a74269b04fcdc0,PodSandboxId:308a304457bf517293ceab47f4bd7001ee4f43b6d659149425018a1b18310ec4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727399069660452995,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: a2eaefd3-2123-42a2-ad32-13c6a93282cc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9de6deb0a88fa5b3b6dd6eafc2ab9fb4555f20b3bdd03fcbd26ae4f4a22c9a06,PodSandboxId:85632f08d6b3c9b74bb27d1fc77410097bd5b34f4b1fee03ba4b4c3f91c8470d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727399057687956401,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qjx9d,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 2461ab02-e830-4e85-8541-651c97525d07,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51a77d274b9ce56df8fc9514cbf7cb259a438f500da3503cfdc2d9764caa2abe,PodSandboxId:26219f2ebd932a110874989db718a664ea2ca1866409268e37559be013de7263,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727399057312381457,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5kxx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 547aba0f-3d4d-4cf6-91d8
-0c929d89d590,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15018f9c92547a079d4127cb3d77d4cbdd1c8ab51fb731b1db97fed907c807c8,PodSandboxId:3fea8ffb6c67f72050256db27271f91a7e3b52acbd079953c21bc1a466c73b32,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727399046126300072,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ade46804942c83724776b400c5c92f0,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8d19f9308bbcc24c8affd654a48314a6d36cf341176d734d49d3c07f2765ebf,PodSandboxId:4edf991954d2ab25760e2e4de26cd96ec31b943311304a7809681f2e843a0e5f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727399046146062363,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 870164161d7a14e3a59d5796b0f3f3db,},Annotations:map[string]string{io.kubernetes.
container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9182a23994890788fd815a5d96b4084212911e8020da7c535bf8659a4c9343e,PodSandboxId:68566156da1f6ed4467890f036b59837188299cc35020976640e2ae229a6aa72,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727399046059128947,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d0474e4d2378d81a79219e607059f81,},Annotations:map[string]string{io.kubernetes.container.hash:
7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a3e4bfb234ad9036b6eb4888da6fae5cc31b141799963a1ad4d1ca4982e70d1,PodSandboxId:aa8bd46889272035697cb27cebe4c5613b2321e8f1ea5a12167c9b73cab70d34,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727399046013425347,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-833343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d098f70decb9e39093456e1084cfef79,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=742ad16a-4b43-457a-a708-5bace859c97a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	33b19bbc348c6       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   745e3599198c2       busybox-7dff88458-cv7gx
	1dcd36b671b63       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      4 minutes ago       Running             kindnet-cni               1                   11b08d025386e       kindnet-qjx9d
	b9d4cfadfab2b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      4 minutes ago       Running             coredns                   1                   fcb963144f592       coredns-7c65d6cfc9-fxjdg
	d02d26889f335       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   13bf0d3baebf2       storage-provisioner
	9a11073b6bcce       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      4 minutes ago       Running             kube-proxy                1                   6a1e90b4cf8b9       kube-proxy-5kxx5
	e03bbbc7bc9d0       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      4 minutes ago       Running             etcd                      1                   503fa1d74e091       etcd-multinode-833343
	972e11adbd7e1       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      4 minutes ago       Running             kube-controller-manager   1                   b531bae36a1bf       kube-controller-manager-multinode-833343
	de2f589dec797       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      4 minutes ago       Running             kube-apiserver            1                   4d647e0feb809       kube-apiserver-multinode-833343
	343fd95487e49       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      4 minutes ago       Running             kube-scheduler            1                   f94e47bc34239       kube-scheduler-multinode-833343
	b77f554f46627       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   9cf826370b3a6       busybox-7dff88458-cv7gx
	3379d1c82431b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      10 minutes ago      Exited              coredns                   0                   933c6a5d0e038       coredns-7c65d6cfc9-fxjdg
	02c5e4faf57e0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   308a304457bf5       storage-provisioner
	9de6deb0a88fa       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      10 minutes ago      Exited              kindnet-cni               0                   85632f08d6b3c       kindnet-qjx9d
	51a77d274b9ce       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      10 minutes ago      Exited              kube-proxy                0                   26219f2ebd932       kube-proxy-5kxx5
	e8d19f9308bbc       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      10 minutes ago      Exited              etcd                      0                   4edf991954d2a       etcd-multinode-833343
	15018f9c92547       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      10 minutes ago      Exited              kube-scheduler            0                   3fea8ffb6c67f       kube-scheduler-multinode-833343
	a9182a2399489       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      10 minutes ago      Exited              kube-apiserver            0                   68566156da1f6       kube-apiserver-multinode-833343
	0a3e4bfb234ad       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      10 minutes ago      Exited              kube-controller-manager   0                   aa8bd46889272       kube-controller-manager-multinode-833343
	
	
	==> coredns [3379d1c82431bb6880da5f7d200fd5033e3cfb0d51aad66dc910808404d154e7] <==
	[INFO] 10.244.0.3:44200 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001609341s
	[INFO] 10.244.0.3:46637 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000795s
	[INFO] 10.244.0.3:56671 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000057136s
	[INFO] 10.244.0.3:58596 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.000941758s
	[INFO] 10.244.0.3:46449 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000072682s
	[INFO] 10.244.0.3:33359 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000062244s
	[INFO] 10.244.0.3:51304 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000056318s
	[INFO] 10.244.1.2:41603 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000185221s
	[INFO] 10.244.1.2:48232 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010224s
	[INFO] 10.244.1.2:51298 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089399s
	[INFO] 10.244.1.2:46911 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000903s
	[INFO] 10.244.0.3:57075 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000096s
	[INFO] 10.244.0.3:41622 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000078699s
	[INFO] 10.244.0.3:44116 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000052234s
	[INFO] 10.244.0.3:41626 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000041631s
	[INFO] 10.244.1.2:39521 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144383s
	[INFO] 10.244.1.2:42275 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000193497s
	[INFO] 10.244.1.2:50197 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00012607s
	[INFO] 10.244.1.2:54228 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000095643s
	[INFO] 10.244.0.3:53946 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127789s
	[INFO] 10.244.0.3:33056 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000109054s
	[INFO] 10.244.0.3:38337 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000115039s
	[INFO] 10.244.0.3:41413 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000092911s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b9d4cfadfab2b913bab0d55b8c808e9b6ca83e86da87ab69fea8903986beb4c5] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:46722 - 34863 "HINFO IN 4331348365642039683.9201062156548862189. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.026118013s
	
	
	==> describe nodes <==
	Name:               multinode-833343
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-833343
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=multinode-833343
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_27T01_04_12_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 01:04:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-833343
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 01:15:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 01:10:58 +0000   Fri, 27 Sep 2024 01:04:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 01:10:58 +0000   Fri, 27 Sep 2024 01:04:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 01:10:58 +0000   Fri, 27 Sep 2024 01:04:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 01:10:58 +0000   Fri, 27 Sep 2024 01:04:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.203
	  Hostname:    multinode-833343
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 aac41b37ee244db2a333991a2a9f4ee1
	  System UUID:                aac41b37-ee24-4db2-a333-991a2a9f4ee1
	  Boot ID:                    6577b725-ffc5-4292-b709-1f44478ec6e0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-cv7gx                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m39s
	  kube-system                 coredns-7c65d6cfc9-fxjdg                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-833343                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-qjx9d                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-833343             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-833343    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-5kxx5                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-833343             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  Starting                 4m3s                   kube-proxy       
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node multinode-833343 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node multinode-833343 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node multinode-833343 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                    node-controller  Node multinode-833343 event: Registered Node multinode-833343 in Controller
	  Normal  NodeReady                10m                    kubelet          Node multinode-833343 status is now: NodeReady
	  Normal  Starting                 4m10s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m10s (x8 over 4m10s)  kubelet          Node multinode-833343 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m10s (x8 over 4m10s)  kubelet          Node multinode-833343 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m10s (x7 over 4m10s)  kubelet          Node multinode-833343 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m3s                   node-controller  Node multinode-833343 event: Registered Node multinode-833343 in Controller
	
	
	Name:               multinode-833343-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-833343-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=multinode-833343
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_27T01_11_37_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 01:11:36 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-833343-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 01:12:38 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 27 Sep 2024 01:12:07 +0000   Fri, 27 Sep 2024 01:13:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 27 Sep 2024 01:12:07 +0000   Fri, 27 Sep 2024 01:13:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 27 Sep 2024 01:12:07 +0000   Fri, 27 Sep 2024 01:13:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 27 Sep 2024 01:12:07 +0000   Fri, 27 Sep 2024 01:13:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.123
	  Hostname:    multinode-833343-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 03999963ab6d474d86d48ee8404b9f12
	  System UUID:                03999963-ab6d-474d-86d4-8ee8404b9f12
	  Boot ID:                    b3976b1c-595a-4e7e-b7db-5ec396aed414
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-p7vbc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 kindnet-mvzbh              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-97gll           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m23s                  kube-proxy       
	  Normal  Starting                 9m56s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x2 over 10m)      kubelet          Node multinode-833343-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x2 over 10m)      kubelet          Node multinode-833343-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x2 over 10m)      kubelet          Node multinode-833343-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m41s                  kubelet          Node multinode-833343-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m28s (x2 over 3m28s)  kubelet          Node multinode-833343-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m28s (x2 over 3m28s)  kubelet          Node multinode-833343-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m28s (x2 over 3m28s)  kubelet          Node multinode-833343-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m8s                   kubelet          Node multinode-833343-m02 status is now: NodeReady
	  Normal  NodeNotReady             103s                   node-controller  Node multinode-833343-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.060082] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.168791] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.140370] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.280641] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +3.940663] systemd-fstab-generator[742]: Ignoring "noauto" option for root device
	[Sep27 01:04] systemd-fstab-generator[878]: Ignoring "noauto" option for root device
	[  +0.062136] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.509098] systemd-fstab-generator[1209]: Ignoring "noauto" option for root device
	[  +0.090756] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.582584] systemd-fstab-generator[1306]: Ignoring "noauto" option for root device
	[  +1.162317] kauditd_printk_skb: 43 callbacks suppressed
	[ +12.305304] kauditd_printk_skb: 38 callbacks suppressed
	[Sep27 01:05] kauditd_printk_skb: 14 callbacks suppressed
	[Sep27 01:10] systemd-fstab-generator[2635]: Ignoring "noauto" option for root device
	[  +0.141473] systemd-fstab-generator[2647]: Ignoring "noauto" option for root device
	[  +0.169742] systemd-fstab-generator[2662]: Ignoring "noauto" option for root device
	[  +0.157394] systemd-fstab-generator[2674]: Ignoring "noauto" option for root device
	[  +0.285752] systemd-fstab-generator[2702]: Ignoring "noauto" option for root device
	[  +4.975176] systemd-fstab-generator[2797]: Ignoring "noauto" option for root device
	[  +0.085291] kauditd_printk_skb: 100 callbacks suppressed
	[  +1.539849] systemd-fstab-generator[2920]: Ignoring "noauto" option for root device
	[  +5.683796] kauditd_printk_skb: 74 callbacks suppressed
	[Sep27 01:11] kauditd_printk_skb: 34 callbacks suppressed
	[  +6.357912] systemd-fstab-generator[3757]: Ignoring "noauto" option for root device
	[ +21.391013] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [e03bbbc7bc9d0eca3cd9c95295ea0e21c133323a0285390ad17665c61c0997a3] <==
	{"level":"info","ts":"2024-09-27T01:10:55.718533Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28dd8e6bbca035f5 switched to configuration voters=(2944666324747433461)"}
	{"level":"info","ts":"2024-09-27T01:10:55.718614Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3b4a61fb6ca7242f","local-member-id":"28dd8e6bbca035f5","added-peer-id":"28dd8e6bbca035f5","added-peer-peer-urls":["https://192.168.39.203:2380"]}
	{"level":"info","ts":"2024-09-27T01:10:55.718729Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3b4a61fb6ca7242f","local-member-id":"28dd8e6bbca035f5","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T01:10:55.718826Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T01:10:55.725053Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-27T01:10:55.725335Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"28dd8e6bbca035f5","initial-advertise-peer-urls":["https://192.168.39.203:2380"],"listen-peer-urls":["https://192.168.39.203:2380"],"advertise-client-urls":["https://192.168.39.203:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.203:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-27T01:10:55.725376Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-27T01:10:55.725516Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.203:2380"}
	{"level":"info","ts":"2024-09-27T01:10:55.725541Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.203:2380"}
	{"level":"info","ts":"2024-09-27T01:10:56.964378Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28dd8e6bbca035f5 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-27T01:10:56.964428Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28dd8e6bbca035f5 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-27T01:10:56.964476Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28dd8e6bbca035f5 received MsgPreVoteResp from 28dd8e6bbca035f5 at term 2"}
	{"level":"info","ts":"2024-09-27T01:10:56.964495Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28dd8e6bbca035f5 became candidate at term 3"}
	{"level":"info","ts":"2024-09-27T01:10:56.964501Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28dd8e6bbca035f5 received MsgVoteResp from 28dd8e6bbca035f5 at term 3"}
	{"level":"info","ts":"2024-09-27T01:10:56.964515Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28dd8e6bbca035f5 became leader at term 3"}
	{"level":"info","ts":"2024-09-27T01:10:56.964523Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 28dd8e6bbca035f5 elected leader 28dd8e6bbca035f5 at term 3"}
	{"level":"info","ts":"2024-09-27T01:10:56.970057Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"28dd8e6bbca035f5","local-member-attributes":"{Name:multinode-833343 ClientURLs:[https://192.168.39.203:2379]}","request-path":"/0/members/28dd8e6bbca035f5/attributes","cluster-id":"3b4a61fb6ca7242f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-27T01:10:56.970133Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-27T01:10:56.970388Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-27T01:10:56.970866Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-27T01:10:56.970949Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-27T01:10:56.971849Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-27T01:10:56.972371Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-27T01:10:56.972823Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.203:2379"}
	{"level":"info","ts":"2024-09-27T01:10:56.974049Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [e8d19f9308bbcc24c8affd654a48314a6d36cf341176d734d49d3c07f2765ebf] <==
	{"level":"info","ts":"2024-09-27T01:04:07.359364Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-27T01:04:07.361592Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-09-27T01:05:06.180644Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"254.349248ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-27T01:05:06.181531Z","caller":"traceutil/trace.go:171","msg":"trace[1417266234] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:463; }","duration":"255.353084ms","start":"2024-09-27T01:05:05.926170Z","end":"2024-09-27T01:05:06.181523Z","steps":["trace[1417266234] 'range keys from in-memory index tree'  (duration: 239.218259ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T01:05:06.180573Z","caller":"traceutil/trace.go:171","msg":"trace[261896963] linearizableReadLoop","detail":"{readStateIndex:484; appliedIndex:483; }","duration":"150.197465ms","start":"2024-09-27T01:05:06.030342Z","end":"2024-09-27T01:05:06.180539Z","steps":["trace[261896963] 'read index received'  (duration: 149.994136ms)","trace[261896963] 'applied index is now lower than readState.Index'  (duration: 202.747µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-27T01:05:06.181475Z","caller":"traceutil/trace.go:171","msg":"trace[1196070712] transaction","detail":"{read_only:false; response_revision:464; number_of_response:1; }","duration":"250.300115ms","start":"2024-09-27T01:05:05.931152Z","end":"2024-09-27T01:05:06.181452Z","steps":["trace[1196070712] 'process raft request'  (duration: 249.231928ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T01:05:06.261451Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"211.976739ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1116"}
	{"level":"info","ts":"2024-09-27T01:05:06.261640Z","caller":"traceutil/trace.go:171","msg":"trace[1794335798] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:464; }","duration":"212.183711ms","start":"2024-09-27T01:05:06.049437Z","end":"2024-09-27T01:05:06.261621Z","steps":["trace[1794335798] 'agreement among raft nodes before linearized reading'  (duration: 211.921091ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T01:05:06.261452Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"165.035468ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-833343-m02\" ","response":"range_response_count:1 size:2894"}
	{"level":"info","ts":"2024-09-27T01:05:06.261845Z","caller":"traceutil/trace.go:171","msg":"trace[412796641] range","detail":"{range_begin:/registry/minions/multinode-833343-m02; range_end:; response_count:1; response_revision:464; }","duration":"165.438773ms","start":"2024-09-27T01:05:06.096393Z","end":"2024-09-27T01:05:06.261831Z","steps":["trace[412796641] 'agreement among raft nodes before linearized reading'  (duration: 165.010487ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T01:05:58.742742Z","caller":"traceutil/trace.go:171","msg":"trace[572252345] linearizableReadLoop","detail":"{readStateIndex:601; appliedIndex:599; }","duration":"163.646673ms","start":"2024-09-27T01:05:58.579071Z","end":"2024-09-27T01:05:58.742718Z","steps":["trace[572252345] 'read index received'  (duration: 86.459301ms)","trace[572252345] 'applied index is now lower than readState.Index'  (duration: 77.186854ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-27T01:05:58.742917Z","caller":"traceutil/trace.go:171","msg":"trace[1696719071] transaction","detail":"{read_only:false; response_revision:568; number_of_response:1; }","duration":"202.617876ms","start":"2024-09-27T01:05:58.540291Z","end":"2024-09-27T01:05:58.742909Z","steps":["trace[1696719071] 'process raft request'  (duration: 125.231042ms)","trace[1696719071] 'compare'  (duration: 77.063264ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-27T01:05:58.743169Z","caller":"traceutil/trace.go:171","msg":"trace[1870578884] transaction","detail":"{read_only:false; response_revision:569; number_of_response:1; }","duration":"200.116959ms","start":"2024-09-27T01:05:58.543045Z","end":"2024-09-27T01:05:58.743162Z","steps":["trace[1870578884] 'process raft request'  (duration: 199.635893ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T01:05:58.743328Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"164.233806ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-27T01:05:58.743376Z","caller":"traceutil/trace.go:171","msg":"trace[1599516607] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:569; }","duration":"164.302101ms","start":"2024-09-27T01:05:58.579066Z","end":"2024-09-27T01:05:58.743368Z","steps":["trace[1599516607] 'agreement among raft nodes before linearized reading'  (duration: 164.220097ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T01:09:15.331531Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-27T01:09:15.331636Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-833343","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.203:2380"],"advertise-client-urls":["https://192.168.39.203:2379"]}
	{"level":"warn","ts":"2024-09-27T01:09:15.337072Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-27T01:09:15.337183Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-27T01:09:15.388176Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.203:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-27T01:09:15.388254Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.203:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-27T01:09:15.388334Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"28dd8e6bbca035f5","current-leader-member-id":"28dd8e6bbca035f5"}
	{"level":"info","ts":"2024-09-27T01:09:15.390612Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.203:2380"}
	{"level":"info","ts":"2024-09-27T01:09:15.390719Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.203:2380"}
	{"level":"info","ts":"2024-09-27T01:09:15.390747Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-833343","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.203:2380"],"advertise-client-urls":["https://192.168.39.203:2379"]}
	
	
	==> kernel <==
	 01:15:04 up 11 min,  0 users,  load average: 0.11, 0.11, 0.09
	Linux multinode-833343 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [1dcd36b671b63d1ecfc7cb56fd9e7c9d36f92403f64729bdc762dff2d25501e1] <==
	I0927 01:14:01.162481       1 main.go:322] Node multinode-833343-m02 has CIDR [10.244.1.0/24] 
	I0927 01:14:11.170209       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0927 01:14:11.170259       1 main.go:299] handling current node
	I0927 01:14:11.170281       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0927 01:14:11.170289       1 main.go:322] Node multinode-833343-m02 has CIDR [10.244.1.0/24] 
	I0927 01:14:21.170297       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0927 01:14:21.170455       1 main.go:299] handling current node
	I0927 01:14:21.170501       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0927 01:14:21.170520       1 main.go:322] Node multinode-833343-m02 has CIDR [10.244.1.0/24] 
	I0927 01:14:31.166285       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0927 01:14:31.166339       1 main.go:299] handling current node
	I0927 01:14:31.166359       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0927 01:14:31.166364       1 main.go:322] Node multinode-833343-m02 has CIDR [10.244.1.0/24] 
	I0927 01:14:41.170583       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0927 01:14:41.170633       1 main.go:299] handling current node
	I0927 01:14:41.170651       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0927 01:14:41.170657       1 main.go:322] Node multinode-833343-m02 has CIDR [10.244.1.0/24] 
	I0927 01:14:51.169844       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0927 01:14:51.169990       1 main.go:322] Node multinode-833343-m02 has CIDR [10.244.1.0/24] 
	I0927 01:14:51.170121       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0927 01:14:51.170143       1 main.go:299] handling current node
	I0927 01:15:01.161903       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0927 01:15:01.161983       1 main.go:299] handling current node
	I0927 01:15:01.162009       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0927 01:15:01.162015       1 main.go:322] Node multinode-833343-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [9de6deb0a88fa5b3b6dd6eafc2ab9fb4555f20b3bdd03fcbd26ae4f4a22c9a06] <==
	I0927 01:08:28.663281       1 main.go:322] Node multinode-833343-m03 has CIDR [10.244.3.0/24] 
	I0927 01:08:38.654361       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0927 01:08:38.654503       1 main.go:299] handling current node
	I0927 01:08:38.654533       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0927 01:08:38.654551       1 main.go:322] Node multinode-833343-m02 has CIDR [10.244.1.0/24] 
	I0927 01:08:38.654686       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0927 01:08:38.654706       1 main.go:322] Node multinode-833343-m03 has CIDR [10.244.3.0/24] 
	I0927 01:08:48.658862       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0927 01:08:48.658910       1 main.go:299] handling current node
	I0927 01:08:48.658942       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0927 01:08:48.658948       1 main.go:322] Node multinode-833343-m02 has CIDR [10.244.1.0/24] 
	I0927 01:08:48.659080       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0927 01:08:48.659104       1 main.go:322] Node multinode-833343-m03 has CIDR [10.244.3.0/24] 
	I0927 01:08:58.656818       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0927 01:08:58.656930       1 main.go:299] handling current node
	I0927 01:08:58.656963       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0927 01:08:58.656971       1 main.go:322] Node multinode-833343-m02 has CIDR [10.244.1.0/24] 
	I0927 01:08:58.657179       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0927 01:08:58.657213       1 main.go:322] Node multinode-833343-m03 has CIDR [10.244.3.0/24] 
	I0927 01:09:08.654310       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0927 01:09:08.654444       1 main.go:322] Node multinode-833343-m03 has CIDR [10.244.3.0/24] 
	I0927 01:09:08.654632       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0927 01:09:08.654660       1 main.go:299] handling current node
	I0927 01:09:08.654801       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0927 01:09:08.654828       1 main.go:322] Node multinode-833343-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [a9182a23994890788fd815a5d96b4084212911e8020da7c535bf8659a4c9343e] <==
	W0927 01:09:15.354815       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:09:15.354875       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:09:15.354989       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:09:15.355132       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:09:15.355190       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:09:15.355323       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:09:15.355417       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:09:15.355525       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0927 01:09:15.355668       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	W0927 01:09:15.355880       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:09:15.355992       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:09:15.356090       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:09:15.356148       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:09:15.356280       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:09:15.356378       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:09:15.356486       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:09:15.356621       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:09:15.356733       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0927 01:09:15.356870       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0927 01:09:15.357254       1 controller.go:157] Shutting down quota evaluator
	I0927 01:09:15.357283       1 controller.go:176] quota evaluator worker shutdown
	I0927 01:09:15.357709       1 controller.go:176] quota evaluator worker shutdown
	I0927 01:09:15.357742       1 controller.go:176] quota evaluator worker shutdown
	I0927 01:09:15.358989       1 controller.go:176] quota evaluator worker shutdown
	I0927 01:09:15.359029       1 controller.go:176] quota evaluator worker shutdown
	
	
	==> kube-apiserver [de2f589dec797e5330936e8eb4a6bc9564a1554caebee16fd82ea27360e177e7] <==
	I0927 01:10:58.354899       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0927 01:10:58.355085       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0927 01:10:58.366076       1 shared_informer.go:320] Caches are synced for configmaps
	I0927 01:10:58.368457       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0927 01:10:58.368639       1 aggregator.go:171] initial CRD sync complete...
	I0927 01:10:58.368672       1 autoregister_controller.go:144] Starting autoregister controller
	I0927 01:10:58.368695       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0927 01:10:58.374161       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0927 01:10:58.409256       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0927 01:10:58.418809       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0927 01:10:58.423109       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0927 01:10:58.423150       1 policy_source.go:224] refreshing policies
	E0927 01:10:58.423706       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0927 01:10:58.456138       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0927 01:10:58.456913       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0927 01:10:58.469503       1 cache.go:39] Caches are synced for autoregister controller
	I0927 01:10:58.469830       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0927 01:10:59.269484       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0927 01:11:00.785429       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0927 01:11:00.909039       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0927 01:11:00.919097       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0927 01:11:01.011659       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0927 01:11:01.019825       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0927 01:11:01.779136       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0927 01:11:02.024579       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [0a3e4bfb234ad9036b6eb4888da6fae5cc31b141799963a1ad4d1ca4982e70d1] <==
	I0927 01:06:48.622568       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m03"
	I0927 01:06:48.623048       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-833343-m02"
	I0927 01:06:49.979172       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-833343-m03\" does not exist"
	I0927 01:06:49.979252       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-833343-m02"
	I0927 01:06:49.986176       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-833343-m03" podCIDRs=["10.244.3.0/24"]
	I0927 01:06:49.986221       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m03"
	I0927 01:06:49.986484       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m03"
	I0927 01:06:49.997720       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m03"
	I0927 01:06:50.159554       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m03"
	I0927 01:06:50.512715       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m03"
	I0927 01:06:50.986434       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m03"
	I0927 01:07:00.154482       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m03"
	I0927 01:07:09.428533       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-833343-m02"
	I0927 01:07:09.428687       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m03"
	I0927 01:07:09.437840       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m03"
	I0927 01:07:10.955085       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m03"
	I0927 01:07:45.971173       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m02"
	I0927 01:07:45.972210       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-833343-m03"
	I0927 01:07:45.988481       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m02"
	I0927 01:07:46.019127       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="9.319929ms"
	I0927 01:07:46.019256       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="38.435µs"
	I0927 01:07:51.022623       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m03"
	I0927 01:07:51.048287       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m03"
	I0927 01:07:51.089507       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m02"
	I0927 01:08:01.168060       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m03"
	
	
	==> kube-controller-manager [972e11adbd7e1ca185941afc2f00d0ae997a0871ffcf9d5d5136782549973278] <==
	I0927 01:12:15.493680       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-833343-m02"
	I0927 01:12:15.511931       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-833343-m03" podCIDRs=["10.244.2.0/24"]
	I0927 01:12:15.512148       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m03"
	I0927 01:12:15.512266       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m03"
	I0927 01:12:15.778704       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m03"
	I0927 01:12:16.114716       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m03"
	I0927 01:12:16.763621       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m03"
	I0927 01:12:25.523414       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m03"
	I0927 01:12:34.079978       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m03"
	I0927 01:12:34.080284       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-833343-m02"
	I0927 01:12:34.093545       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m03"
	I0927 01:12:36.745336       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m03"
	I0927 01:12:38.745872       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m03"
	I0927 01:12:38.759229       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m03"
	I0927 01:12:39.313400       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-833343-m02"
	I0927 01:12:39.313704       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m03"
	I0927 01:13:21.679290       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-7r2gm"
	I0927 01:13:21.712061       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-7r2gm"
	I0927 01:13:21.712128       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-6khw5"
	I0927 01:13:21.745436       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-6khw5"
	I0927 01:13:21.769349       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m02"
	I0927 01:13:21.790158       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m02"
	I0927 01:13:21.796699       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="14.117065ms"
	I0927 01:13:21.797113       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="93.93µs"
	I0927 01:13:26.902268       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-833343-m02"
	
	
	==> kube-proxy [51a77d274b9ce56df8fc9514cbf7cb259a438f500da3503cfdc2d9764caa2abe] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0927 01:04:17.487929       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0927 01:04:17.496838       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.203"]
	E0927 01:04:17.496990       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0927 01:04:17.531212       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0927 01:04:17.531263       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0927 01:04:17.531287       1 server_linux.go:169] "Using iptables Proxier"
	I0927 01:04:17.533715       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0927 01:04:17.534070       1 server.go:483] "Version info" version="v1.31.1"
	I0927 01:04:17.534099       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 01:04:17.535621       1 config.go:199] "Starting service config controller"
	I0927 01:04:17.535661       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0927 01:04:17.535693       1 config.go:105] "Starting endpoint slice config controller"
	I0927 01:04:17.535696       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0927 01:04:17.536291       1 config.go:328] "Starting node config controller"
	I0927 01:04:17.536319       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0927 01:04:17.636159       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0927 01:04:17.636176       1 shared_informer.go:320] Caches are synced for service config
	I0927 01:04:17.636488       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [9a11073b6bcce8287092dff3277aa628398dc7c379a9fe1009f7d7896aa33dc6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0927 01:11:00.191870       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0927 01:11:00.245266       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.203"]
	E0927 01:11:00.246014       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0927 01:11:00.415823       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0927 01:11:00.415871       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0927 01:11:00.415928       1 server_linux.go:169] "Using iptables Proxier"
	I0927 01:11:00.422469       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0927 01:11:00.422729       1 server.go:483] "Version info" version="v1.31.1"
	I0927 01:11:00.422740       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 01:11:00.428745       1 config.go:199] "Starting service config controller"
	I0927 01:11:00.443848       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0927 01:11:00.443365       1 config.go:105] "Starting endpoint slice config controller"
	I0927 01:11:00.443920       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0927 01:11:00.444263       1 config.go:328] "Starting node config controller"
	I0927 01:11:00.444287       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0927 01:11:00.544827       1 shared_informer.go:320] Caches are synced for node config
	I0927 01:11:00.544875       1 shared_informer.go:320] Caches are synced for service config
	I0927 01:11:00.544914       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [15018f9c92547a079d4127cb3d77d4cbdd1c8ab51fb731b1db97fed907c807c8] <==
	W0927 01:04:08.935544       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0927 01:04:08.935718       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	E0927 01:04:08.935740       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 01:04:09.738277       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0927 01:04:09.738387       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 01:04:09.767255       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0927 01:04:09.767360       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 01:04:09.869364       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0927 01:04:09.869488       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0927 01:04:09.870702       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0927 01:04:09.870746       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0927 01:04:09.976735       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0927 01:04:09.976848       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0927 01:04:10.031665       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0927 01:04:10.031718       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 01:04:10.114879       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0927 01:04:10.114939       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 01:04:10.228979       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0927 01:04:10.229029       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 01:04:10.235543       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0927 01:04:10.235600       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 01:04:10.460293       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0927 01:04:10.460388       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0927 01:04:13.328886       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0927 01:09:15.330270       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [343fd95487e49d0179ba9887b4859a8f1c0b02d052e7c35f6871683bced37038] <==
	I0927 01:10:56.223424       1 serving.go:386] Generated self-signed cert in-memory
	W0927 01:10:58.323187       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0927 01:10:58.323233       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0927 01:10:58.323245       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0927 01:10:58.323256       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0927 01:10:58.421874       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0927 01:10:58.422440       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 01:10:58.430312       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0927 01:10:58.430539       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0927 01:10:58.430599       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0927 01:10:58.430640       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0927 01:10:58.535079       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 27 01:13:54 multinode-833343 kubelet[2927]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 27 01:13:54 multinode-833343 kubelet[2927]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 27 01:13:54 multinode-833343 kubelet[2927]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 27 01:13:54 multinode-833343 kubelet[2927]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 27 01:13:54 multinode-833343 kubelet[2927]: E0927 01:13:54.362426    2927 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727399634361531460,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:13:54 multinode-833343 kubelet[2927]: E0927 01:13:54.362459    2927 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727399634361531460,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:14:04 multinode-833343 kubelet[2927]: E0927 01:14:04.364334    2927 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727399644363890535,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:14:04 multinode-833343 kubelet[2927]: E0927 01:14:04.364661    2927 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727399644363890535,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:14:14 multinode-833343 kubelet[2927]: E0927 01:14:14.370102    2927 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727399654369558189,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:14:14 multinode-833343 kubelet[2927]: E0927 01:14:14.370148    2927 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727399654369558189,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:14:24 multinode-833343 kubelet[2927]: E0927 01:14:24.371828    2927 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727399664371443010,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:14:24 multinode-833343 kubelet[2927]: E0927 01:14:24.371870    2927 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727399664371443010,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:14:34 multinode-833343 kubelet[2927]: E0927 01:14:34.373655    2927 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727399674373248039,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:14:34 multinode-833343 kubelet[2927]: E0927 01:14:34.374215    2927 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727399674373248039,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:14:44 multinode-833343 kubelet[2927]: E0927 01:14:44.376675    2927 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727399684375951207,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:14:44 multinode-833343 kubelet[2927]: E0927 01:14:44.376738    2927 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727399684375951207,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:14:54 multinode-833343 kubelet[2927]: E0927 01:14:54.339939    2927 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 27 01:14:54 multinode-833343 kubelet[2927]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 27 01:14:54 multinode-833343 kubelet[2927]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 27 01:14:54 multinode-833343 kubelet[2927]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 27 01:14:54 multinode-833343 kubelet[2927]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 27 01:14:54 multinode-833343 kubelet[2927]: E0927 01:14:54.378988    2927 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727399694378385800,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:14:54 multinode-833343 kubelet[2927]: E0927 01:14:54.379013    2927 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727399694378385800,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:15:04 multinode-833343 kubelet[2927]: E0927 01:15:04.381972    2927 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727399704380299083,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:15:04 multinode-833343 kubelet[2927]: E0927 01:15:04.382000    2927 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727399704380299083,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0927 01:15:03.603887   53558 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19711-14935/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-833343 -n multinode-833343
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-833343 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (144.81s)
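Note: the "bufio.Scanner: token too long" error in the stderr above is Go's bufio.Scanner hitting its default 64 KiB per-line limit while reading lastStart.txt. A minimal sketch of reading such a file with an enlarged scanner buffer (an illustration only, not minikube's actual logs.go implementation; the file path is a stand-in):

	package main
	
	import (
		"bufio"
		"fmt"
		"log"
		"os"
	)
	
	func main() {
		// Hypothetical path; the report references .minikube/logs/lastStart.txt.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()
	
		sc := bufio.NewScanner(f)
		// bufio.MaxScanTokenSize defaults to 64 KiB; any line longer than that
		// produces the "bufio.Scanner: token too long" error seen above.
		sc.Buffer(make([]byte, 0, 64*1024), 16*1024*1024) // allow lines up to 16 MiB
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			log.Fatal(err)
		}
	}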

                                                
                                    
x
+
TestPreload (275.97s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-949963 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0927 01:20:10.487267   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/functional-774677/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-949963 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m13.084941724s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-949963 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-949963 image pull gcr.io/k8s-minikube/busybox: (3.055333706s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-949963
E0927 01:22:44.313542   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:23:01.245246   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-949963: exit status 82 (2m0.458160685s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-949963"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-949963 failed: exit status 82
panic.go:629: *** TestPreload FAILED at 2024-09-27 01:23:09.067732814 +0000 UTC m=+4105.253341060
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-949963 -n test-preload-949963
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-949963 -n test-preload-949963: exit status 3 (18.463132697s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0927 01:23:27.527650   56864 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.169:22: connect: no route to host
	E0927 01:23:27.527673   56864 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.169:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-949963" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-949963" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-949963
--- FAIL: TestPreload (275.97s)
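Note: this failure turns on exit codes: "minikube stop" returned 82 alongside the GUEST_STOP_TIMEOUT message, and the later status check returned 3. A minimal, hypothetical Go sketch (not the test harness itself) of how a caller can capture such an exit code from a subprocess:

	package main
	
	import (
		"errors"
		"fmt"
		"os/exec"
	)
	
	func main() {
		// Hypothetical invocation mirroring the command shown in the report.
		cmd := exec.Command("out/minikube-linux-amd64", "stop", "-p", "test-preload-949963")
		err := cmd.Run()
	
		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("stop succeeded (exit 0)")
		case errors.As(err, &exitErr):
			// In this report, code 82 accompanied the GUEST_STOP_TIMEOUT reason.
			fmt.Printf("stop exited with code %d\n", exitErr.ExitCode())
		default:
			fmt.Println("failed to start command:", err)
		}
	}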

                                                
                                    
x
+
TestKubernetesUpgrade (372.25s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-637447 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-637447 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m55.826130573s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-637447] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19711
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19711-14935/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-14935/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-637447" primary control-plane node in "kubernetes-upgrade-637447" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0927 01:25:26.720432   57978 out.go:345] Setting OutFile to fd 1 ...
	I0927 01:25:26.720611   57978 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 01:25:26.720624   57978 out.go:358] Setting ErrFile to fd 2...
	I0927 01:25:26.720629   57978 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 01:25:26.720803   57978 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-14935/.minikube/bin
	I0927 01:25:26.721352   57978 out.go:352] Setting JSON to false
	I0927 01:25:26.722247   57978 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7672,"bootTime":1727392655,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 01:25:26.722352   57978 start.go:139] virtualization: kvm guest
	I0927 01:25:26.724584   57978 out.go:177] * [kubernetes-upgrade-637447] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0927 01:25:26.726307   57978 notify.go:220] Checking for updates...
	I0927 01:25:26.727011   57978 out.go:177]   - MINIKUBE_LOCATION=19711
	I0927 01:25:26.729177   57978 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 01:25:26.730807   57978 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 01:25:26.733262   57978 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-14935/.minikube
	I0927 01:25:26.734950   57978 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0927 01:25:26.736312   57978 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 01:25:26.737636   57978 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 01:25:26.771768   57978 out.go:177] * Using the kvm2 driver based on user configuration
	I0927 01:25:26.773013   57978 start.go:297] selected driver: kvm2
	I0927 01:25:26.773024   57978 start.go:901] validating driver "kvm2" against <nil>
	I0927 01:25:26.773035   57978 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 01:25:26.773949   57978 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 01:25:29.806859   57978 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19711-14935/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0927 01:25:29.822988   57978 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0927 01:25:29.823064   57978 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 01:25:29.823416   57978 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0927 01:25:29.823447   57978 cni.go:84] Creating CNI manager for ""
	I0927 01:25:29.823503   57978 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:25:29.823517   57978 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0927 01:25:29.823585   57978 start.go:340] cluster config:
	{Name:kubernetes-upgrade-637447 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-637447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 01:25:29.823684   57978 iso.go:125] acquiring lock: {Name:mkc202a14fbe20838e31e7efc444c4f65351f9ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 01:25:29.825963   57978 out.go:177] * Starting "kubernetes-upgrade-637447" primary control-plane node in "kubernetes-upgrade-637447" cluster
	I0927 01:25:29.827017   57978 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0927 01:25:29.827055   57978 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0927 01:25:29.827064   57978 cache.go:56] Caching tarball of preloaded images
	I0927 01:25:29.827139   57978 preload.go:172] Found /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0927 01:25:29.827148   57978 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0927 01:25:29.827468   57978 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kubernetes-upgrade-637447/config.json ...
	I0927 01:25:29.827490   57978 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kubernetes-upgrade-637447/config.json: {Name:mk36717b7d5c89dd3040d4c639563c5a6b10e4ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:25:29.827612   57978 start.go:360] acquireMachinesLock for kubernetes-upgrade-637447: {Name:mkef866a3f2eb57063758ef06140cca3a2e40b18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 01:25:51.800061   57978 start.go:364] duration metric: took 21.972413868s to acquireMachinesLock for "kubernetes-upgrade-637447"
	I0927 01:25:51.800140   57978 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-637447 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-637447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 01:25:51.800257   57978 start.go:125] createHost starting for "" (driver="kvm2")
	I0927 01:25:51.801940   57978 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0927 01:25:51.802127   57978 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:25:51.802189   57978 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:25:51.820480   57978 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34769
	I0927 01:25:51.820901   57978 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:25:51.821503   57978 main.go:141] libmachine: Using API Version  1
	I0927 01:25:51.821526   57978 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:25:51.821851   57978 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:25:51.822069   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetMachineName
	I0927 01:25:51.822243   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .DriverName
	I0927 01:25:51.822406   57978 start.go:159] libmachine.API.Create for "kubernetes-upgrade-637447" (driver="kvm2")
	I0927 01:25:51.822435   57978 client.go:168] LocalClient.Create starting
	I0927 01:25:51.822475   57978 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem
	I0927 01:25:51.822504   57978 main.go:141] libmachine: Decoding PEM data...
	I0927 01:25:51.822519   57978 main.go:141] libmachine: Parsing certificate...
	I0927 01:25:51.822563   57978 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem
	I0927 01:25:51.822580   57978 main.go:141] libmachine: Decoding PEM data...
	I0927 01:25:51.822591   57978 main.go:141] libmachine: Parsing certificate...
	I0927 01:25:51.822611   57978 main.go:141] libmachine: Running pre-create checks...
	I0927 01:25:51.822619   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .PreCreateCheck
	I0927 01:25:51.822944   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetConfigRaw
	I0927 01:25:51.823371   57978 main.go:141] libmachine: Creating machine...
	I0927 01:25:51.823384   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .Create
	I0927 01:25:51.823497   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Creating KVM machine...
	I0927 01:25:51.824588   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | found existing default KVM network
	I0927 01:25:51.825407   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | I0927 01:25:51.825243   58356 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:e4:06:d7} reservation:<nil>}
	I0927 01:25:51.826122   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | I0927 01:25:51.826051   58356 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015b90}
	I0927 01:25:51.826167   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | created network xml: 
	I0927 01:25:51.826187   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | <network>
	I0927 01:25:51.826196   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG |   <name>mk-kubernetes-upgrade-637447</name>
	I0927 01:25:51.826208   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG |   <dns enable='no'/>
	I0927 01:25:51.826214   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG |   
	I0927 01:25:51.826222   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0927 01:25:51.826232   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG |     <dhcp>
	I0927 01:25:51.826239   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0927 01:25:51.826248   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG |     </dhcp>
	I0927 01:25:51.826257   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG |   </ip>
	I0927 01:25:51.826266   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG |   
	I0927 01:25:51.826276   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | </network>
	I0927 01:25:51.826287   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | 
	I0927 01:25:51.830722   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | trying to create private KVM network mk-kubernetes-upgrade-637447 192.168.50.0/24...
	I0927 01:25:51.901120   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | private KVM network mk-kubernetes-upgrade-637447 192.168.50.0/24 created
	I0927 01:25:51.901160   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Setting up store path in /home/jenkins/minikube-integration/19711-14935/.minikube/machines/kubernetes-upgrade-637447 ...
	I0927 01:25:51.901174   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | I0927 01:25:51.901112   58356 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19711-14935/.minikube
	I0927 01:25:51.901192   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Building disk image from file:///home/jenkins/minikube-integration/19711-14935/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0927 01:25:51.901294   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Downloading /home/jenkins/minikube-integration/19711-14935/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19711-14935/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0927 01:25:52.137374   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | I0927 01:25:52.137243   58356 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/kubernetes-upgrade-637447/id_rsa...
	I0927 01:25:52.287359   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | I0927 01:25:52.287193   58356 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/kubernetes-upgrade-637447/kubernetes-upgrade-637447.rawdisk...
	I0927 01:25:52.287389   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | Writing magic tar header
	I0927 01:25:52.287422   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | Writing SSH key tar header
	I0927 01:25:52.287437   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | I0927 01:25:52.287343   58356 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19711-14935/.minikube/machines/kubernetes-upgrade-637447 ...
	I0927 01:25:52.287457   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/kubernetes-upgrade-637447
	I0927 01:25:52.287484   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935/.minikube/machines
	I0927 01:25:52.287507   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935/.minikube
	I0927 01:25:52.287525   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935/.minikube/machines/kubernetes-upgrade-637447 (perms=drwx------)
	I0927 01:25:52.287556   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935/.minikube/machines (perms=drwxr-xr-x)
	I0927 01:25:52.287570   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935/.minikube (perms=drwxr-xr-x)
	I0927 01:25:52.287583   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935
	I0927 01:25:52.287597   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0927 01:25:52.287608   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | Checking permissions on dir: /home/jenkins
	I0927 01:25:52.287620   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | Checking permissions on dir: /home
	I0927 01:25:52.287630   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | Skipping /home - not owner
	I0927 01:25:52.287646   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935 (perms=drwxrwxr-x)
	I0927 01:25:52.287664   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0927 01:25:52.287677   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0927 01:25:52.287685   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Creating domain...
	I0927 01:25:52.288829   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) define libvirt domain using xml: 
	I0927 01:25:52.288889   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) <domain type='kvm'>
	I0927 01:25:52.288901   57978 main.go:141] libmachine: (kubernetes-upgrade-637447)   <name>kubernetes-upgrade-637447</name>
	I0927 01:25:52.288920   57978 main.go:141] libmachine: (kubernetes-upgrade-637447)   <memory unit='MiB'>2200</memory>
	I0927 01:25:52.288929   57978 main.go:141] libmachine: (kubernetes-upgrade-637447)   <vcpu>2</vcpu>
	I0927 01:25:52.288938   57978 main.go:141] libmachine: (kubernetes-upgrade-637447)   <features>
	I0927 01:25:52.288945   57978 main.go:141] libmachine: (kubernetes-upgrade-637447)     <acpi/>
	I0927 01:25:52.288954   57978 main.go:141] libmachine: (kubernetes-upgrade-637447)     <apic/>
	I0927 01:25:52.288966   57978 main.go:141] libmachine: (kubernetes-upgrade-637447)     <pae/>
	I0927 01:25:52.288978   57978 main.go:141] libmachine: (kubernetes-upgrade-637447)     
	I0927 01:25:52.288989   57978 main.go:141] libmachine: (kubernetes-upgrade-637447)   </features>
	I0927 01:25:52.289003   57978 main.go:141] libmachine: (kubernetes-upgrade-637447)   <cpu mode='host-passthrough'>
	I0927 01:25:52.289011   57978 main.go:141] libmachine: (kubernetes-upgrade-637447)   
	I0927 01:25:52.289020   57978 main.go:141] libmachine: (kubernetes-upgrade-637447)   </cpu>
	I0927 01:25:52.289028   57978 main.go:141] libmachine: (kubernetes-upgrade-637447)   <os>
	I0927 01:25:52.289038   57978 main.go:141] libmachine: (kubernetes-upgrade-637447)     <type>hvm</type>
	I0927 01:25:52.289045   57978 main.go:141] libmachine: (kubernetes-upgrade-637447)     <boot dev='cdrom'/>
	I0927 01:25:52.289055   57978 main.go:141] libmachine: (kubernetes-upgrade-637447)     <boot dev='hd'/>
	I0927 01:25:52.289066   57978 main.go:141] libmachine: (kubernetes-upgrade-637447)     <bootmenu enable='no'/>
	I0927 01:25:52.289079   57978 main.go:141] libmachine: (kubernetes-upgrade-637447)   </os>
	I0927 01:25:52.289096   57978 main.go:141] libmachine: (kubernetes-upgrade-637447)   <devices>
	I0927 01:25:52.289107   57978 main.go:141] libmachine: (kubernetes-upgrade-637447)     <disk type='file' device='cdrom'>
	I0927 01:25:52.289124   57978 main.go:141] libmachine: (kubernetes-upgrade-637447)       <source file='/home/jenkins/minikube-integration/19711-14935/.minikube/machines/kubernetes-upgrade-637447/boot2docker.iso'/>
	I0927 01:25:52.289142   57978 main.go:141] libmachine: (kubernetes-upgrade-637447)       <target dev='hdc' bus='scsi'/>
	I0927 01:25:52.289173   57978 main.go:141] libmachine: (kubernetes-upgrade-637447)       <readonly/>
	I0927 01:25:52.289203   57978 main.go:141] libmachine: (kubernetes-upgrade-637447)     </disk>
	I0927 01:25:52.289214   57978 main.go:141] libmachine: (kubernetes-upgrade-637447)     <disk type='file' device='disk'>
	I0927 01:25:52.289224   57978 main.go:141] libmachine: (kubernetes-upgrade-637447)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0927 01:25:52.289242   57978 main.go:141] libmachine: (kubernetes-upgrade-637447)       <source file='/home/jenkins/minikube-integration/19711-14935/.minikube/machines/kubernetes-upgrade-637447/kubernetes-upgrade-637447.rawdisk'/>
	I0927 01:25:52.289253   57978 main.go:141] libmachine: (kubernetes-upgrade-637447)       <target dev='hda' bus='virtio'/>
	I0927 01:25:52.289265   57978 main.go:141] libmachine: (kubernetes-upgrade-637447)     </disk>
	I0927 01:25:52.289281   57978 main.go:141] libmachine: (kubernetes-upgrade-637447)     <interface type='network'>
	I0927 01:25:52.289295   57978 main.go:141] libmachine: (kubernetes-upgrade-637447)       <source network='mk-kubernetes-upgrade-637447'/>
	I0927 01:25:52.289306   57978 main.go:141] libmachine: (kubernetes-upgrade-637447)       <model type='virtio'/>
	I0927 01:25:52.289319   57978 main.go:141] libmachine: (kubernetes-upgrade-637447)     </interface>
	I0927 01:25:52.289330   57978 main.go:141] libmachine: (kubernetes-upgrade-637447)     <interface type='network'>
	I0927 01:25:52.289343   57978 main.go:141] libmachine: (kubernetes-upgrade-637447)       <source network='default'/>
	I0927 01:25:52.289357   57978 main.go:141] libmachine: (kubernetes-upgrade-637447)       <model type='virtio'/>
	I0927 01:25:52.289369   57978 main.go:141] libmachine: (kubernetes-upgrade-637447)     </interface>
	I0927 01:25:52.289379   57978 main.go:141] libmachine: (kubernetes-upgrade-637447)     <serial type='pty'>
	I0927 01:25:52.289389   57978 main.go:141] libmachine: (kubernetes-upgrade-637447)       <target port='0'/>
	I0927 01:25:52.289404   57978 main.go:141] libmachine: (kubernetes-upgrade-637447)     </serial>
	I0927 01:25:52.289418   57978 main.go:141] libmachine: (kubernetes-upgrade-637447)     <console type='pty'>
	I0927 01:25:52.289432   57978 main.go:141] libmachine: (kubernetes-upgrade-637447)       <target type='serial' port='0'/>
	I0927 01:25:52.289444   57978 main.go:141] libmachine: (kubernetes-upgrade-637447)     </console>
	I0927 01:25:52.289455   57978 main.go:141] libmachine: (kubernetes-upgrade-637447)     <rng model='virtio'>
	I0927 01:25:52.289476   57978 main.go:141] libmachine: (kubernetes-upgrade-637447)       <backend model='random'>/dev/random</backend>
	I0927 01:25:52.289485   57978 main.go:141] libmachine: (kubernetes-upgrade-637447)     </rng>
	I0927 01:25:52.289513   57978 main.go:141] libmachine: (kubernetes-upgrade-637447)     
	I0927 01:25:52.289531   57978 main.go:141] libmachine: (kubernetes-upgrade-637447)     
	I0927 01:25:52.289538   57978 main.go:141] libmachine: (kubernetes-upgrade-637447)   </devices>
	I0927 01:25:52.289545   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) </domain>
	I0927 01:25:52.289552   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) 
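
The XML dumped above is the persistent libvirt domain definition the kvm2 driver hands to libvirtd before booting the VM. Below is a minimal sketch of that define-then-create flow using the standard libvirt Go bindings; the file name, error handling and overall program are illustrative, not minikube's actual code.

package main

import (
	"log"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// Same URI the driver uses (KVMQemuURI:qemu:///system).
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer conn.Close()

	// domain.xml would hold a definition like the one dumped above.
	xml, err := os.ReadFile("domain.xml")
	if err != nil {
		log.Fatalf("read xml: %v", err)
	}

	// "define libvirt domain using xml", then "Creating domain..." boots it.
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		log.Fatalf("define: %v", err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		log.Fatalf("start: %v", err)
	}
}
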
	I0927 01:25:52.293395   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | domain kubernetes-upgrade-637447 has defined MAC address 52:54:00:a4:01:ef in network default
	I0927 01:25:52.294007   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Ensuring networks are active...
	I0927 01:25:52.294030   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | domain kubernetes-upgrade-637447 has defined MAC address 52:54:00:65:88:c9 in network mk-kubernetes-upgrade-637447
	I0927 01:25:52.294790   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Ensuring network default is active
	I0927 01:25:52.295187   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Ensuring network mk-kubernetes-upgrade-637447 is active
	I0927 01:25:52.295721   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Getting domain xml...
	I0927 01:25:52.296428   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Creating domain...
	I0927 01:25:53.580033   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Waiting to get IP...
	I0927 01:25:53.581086   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | domain kubernetes-upgrade-637447 has defined MAC address 52:54:00:65:88:c9 in network mk-kubernetes-upgrade-637447
	I0927 01:25:53.581566   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | unable to find current IP address of domain kubernetes-upgrade-637447 in network mk-kubernetes-upgrade-637447
	I0927 01:25:53.581595   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | I0927 01:25:53.581551   58356 retry.go:31] will retry after 205.431146ms: waiting for machine to come up
	I0927 01:25:53.789192   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | domain kubernetes-upgrade-637447 has defined MAC address 52:54:00:65:88:c9 in network mk-kubernetes-upgrade-637447
	I0927 01:25:53.789763   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | unable to find current IP address of domain kubernetes-upgrade-637447 in network mk-kubernetes-upgrade-637447
	I0927 01:25:53.789796   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | I0927 01:25:53.789718   58356 retry.go:31] will retry after 277.994683ms: waiting for machine to come up
	I0927 01:25:54.069389   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | domain kubernetes-upgrade-637447 has defined MAC address 52:54:00:65:88:c9 in network mk-kubernetes-upgrade-637447
	I0927 01:25:54.069865   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | unable to find current IP address of domain kubernetes-upgrade-637447 in network mk-kubernetes-upgrade-637447
	I0927 01:25:54.069887   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | I0927 01:25:54.069820   58356 retry.go:31] will retry after 321.957427ms: waiting for machine to come up
	I0927 01:25:54.393114   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | domain kubernetes-upgrade-637447 has defined MAC address 52:54:00:65:88:c9 in network mk-kubernetes-upgrade-637447
	I0927 01:25:54.393774   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | unable to find current IP address of domain kubernetes-upgrade-637447 in network mk-kubernetes-upgrade-637447
	I0927 01:25:54.393805   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | I0927 01:25:54.393729   58356 retry.go:31] will retry after 573.292306ms: waiting for machine to come up
	I0927 01:25:54.968363   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | domain kubernetes-upgrade-637447 has defined MAC address 52:54:00:65:88:c9 in network mk-kubernetes-upgrade-637447
	I0927 01:25:54.968732   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | unable to find current IP address of domain kubernetes-upgrade-637447 in network mk-kubernetes-upgrade-637447
	I0927 01:25:54.968756   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | I0927 01:25:54.968711   58356 retry.go:31] will retry after 643.073237ms: waiting for machine to come up
	I0927 01:25:55.612971   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | domain kubernetes-upgrade-637447 has defined MAC address 52:54:00:65:88:c9 in network mk-kubernetes-upgrade-637447
	I0927 01:25:55.613585   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | unable to find current IP address of domain kubernetes-upgrade-637447 in network mk-kubernetes-upgrade-637447
	I0927 01:25:55.613618   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | I0927 01:25:55.613500   58356 retry.go:31] will retry after 810.575454ms: waiting for machine to come up
	I0927 01:25:56.425501   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | domain kubernetes-upgrade-637447 has defined MAC address 52:54:00:65:88:c9 in network mk-kubernetes-upgrade-637447
	I0927 01:25:56.426061   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | unable to find current IP address of domain kubernetes-upgrade-637447 in network mk-kubernetes-upgrade-637447
	I0927 01:25:56.426101   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | I0927 01:25:56.425995   58356 retry.go:31] will retry after 844.014323ms: waiting for machine to come up
	I0927 01:25:57.272076   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | domain kubernetes-upgrade-637447 has defined MAC address 52:54:00:65:88:c9 in network mk-kubernetes-upgrade-637447
	I0927 01:25:57.272598   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | unable to find current IP address of domain kubernetes-upgrade-637447 in network mk-kubernetes-upgrade-637447
	I0927 01:25:57.272626   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | I0927 01:25:57.272540   58356 retry.go:31] will retry after 1.197478444s: waiting for machine to come up
	I0927 01:25:58.471145   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | domain kubernetes-upgrade-637447 has defined MAC address 52:54:00:65:88:c9 in network mk-kubernetes-upgrade-637447
	I0927 01:25:58.471581   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | unable to find current IP address of domain kubernetes-upgrade-637447 in network mk-kubernetes-upgrade-637447
	I0927 01:25:58.471602   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | I0927 01:25:58.471530   58356 retry.go:31] will retry after 1.275695865s: waiting for machine to come up
	I0927 01:25:59.748338   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | domain kubernetes-upgrade-637447 has defined MAC address 52:54:00:65:88:c9 in network mk-kubernetes-upgrade-637447
	I0927 01:25:59.748737   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | unable to find current IP address of domain kubernetes-upgrade-637447 in network mk-kubernetes-upgrade-637447
	I0927 01:25:59.748765   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | I0927 01:25:59.748692   58356 retry.go:31] will retry after 2.025531899s: waiting for machine to come up
	I0927 01:26:01.776556   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | domain kubernetes-upgrade-637447 has defined MAC address 52:54:00:65:88:c9 in network mk-kubernetes-upgrade-637447
	I0927 01:26:01.777089   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | unable to find current IP address of domain kubernetes-upgrade-637447 in network mk-kubernetes-upgrade-637447
	I0927 01:26:01.777121   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | I0927 01:26:01.777044   58356 retry.go:31] will retry after 2.800755292s: waiting for machine to come up
	I0927 01:26:04.580628   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | domain kubernetes-upgrade-637447 has defined MAC address 52:54:00:65:88:c9 in network mk-kubernetes-upgrade-637447
	I0927 01:26:04.581020   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | unable to find current IP address of domain kubernetes-upgrade-637447 in network mk-kubernetes-upgrade-637447
	I0927 01:26:04.581049   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | I0927 01:26:04.580969   58356 retry.go:31] will retry after 2.618784101s: waiting for machine to come up
	I0927 01:26:07.201512   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | domain kubernetes-upgrade-637447 has defined MAC address 52:54:00:65:88:c9 in network mk-kubernetes-upgrade-637447
	I0927 01:26:07.202040   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | unable to find current IP address of domain kubernetes-upgrade-637447 in network mk-kubernetes-upgrade-637447
	I0927 01:26:07.202066   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | I0927 01:26:07.201981   58356 retry.go:31] will retry after 3.544892961s: waiting for machine to come up
	I0927 01:26:10.748237   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | domain kubernetes-upgrade-637447 has defined MAC address 52:54:00:65:88:c9 in network mk-kubernetes-upgrade-637447
	I0927 01:26:10.748612   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | unable to find current IP address of domain kubernetes-upgrade-637447 in network mk-kubernetes-upgrade-637447
	I0927 01:26:10.748666   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | I0927 01:26:10.748602   58356 retry.go:31] will retry after 4.331792447s: waiting for machine to come up
	I0927 01:26:15.084487   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | domain kubernetes-upgrade-637447 has defined MAC address 52:54:00:65:88:c9 in network mk-kubernetes-upgrade-637447
	I0927 01:26:15.084961   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Found IP for machine: 192.168.50.182
	I0927 01:26:15.084988   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | domain kubernetes-upgrade-637447 has current primary IP address 192.168.50.182 and MAC address 52:54:00:65:88:c9 in network mk-kubernetes-upgrade-637447
	I0927 01:26:15.084997   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Reserving static IP address...
	I0927 01:26:15.085370   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-637447", mac: "52:54:00:65:88:c9", ip: "192.168.50.182"} in network mk-kubernetes-upgrade-637447
	I0927 01:26:15.158842   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Reserved static IP address: 192.168.50.182
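
The run of "unable to find current IP ... will retry after ..." lines above is a polling loop with a growing, jittered delay (the retry.go:31 helper). A minimal sketch of that pattern follows; the function name, timings and dummy lookup are hypothetical, used only to illustrate the backoff shape seen in the log.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address, sleeping a little longer
// (plus jitter) between attempts, like the "will retry after ..." lines above.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if backoff < 5*time.Second {
			backoff *= 2
		}
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	// Dummy lookup that never finds a lease, just to exercise the loop.
	ip, err := waitForIP(func() (string, error) { return "", errors.New("no lease yet") }, 3*time.Second)
	fmt.Println(ip, err)
}
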
	I0927 01:26:15.158872   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | Getting to WaitForSSH function...
	I0927 01:26:15.158881   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Waiting for SSH to be available...
	I0927 01:26:15.161665   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | domain kubernetes-upgrade-637447 has defined MAC address 52:54:00:65:88:c9 in network mk-kubernetes-upgrade-637447
	I0927 01:26:15.162114   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:88:c9", ip: ""} in network mk-kubernetes-upgrade-637447: {Iface:virbr2 ExpiryTime:2024-09-27 02:26:07 +0000 UTC Type:0 Mac:52:54:00:65:88:c9 Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:minikube Clientid:01:52:54:00:65:88:c9}
	I0927 01:26:15.162146   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | domain kubernetes-upgrade-637447 has defined IP address 192.168.50.182 and MAC address 52:54:00:65:88:c9 in network mk-kubernetes-upgrade-637447
	I0927 01:26:15.162328   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | Using SSH client type: external
	I0927 01:26:15.162354   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | Using SSH private key: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/kubernetes-upgrade-637447/id_rsa (-rw-------)
	I0927 01:26:15.162384   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.182 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19711-14935/.minikube/machines/kubernetes-upgrade-637447/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 01:26:15.162394   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | About to run SSH command:
	I0927 01:26:15.162404   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | exit 0
	I0927 01:26:15.287389   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | SSH cmd err, output: <nil>: 
	I0927 01:26:15.287696   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) KVM machine creation complete!
	I0927 01:26:15.288012   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetConfigRaw
	I0927 01:26:15.288582   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .DriverName
	I0927 01:26:15.288801   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .DriverName
	I0927 01:26:15.288958   57978 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0927 01:26:15.288974   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetState
	I0927 01:26:15.290374   57978 main.go:141] libmachine: Detecting operating system of created instance...
	I0927 01:26:15.290390   57978 main.go:141] libmachine: Waiting for SSH to be available...
	I0927 01:26:15.290398   57978 main.go:141] libmachine: Getting to WaitForSSH function...
	I0927 01:26:15.290406   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetSSHHostname
	I0927 01:26:15.292672   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | domain kubernetes-upgrade-637447 has defined MAC address 52:54:00:65:88:c9 in network mk-kubernetes-upgrade-637447
	I0927 01:26:15.293018   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:88:c9", ip: ""} in network mk-kubernetes-upgrade-637447: {Iface:virbr2 ExpiryTime:2024-09-27 02:26:07 +0000 UTC Type:0 Mac:52:54:00:65:88:c9 Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-637447 Clientid:01:52:54:00:65:88:c9}
	I0927 01:26:15.293045   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | domain kubernetes-upgrade-637447 has defined IP address 192.168.50.182 and MAC address 52:54:00:65:88:c9 in network mk-kubernetes-upgrade-637447
	I0927 01:26:15.293191   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetSSHPort
	I0927 01:26:15.293345   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetSSHKeyPath
	I0927 01:26:15.293475   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetSSHKeyPath
	I0927 01:26:15.293589   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetSSHUsername
	I0927 01:26:15.293748   57978 main.go:141] libmachine: Using SSH client type: native
	I0927 01:26:15.293951   57978 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.182 22 <nil> <nil>}
	I0927 01:26:15.293965   57978 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0927 01:26:15.407558   57978 main.go:141] libmachine: SSH cmd err, output: <nil>: 
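
The "exit 0" probe above is how the driver decides SSH is usable: connect with the generated machine key and run a no-op command. A rough equivalent with golang.org/x/crypto/ssh is sketched below; the host, user and key path are taken from the log, but the program itself is illustrative, not the driver's real code.

package main

import (
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19711-14935/.minikube/machines/kubernetes-upgrade-637447/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no in the log
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", "192.168.50.182:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	// "exit 0" succeeds as soon as sshd accepts the key and runs a shell.
	if err := session.Run("exit 0"); err != nil {
		log.Fatal(err)
	}
	log.Println("SSH is available")
}
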
	I0927 01:26:15.407584   57978 main.go:141] libmachine: Detecting the provisioner...
	I0927 01:26:15.407595   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetSSHHostname
	I0927 01:26:15.410410   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | domain kubernetes-upgrade-637447 has defined MAC address 52:54:00:65:88:c9 in network mk-kubernetes-upgrade-637447
	I0927 01:26:15.410787   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:88:c9", ip: ""} in network mk-kubernetes-upgrade-637447: {Iface:virbr2 ExpiryTime:2024-09-27 02:26:07 +0000 UTC Type:0 Mac:52:54:00:65:88:c9 Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-637447 Clientid:01:52:54:00:65:88:c9}
	I0927 01:26:15.410823   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | domain kubernetes-upgrade-637447 has defined IP address 192.168.50.182 and MAC address 52:54:00:65:88:c9 in network mk-kubernetes-upgrade-637447
	I0927 01:26:15.411007   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetSSHPort
	I0927 01:26:15.411218   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetSSHKeyPath
	I0927 01:26:15.411421   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetSSHKeyPath
	I0927 01:26:15.411666   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetSSHUsername
	I0927 01:26:15.411854   57978 main.go:141] libmachine: Using SSH client type: native
	I0927 01:26:15.412041   57978 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.182 22 <nil> <nil>}
	I0927 01:26:15.412054   57978 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0927 01:26:15.516132   57978 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0927 01:26:15.516233   57978 main.go:141] libmachine: found compatible host: buildroot
	I0927 01:26:15.516247   57978 main.go:141] libmachine: Provisioning with buildroot...
	I0927 01:26:15.516263   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetMachineName
	I0927 01:26:15.516549   57978 buildroot.go:166] provisioning hostname "kubernetes-upgrade-637447"
	I0927 01:26:15.516581   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetMachineName
	I0927 01:26:15.516778   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetSSHHostname
	I0927 01:26:15.519806   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | domain kubernetes-upgrade-637447 has defined MAC address 52:54:00:65:88:c9 in network mk-kubernetes-upgrade-637447
	I0927 01:26:15.520190   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:88:c9", ip: ""} in network mk-kubernetes-upgrade-637447: {Iface:virbr2 ExpiryTime:2024-09-27 02:26:07 +0000 UTC Type:0 Mac:52:54:00:65:88:c9 Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-637447 Clientid:01:52:54:00:65:88:c9}
	I0927 01:26:15.520215   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | domain kubernetes-upgrade-637447 has defined IP address 192.168.50.182 and MAC address 52:54:00:65:88:c9 in network mk-kubernetes-upgrade-637447
	I0927 01:26:15.520432   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetSSHPort
	I0927 01:26:15.520650   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetSSHKeyPath
	I0927 01:26:15.520834   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetSSHKeyPath
	I0927 01:26:15.520968   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetSSHUsername
	I0927 01:26:15.521163   57978 main.go:141] libmachine: Using SSH client type: native
	I0927 01:26:15.521372   57978 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.182 22 <nil> <nil>}
	I0927 01:26:15.521393   57978 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-637447 && echo "kubernetes-upgrade-637447" | sudo tee /etc/hostname
	I0927 01:26:15.644000   57978 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-637447
	
	I0927 01:26:15.644030   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetSSHHostname
	I0927 01:26:15.646908   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | domain kubernetes-upgrade-637447 has defined MAC address 52:54:00:65:88:c9 in network mk-kubernetes-upgrade-637447
	I0927 01:26:15.647316   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:88:c9", ip: ""} in network mk-kubernetes-upgrade-637447: {Iface:virbr2 ExpiryTime:2024-09-27 02:26:07 +0000 UTC Type:0 Mac:52:54:00:65:88:c9 Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-637447 Clientid:01:52:54:00:65:88:c9}
	I0927 01:26:15.647359   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | domain kubernetes-upgrade-637447 has defined IP address 192.168.50.182 and MAC address 52:54:00:65:88:c9 in network mk-kubernetes-upgrade-637447
	I0927 01:26:15.647587   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetSSHPort
	I0927 01:26:15.647799   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetSSHKeyPath
	I0927 01:26:15.648000   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetSSHKeyPath
	I0927 01:26:15.648197   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetSSHUsername
	I0927 01:26:15.648394   57978 main.go:141] libmachine: Using SSH client type: native
	I0927 01:26:15.648565   57978 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.182 22 <nil> <nil>}
	I0927 01:26:15.648582   57978 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-637447' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-637447/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-637447' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 01:26:15.773920   57978 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 01:26:15.773953   57978 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19711-14935/.minikube CaCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19711-14935/.minikube}
	I0927 01:26:15.773979   57978 buildroot.go:174] setting up certificates
	I0927 01:26:15.773993   57978 provision.go:84] configureAuth start
	I0927 01:26:15.774007   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetMachineName
	I0927 01:26:15.774326   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetIP
	I0927 01:26:15.777300   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | domain kubernetes-upgrade-637447 has defined MAC address 52:54:00:65:88:c9 in network mk-kubernetes-upgrade-637447
	I0927 01:26:15.777690   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:88:c9", ip: ""} in network mk-kubernetes-upgrade-637447: {Iface:virbr2 ExpiryTime:2024-09-27 02:26:07 +0000 UTC Type:0 Mac:52:54:00:65:88:c9 Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-637447 Clientid:01:52:54:00:65:88:c9}
	I0927 01:26:15.777719   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | domain kubernetes-upgrade-637447 has defined IP address 192.168.50.182 and MAC address 52:54:00:65:88:c9 in network mk-kubernetes-upgrade-637447
	I0927 01:26:15.777842   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetSSHHostname
	I0927 01:26:15.780397   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | domain kubernetes-upgrade-637447 has defined MAC address 52:54:00:65:88:c9 in network mk-kubernetes-upgrade-637447
	I0927 01:26:15.780763   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:88:c9", ip: ""} in network mk-kubernetes-upgrade-637447: {Iface:virbr2 ExpiryTime:2024-09-27 02:26:07 +0000 UTC Type:0 Mac:52:54:00:65:88:c9 Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-637447 Clientid:01:52:54:00:65:88:c9}
	I0927 01:26:15.780790   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | domain kubernetes-upgrade-637447 has defined IP address 192.168.50.182 and MAC address 52:54:00:65:88:c9 in network mk-kubernetes-upgrade-637447
	I0927 01:26:15.780911   57978 provision.go:143] copyHostCerts
	I0927 01:26:15.780971   57978 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem, removing ...
	I0927 01:26:15.780984   57978 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem
	I0927 01:26:15.781046   57978 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem (1078 bytes)
	I0927 01:26:15.781200   57978 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem, removing ...
	I0927 01:26:15.781209   57978 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem
	I0927 01:26:15.781241   57978 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem (1123 bytes)
	I0927 01:26:15.781319   57978 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem, removing ...
	I0927 01:26:15.781326   57978 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem
	I0927 01:26:15.781351   57978 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem (1675 bytes)
	I0927 01:26:15.781406   57978 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-637447 san=[127.0.0.1 192.168.50.182 kubernetes-upgrade-637447 localhost minikube]
	I0927 01:26:15.848224   57978 provision.go:177] copyRemoteCerts
	I0927 01:26:15.848282   57978 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 01:26:15.848310   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetSSHHostname
	I0927 01:26:15.851217   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | domain kubernetes-upgrade-637447 has defined MAC address 52:54:00:65:88:c9 in network mk-kubernetes-upgrade-637447
	I0927 01:26:15.851654   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:88:c9", ip: ""} in network mk-kubernetes-upgrade-637447: {Iface:virbr2 ExpiryTime:2024-09-27 02:26:07 +0000 UTC Type:0 Mac:52:54:00:65:88:c9 Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-637447 Clientid:01:52:54:00:65:88:c9}
	I0927 01:26:15.851679   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | domain kubernetes-upgrade-637447 has defined IP address 192.168.50.182 and MAC address 52:54:00:65:88:c9 in network mk-kubernetes-upgrade-637447
	I0927 01:26:15.851866   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetSSHPort
	I0927 01:26:15.852053   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetSSHKeyPath
	I0927 01:26:15.852237   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetSSHUsername
	I0927 01:26:15.852397   57978 sshutil.go:53] new ssh client: &{IP:192.168.50.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/kubernetes-upgrade-637447/id_rsa Username:docker}
	I0927 01:26:15.938116   57978 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0927 01:26:15.969574   57978 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0927 01:26:15.997114   57978 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0927 01:26:16.024952   57978 provision.go:87] duration metric: took 250.946443ms to configureAuth
	I0927 01:26:16.024977   57978 buildroot.go:189] setting minikube options for container-runtime
	I0927 01:26:16.025165   57978 config.go:182] Loaded profile config "kubernetes-upgrade-637447": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0927 01:26:16.025257   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetSSHHostname
	I0927 01:26:16.027589   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | domain kubernetes-upgrade-637447 has defined MAC address 52:54:00:65:88:c9 in network mk-kubernetes-upgrade-637447
	I0927 01:26:16.027973   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:88:c9", ip: ""} in network mk-kubernetes-upgrade-637447: {Iface:virbr2 ExpiryTime:2024-09-27 02:26:07 +0000 UTC Type:0 Mac:52:54:00:65:88:c9 Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-637447 Clientid:01:52:54:00:65:88:c9}
	I0927 01:26:16.028006   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | domain kubernetes-upgrade-637447 has defined IP address 192.168.50.182 and MAC address 52:54:00:65:88:c9 in network mk-kubernetes-upgrade-637447
	I0927 01:26:16.028188   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetSSHPort
	I0927 01:26:16.028366   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetSSHKeyPath
	I0927 01:26:16.028573   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetSSHKeyPath
	I0927 01:26:16.028724   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetSSHUsername
	I0927 01:26:16.028864   57978 main.go:141] libmachine: Using SSH client type: native
	I0927 01:26:16.029056   57978 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.182 22 <nil> <nil>}
	I0927 01:26:16.029076   57978 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 01:26:16.252928   57978 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 01:26:16.252960   57978 main.go:141] libmachine: Checking connection to Docker...
	I0927 01:26:16.252971   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetURL
	I0927 01:26:16.254522   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | Using libvirt version 6000000
	I0927 01:26:16.257001   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | domain kubernetes-upgrade-637447 has defined MAC address 52:54:00:65:88:c9 in network mk-kubernetes-upgrade-637447
	I0927 01:26:16.257405   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:88:c9", ip: ""} in network mk-kubernetes-upgrade-637447: {Iface:virbr2 ExpiryTime:2024-09-27 02:26:07 +0000 UTC Type:0 Mac:52:54:00:65:88:c9 Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-637447 Clientid:01:52:54:00:65:88:c9}
	I0927 01:26:16.257448   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | domain kubernetes-upgrade-637447 has defined IP address 192.168.50.182 and MAC address 52:54:00:65:88:c9 in network mk-kubernetes-upgrade-637447
	I0927 01:26:16.257644   57978 main.go:141] libmachine: Docker is up and running!
	I0927 01:26:16.257661   57978 main.go:141] libmachine: Reticulating splines...
	I0927 01:26:16.257670   57978 client.go:171] duration metric: took 24.435227076s to LocalClient.Create
	I0927 01:26:16.257696   57978 start.go:167] duration metric: took 24.435290209s to libmachine.API.Create "kubernetes-upgrade-637447"
	I0927 01:26:16.257706   57978 start.go:293] postStartSetup for "kubernetes-upgrade-637447" (driver="kvm2")
	I0927 01:26:16.257714   57978 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 01:26:16.257731   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .DriverName
	I0927 01:26:16.257987   57978 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 01:26:16.258018   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetSSHHostname
	I0927 01:26:16.260365   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | domain kubernetes-upgrade-637447 has defined MAC address 52:54:00:65:88:c9 in network mk-kubernetes-upgrade-637447
	I0927 01:26:16.260613   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:88:c9", ip: ""} in network mk-kubernetes-upgrade-637447: {Iface:virbr2 ExpiryTime:2024-09-27 02:26:07 +0000 UTC Type:0 Mac:52:54:00:65:88:c9 Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-637447 Clientid:01:52:54:00:65:88:c9}
	I0927 01:26:16.260646   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | domain kubernetes-upgrade-637447 has defined IP address 192.168.50.182 and MAC address 52:54:00:65:88:c9 in network mk-kubernetes-upgrade-637447
	I0927 01:26:16.260813   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetSSHPort
	I0927 01:26:16.261007   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetSSHKeyPath
	I0927 01:26:16.261187   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetSSHUsername
	I0927 01:26:16.261332   57978 sshutil.go:53] new ssh client: &{IP:192.168.50.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/kubernetes-upgrade-637447/id_rsa Username:docker}
	I0927 01:26:16.346003   57978 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 01:26:16.350487   57978 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 01:26:16.350514   57978 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/addons for local assets ...
	I0927 01:26:16.350578   57978 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/files for local assets ...
	I0927 01:26:16.350672   57978 filesync.go:149] local asset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> 221382.pem in /etc/ssl/certs
	I0927 01:26:16.350799   57978 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 01:26:16.360893   57978 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /etc/ssl/certs/221382.pem (1708 bytes)
	I0927 01:26:16.389302   57978 start.go:296] duration metric: took 131.584771ms for postStartSetup
	I0927 01:26:16.389362   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetConfigRaw
	I0927 01:26:16.389953   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetIP
	I0927 01:26:16.392732   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | domain kubernetes-upgrade-637447 has defined MAC address 52:54:00:65:88:c9 in network mk-kubernetes-upgrade-637447
	I0927 01:26:16.393053   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:88:c9", ip: ""} in network mk-kubernetes-upgrade-637447: {Iface:virbr2 ExpiryTime:2024-09-27 02:26:07 +0000 UTC Type:0 Mac:52:54:00:65:88:c9 Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-637447 Clientid:01:52:54:00:65:88:c9}
	I0927 01:26:16.393096   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | domain kubernetes-upgrade-637447 has defined IP address 192.168.50.182 and MAC address 52:54:00:65:88:c9 in network mk-kubernetes-upgrade-637447
	I0927 01:26:16.393290   57978 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kubernetes-upgrade-637447/config.json ...
	I0927 01:26:16.393463   57978 start.go:128] duration metric: took 24.593196165s to createHost
	I0927 01:26:16.393483   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetSSHHostname
	I0927 01:26:16.395643   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | domain kubernetes-upgrade-637447 has defined MAC address 52:54:00:65:88:c9 in network mk-kubernetes-upgrade-637447
	I0927 01:26:16.396069   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:88:c9", ip: ""} in network mk-kubernetes-upgrade-637447: {Iface:virbr2 ExpiryTime:2024-09-27 02:26:07 +0000 UTC Type:0 Mac:52:54:00:65:88:c9 Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-637447 Clientid:01:52:54:00:65:88:c9}
	I0927 01:26:16.396098   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | domain kubernetes-upgrade-637447 has defined IP address 192.168.50.182 and MAC address 52:54:00:65:88:c9 in network mk-kubernetes-upgrade-637447
	I0927 01:26:16.396212   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetSSHPort
	I0927 01:26:16.396378   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetSSHKeyPath
	I0927 01:26:16.396517   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetSSHKeyPath
	I0927 01:26:16.396676   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetSSHUsername
	I0927 01:26:16.396836   57978 main.go:141] libmachine: Using SSH client type: native
	I0927 01:26:16.397013   57978 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.182 22 <nil> <nil>}
	I0927 01:26:16.397028   57978 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 01:26:16.499965   57978 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727400376.471974887
	
	I0927 01:26:16.499987   57978 fix.go:216] guest clock: 1727400376.471974887
	I0927 01:26:16.499994   57978 fix.go:229] Guest: 2024-09-27 01:26:16.471974887 +0000 UTC Remote: 2024-09-27 01:26:16.393472569 +0000 UTC m=+49.715331391 (delta=78.502318ms)
	I0927 01:26:16.500038   57978 fix.go:200] guest clock delta is within tolerance: 78.502318ms
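
The clock check above runs `date +%s.%N` on the guest, parses the result, and compares it with the local time (fix.go) to decide whether the guest clock needs a resync. A small sketch of that delta computation follows, reusing the value from the log; the 2s tolerance is an assumption for illustration only.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	// Output of `date +%s.%N` on the guest, as logged above.
	const guestOut = "1727400376.471974887"
	parts := strings.SplitN(guestOut, ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec)

	// Absolute delta between the guest clock and this host's clock.
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta: %v\n", delta)
	if delta > 2*time.Second {
		fmt.Println("outside tolerance; the guest clock would need a resync")
	}
}
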
	I0927 01:26:16.500042   57978 start.go:83] releasing machines lock for "kubernetes-upgrade-637447", held for 24.699947s
	I0927 01:26:16.500068   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .DriverName
	I0927 01:26:16.500343   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetIP
	I0927 01:26:16.503178   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | domain kubernetes-upgrade-637447 has defined MAC address 52:54:00:65:88:c9 in network mk-kubernetes-upgrade-637447
	I0927 01:26:16.503557   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:88:c9", ip: ""} in network mk-kubernetes-upgrade-637447: {Iface:virbr2 ExpiryTime:2024-09-27 02:26:07 +0000 UTC Type:0 Mac:52:54:00:65:88:c9 Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-637447 Clientid:01:52:54:00:65:88:c9}
	I0927 01:26:16.503579   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | domain kubernetes-upgrade-637447 has defined IP address 192.168.50.182 and MAC address 52:54:00:65:88:c9 in network mk-kubernetes-upgrade-637447
	I0927 01:26:16.503760   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .DriverName
	I0927 01:26:16.504249   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .DriverName
	I0927 01:26:16.504473   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .DriverName
	I0927 01:26:16.504598   57978 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 01:26:16.504640   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetSSHHostname
	I0927 01:26:16.504705   57978 ssh_runner.go:195] Run: cat /version.json
	I0927 01:26:16.504727   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetSSHHostname
	I0927 01:26:16.507265   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | domain kubernetes-upgrade-637447 has defined MAC address 52:54:00:65:88:c9 in network mk-kubernetes-upgrade-637447
	I0927 01:26:16.507709   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:88:c9", ip: ""} in network mk-kubernetes-upgrade-637447: {Iface:virbr2 ExpiryTime:2024-09-27 02:26:07 +0000 UTC Type:0 Mac:52:54:00:65:88:c9 Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-637447 Clientid:01:52:54:00:65:88:c9}
	I0927 01:26:16.507732   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | domain kubernetes-upgrade-637447 has defined IP address 192.168.50.182 and MAC address 52:54:00:65:88:c9 in network mk-kubernetes-upgrade-637447
	I0927 01:26:16.507753   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | domain kubernetes-upgrade-637447 has defined MAC address 52:54:00:65:88:c9 in network mk-kubernetes-upgrade-637447
	I0927 01:26:16.507852   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetSSHPort
	I0927 01:26:16.508010   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetSSHKeyPath
	I0927 01:26:16.508167   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetSSHUsername
	I0927 01:26:16.508205   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:88:c9", ip: ""} in network mk-kubernetes-upgrade-637447: {Iface:virbr2 ExpiryTime:2024-09-27 02:26:07 +0000 UTC Type:0 Mac:52:54:00:65:88:c9 Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-637447 Clientid:01:52:54:00:65:88:c9}
	I0927 01:26:16.508233   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | domain kubernetes-upgrade-637447 has defined IP address 192.168.50.182 and MAC address 52:54:00:65:88:c9 in network mk-kubernetes-upgrade-637447
	I0927 01:26:16.508310   57978 sshutil.go:53] new ssh client: &{IP:192.168.50.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/kubernetes-upgrade-637447/id_rsa Username:docker}
	I0927 01:26:16.508405   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetSSHPort
	I0927 01:26:16.508554   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetSSHKeyPath
	I0927 01:26:16.508708   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetSSHUsername
	I0927 01:26:16.508871   57978 sshutil.go:53] new ssh client: &{IP:192.168.50.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/kubernetes-upgrade-637447/id_rsa Username:docker}
	I0927 01:26:16.611519   57978 ssh_runner.go:195] Run: systemctl --version
	I0927 01:26:16.618321   57978 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 01:26:16.784684   57978 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 01:26:16.791780   57978 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 01:26:16.791849   57978 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 01:26:16.814174   57978 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0927 01:26:16.814202   57978 start.go:495] detecting cgroup driver to use...
	I0927 01:26:16.814268   57978 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 01:26:16.830795   57978 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 01:26:16.846237   57978 docker.go:217] disabling cri-docker service (if available) ...
	I0927 01:26:16.846292   57978 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 01:26:16.861809   57978 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 01:26:16.876421   57978 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 01:26:16.997658   57978 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 01:26:17.165358   57978 docker.go:233] disabling docker service ...
	I0927 01:26:17.165412   57978 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 01:26:17.186910   57978 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 01:26:17.200550   57978 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 01:26:17.346570   57978 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 01:26:17.486224   57978 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 01:26:17.500278   57978 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 01:26:17.519571   57978 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0927 01:26:17.519636   57978 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:26:17.529868   57978 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 01:26:17.529930   57978 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:26:17.540367   57978 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:26:17.550625   57978 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:26:17.561648   57978 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 01:26:17.573116   57978 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 01:26:17.582848   57978 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0927 01:26:17.582922   57978 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0927 01:26:17.595743   57978 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 01:26:17.605713   57978 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:26:17.739271   57978 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0927 01:26:17.834483   57978 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 01:26:17.834559   57978 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 01:26:17.839721   57978 start.go:563] Will wait 60s for crictl version
	I0927 01:26:17.839773   57978 ssh_runner.go:195] Run: which crictl
	I0927 01:26:17.843721   57978 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 01:26:17.890910   57978 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0927 01:26:17.890997   57978 ssh_runner.go:195] Run: crio --version
	I0927 01:26:17.921410   57978 ssh_runner.go:195] Run: crio --version
	I0927 01:26:18.030572   57978 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0927 01:26:18.072786   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetIP
	I0927 01:26:18.076371   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | domain kubernetes-upgrade-637447 has defined MAC address 52:54:00:65:88:c9 in network mk-kubernetes-upgrade-637447
	I0927 01:26:18.076764   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:88:c9", ip: ""} in network mk-kubernetes-upgrade-637447: {Iface:virbr2 ExpiryTime:2024-09-27 02:26:07 +0000 UTC Type:0 Mac:52:54:00:65:88:c9 Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-637447 Clientid:01:52:54:00:65:88:c9}
	I0927 01:26:18.076793   57978 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | domain kubernetes-upgrade-637447 has defined IP address 192.168.50.182 and MAC address 52:54:00:65:88:c9 in network mk-kubernetes-upgrade-637447
	I0927 01:26:18.077035   57978 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0927 01:26:18.082918   57978 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 01:26:18.096986   57978 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-637447 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-637447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.182 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 01:26:18.097139   57978 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0927 01:26:18.097201   57978 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 01:26:18.133948   57978 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0927 01:26:18.134013   57978 ssh_runner.go:195] Run: which lz4
	I0927 01:26:18.138558   57978 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0927 01:26:18.142923   57978 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0927 01:26:18.142980   57978 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0927 01:26:19.911067   57978 crio.go:462] duration metric: took 1.772542351s to copy over tarball
	I0927 01:26:19.911153   57978 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0927 01:26:22.578253   57978 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.667072171s)
	I0927 01:26:22.578285   57978 crio.go:469] duration metric: took 2.667184536s to extract the tarball
	I0927 01:26:22.578294   57978 ssh_runner.go:146] rm: /preloaded.tar.lz4
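The sequence above is the preload restore path: stat the tarball on the guest, scp it over when missing, unpack it into /var with lz4 decompression, then delete it. A rough Go sketch of the extract-and-clean-up step, assuming the lz4 binary is on PATH and the process runs with root privileges:

// preload_extract.go - sketch of the preload restore logged above: unpack the
// cached image tarball into /var, then remove it.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"
	if _, err := os.Stat(tarball); err != nil {
		log.Fatalf("preload tarball not found: %v", err)
	}
	// Mirrors: tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("extract failed: %v", err)
	}
	if err := os.Remove(tarball); err != nil {
		log.Printf("cleanup failed: %v", err)
	}
}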
	I0927 01:26:22.621761   57978 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 01:26:22.664737   57978 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0927 01:26:22.664770   57978 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0927 01:26:22.664856   57978 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0927 01:26:22.664908   57978 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0927 01:26:22.664900   57978 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 01:26:22.664917   57978 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0927 01:26:22.664932   57978 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0927 01:26:22.664858   57978 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0927 01:26:22.664853   57978 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:26:22.664910   57978 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0927 01:26:22.666589   57978 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0927 01:26:22.666577   57978 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 01:26:22.666602   57978 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0927 01:26:22.666635   57978 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0927 01:26:22.666577   57978 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0927 01:26:22.666692   57978 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:26:22.666773   57978 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0927 01:26:22.666801   57978 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0927 01:26:22.849960   57978 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0927 01:26:22.906590   57978 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0927 01:26:22.906640   57978 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0927 01:26:22.906685   57978 ssh_runner.go:195] Run: which crictl
	I0927 01:26:22.910808   57978 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0927 01:26:22.929603   57978 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0927 01:26:22.939632   57978 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0927 01:26:22.958833   57978 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0927 01:26:23.015384   57978 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0927 01:26:23.015427   57978 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0927 01:26:23.015476   57978 ssh_runner.go:195] Run: which crictl
	I0927 01:26:23.015487   57978 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0927 01:26:23.015518   57978 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0927 01:26:23.015560   57978 ssh_runner.go:195] Run: which crictl
	I0927 01:26:23.028186   57978 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0927 01:26:23.028236   57978 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0927 01:26:23.030182   57978 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0927 01:26:23.045734   57978 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0927 01:26:23.046452   57978 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0927 01:26:23.062617   57978 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0927 01:26:23.064831   57978 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 01:26:23.113016   57978 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0927 01:26:23.119390   57978 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0927 01:26:23.132415   57978 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0927 01:26:23.255187   57978 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0927 01:26:23.255234   57978 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0927 01:26:23.255239   57978 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0927 01:26:23.255269   57978 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0927 01:26:23.255285   57978 ssh_runner.go:195] Run: which crictl
	I0927 01:26:23.255326   57978 ssh_runner.go:195] Run: which crictl
	I0927 01:26:23.255400   57978 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0927 01:26:23.255420   57978 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0927 01:26:23.255448   57978 ssh_runner.go:195] Run: which crictl
	I0927 01:26:23.263121   57978 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0927 01:26:23.263161   57978 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 01:26:23.263175   57978 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0927 01:26:23.263193   57978 ssh_runner.go:195] Run: which crictl
	I0927 01:26:23.263235   57978 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0927 01:26:23.264497   57978 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0927 01:26:23.264846   57978 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0927 01:26:23.265969   57978 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0927 01:26:23.363099   57978 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0927 01:26:23.363161   57978 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 01:26:23.363285   57978 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0927 01:26:23.383178   57978 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0927 01:26:23.383290   57978 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0927 01:26:23.383236   57978 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0927 01:26:23.433427   57978 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 01:26:23.489176   57978 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0927 01:26:23.489244   57978 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0927 01:26:23.489299   57978 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0927 01:26:23.536780   57978 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 01:26:23.578431   57978 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0927 01:26:23.595290   57978 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0927 01:26:23.595353   57978 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0927 01:26:23.615146   57978 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0927 01:26:23.887711   57978 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:26:24.030316   57978 cache_images.go:92] duration metric: took 1.365524521s to LoadCachedImages
	W0927 01:26:24.030408   57978 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
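Each "needs transfer" decision above follows the same shape: ask the runtime for the image ID, compare it with the digest expected for the cached image, and if it differs remove the stale tag and reload from the local cache. A hedged Go sketch of that check; the expected ID is the one quoted in the log for etcd, and the cache-load step is only indicated in a comment:

// image_check.go - sketch of the needs-transfer decision logged above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imagePresent reports whether the runtime already has the image at the expected ID.
func imagePresent(image, wantID string) bool {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err != nil {
		return false // image not present in the runtime at all
	}
	return strings.TrimSpace(string(out)) == wantID
}

func main() {
	image := "registry.k8s.io/etcd:3.4.13-0"
	wantID := "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" // from the log above
	if imagePresent(image, wantID) {
		fmt.Println(image, "already present, skipping transfer")
		return
	}
	fmt.Println(image, "needs transfer: removing stale copy before loading it from the cache")
	_ = exec.Command("sudo", "crictl", "rmi", image).Run()
	// ...then copy the cached image tarball over and import it into the runtime.
}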
	I0927 01:26:24.030436   57978 kubeadm.go:934] updating node { 192.168.50.182 8443 v1.20.0 crio true true} ...
	I0927 01:26:24.030559   57978 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-637447 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.182
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-637447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
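The kubelet flags above are written out as a systemd drop-in (the 433-byte 10-kubeadm.conf scp'd a few lines below). A small Go sketch of rendering such a drop-in with text/template; the struct fields and output handling are illustrative, not minikube's actual types:

// kubelet_dropin.go - sketch of templating the kubelet systemd drop-in shown above.
package main

import (
	"os"
	"text/template"
)

const dropin = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip={{.NodeIP}}

[Install]
`

func main() {
	params := struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.20.0", "kubernetes-upgrade-637447", "192.168.50.182"}
	tmpl := template.Must(template.New("10-kubeadm.conf").Parse(dropin))
	// minikube scp's the rendered text to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	if err := tmpl.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}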
	I0927 01:26:24.030634   57978 ssh_runner.go:195] Run: crio config
	I0927 01:26:24.083624   57978 cni.go:84] Creating CNI manager for ""
	I0927 01:26:24.083656   57978 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:26:24.083667   57978 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 01:26:24.083692   57978 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.182 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-637447 NodeName:kubernetes-upgrade-637447 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.182"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.182 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0927 01:26:24.083855   57978 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.182
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-637447"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.182
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.182"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0927 01:26:24.083914   57978 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0927 01:26:24.094559   57978 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 01:26:24.094629   57978 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0927 01:26:24.105126   57978 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0927 01:26:24.122153   57978 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 01:26:24.139146   57978 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0927 01:26:24.159049   57978 ssh_runner.go:195] Run: grep 192.168.50.182	control-plane.minikube.internal$ /etc/hosts
	I0927 01:26:24.164243   57978 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.182	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 01:26:24.181038   57978 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:26:24.303853   57978 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 01:26:24.321085   57978 certs.go:68] Setting up /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kubernetes-upgrade-637447 for IP: 192.168.50.182
	I0927 01:26:24.321109   57978 certs.go:194] generating shared ca certs ...
	I0927 01:26:24.321128   57978 certs.go:226] acquiring lock for ca certs: {Name:mkdfc5b7e93f77f5ae72cc653545624244421aa5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:26:24.321329   57978 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key
	I0927 01:26:24.321384   57978 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key
	I0927 01:26:24.321399   57978 certs.go:256] generating profile certs ...
	I0927 01:26:24.321475   57978 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kubernetes-upgrade-637447/client.key
	I0927 01:26:24.321495   57978 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kubernetes-upgrade-637447/client.crt with IP's: []
	I0927 01:26:24.553128   57978 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kubernetes-upgrade-637447/client.crt ...
	I0927 01:26:24.553163   57978 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kubernetes-upgrade-637447/client.crt: {Name:mk50780bea5de0ba984561af6b7715a351210b9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:26:24.553361   57978 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kubernetes-upgrade-637447/client.key ...
	I0927 01:26:24.553384   57978 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kubernetes-upgrade-637447/client.key: {Name:mk11e6ef056a80297ed374d57ab3b9880d5d90db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:26:24.553544   57978 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kubernetes-upgrade-637447/apiserver.key.5edad7fc
	I0927 01:26:24.553574   57978 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kubernetes-upgrade-637447/apiserver.crt.5edad7fc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.182]
	I0927 01:26:24.838712   57978 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kubernetes-upgrade-637447/apiserver.crt.5edad7fc ...
	I0927 01:26:24.838743   57978 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kubernetes-upgrade-637447/apiserver.crt.5edad7fc: {Name:mkb5a996b807146674fde872cdf1f94b35191121 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:26:24.838922   57978 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kubernetes-upgrade-637447/apiserver.key.5edad7fc ...
	I0927 01:26:24.838939   57978 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kubernetes-upgrade-637447/apiserver.key.5edad7fc: {Name:mk56f6264cc3c5be536e81729c7969eeb9b36bd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:26:24.839042   57978 certs.go:381] copying /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kubernetes-upgrade-637447/apiserver.crt.5edad7fc -> /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kubernetes-upgrade-637447/apiserver.crt
	I0927 01:26:24.839123   57978 certs.go:385] copying /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kubernetes-upgrade-637447/apiserver.key.5edad7fc -> /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kubernetes-upgrade-637447/apiserver.key
	I0927 01:26:24.839175   57978 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kubernetes-upgrade-637447/proxy-client.key
	I0927 01:26:24.839190   57978 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kubernetes-upgrade-637447/proxy-client.crt with IP's: []
	I0927 01:26:24.970509   57978 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kubernetes-upgrade-637447/proxy-client.crt ...
	I0927 01:26:24.970540   57978 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kubernetes-upgrade-637447/proxy-client.crt: {Name:mk9fac3adc3ff730b77abeee7cf3b0453f7474a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:26:24.970729   57978 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kubernetes-upgrade-637447/proxy-client.key ...
	I0927 01:26:24.970747   57978 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kubernetes-upgrade-637447/proxy-client.key: {Name:mk7cbb0cc995b9db9a448d024d6d7b5ff606db07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:26:24.970978   57978 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem (1338 bytes)
	W0927 01:26:24.971036   57978 certs.go:480] ignoring /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138_empty.pem, impossibly tiny 0 bytes
	I0927 01:26:24.971045   57978 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem (1671 bytes)
	I0927 01:26:24.971074   57978 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem (1078 bytes)
	I0927 01:26:24.971119   57978 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem (1123 bytes)
	I0927 01:26:24.971152   57978 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem (1675 bytes)
	I0927 01:26:24.971207   57978 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem (1708 bytes)
	I0927 01:26:24.971967   57978 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 01:26:25.000217   57978 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0927 01:26:25.027730   57978 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 01:26:25.054749   57978 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0927 01:26:25.081398   57978 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kubernetes-upgrade-637447/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0927 01:26:25.107202   57978 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kubernetes-upgrade-637447/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0927 01:26:25.132782   57978 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kubernetes-upgrade-637447/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 01:26:25.159685   57978 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kubernetes-upgrade-637447/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0927 01:26:25.187624   57978 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 01:26:25.214164   57978 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem --> /usr/share/ca-certificates/22138.pem (1338 bytes)
	I0927 01:26:25.237824   57978 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /usr/share/ca-certificates/221382.pem (1708 bytes)
	I0927 01:26:25.263352   57978 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 01:26:25.280916   57978 ssh_runner.go:195] Run: openssl version
	I0927 01:26:25.286844   57978 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221382.pem && ln -fs /usr/share/ca-certificates/221382.pem /etc/ssl/certs/221382.pem"
	I0927 01:26:25.299238   57978 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221382.pem
	I0927 01:26:25.304366   57978 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 00:32 /usr/share/ca-certificates/221382.pem
	I0927 01:26:25.304435   57978 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221382.pem
	I0927 01:26:25.310854   57978 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221382.pem /etc/ssl/certs/3ec20f2e.0"
	I0927 01:26:25.322297   57978 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 01:26:25.333401   57978 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:26:25.338230   57978 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:16 /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:26:25.338292   57978 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:26:25.344377   57978 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 01:26:25.357926   57978 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22138.pem && ln -fs /usr/share/ca-certificates/22138.pem /etc/ssl/certs/22138.pem"
	I0927 01:26:25.369122   57978 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22138.pem
	I0927 01:26:25.374227   57978 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 00:32 /usr/share/ca-certificates/22138.pem
	I0927 01:26:25.374289   57978 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22138.pem
	I0927 01:26:25.380265   57978 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22138.pem /etc/ssl/certs/51391683.0"
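The openssl/ln sequence above is how each PEM is published into the system trust store: compute the certificate's subject hash, then point the <hash>.0 symlink in /etc/ssl/certs at it (b5213941.0 for minikubeCA.pem in this run). A short Go sketch of that step, shelling out to openssl the same way the log does:

// trust_cert.go - sketch of the hash-and-symlink trust-store install shown above.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCert(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pem, err)
	}
	hash := strings.TrimSpace(string(out))             // e.g. b5213941
	link := filepath.Join("/etc/ssl/certs", hash+".0") // target of ln -fs
	_ = os.Remove(link)                                // -f: replace any stale link first
	return os.Symlink(pem, link)
}

func main() {
	if err := installCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		log.Fatal(err)
	}
}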
	I0927 01:26:25.392374   57978 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 01:26:25.396732   57978 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0927 01:26:25.396785   57978 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-637447 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-637447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.182 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 01:26:25.396850   57978 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0927 01:26:25.396889   57978 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 01:26:25.443774   57978 cri.go:89] found id: ""
	I0927 01:26:25.443844   57978 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0927 01:26:25.454468   57978 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 01:26:25.464815   57978 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 01:26:25.475275   57978 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 01:26:25.475294   57978 kubeadm.go:157] found existing configuration files:
	
	I0927 01:26:25.475353   57978 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 01:26:25.485773   57978 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 01:26:25.485834   57978 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 01:26:25.495493   57978 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 01:26:25.506368   57978 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 01:26:25.506428   57978 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 01:26:25.516406   57978 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 01:26:25.526119   57978 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 01:26:25.526190   57978 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 01:26:25.536340   57978 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 01:26:25.546157   57978 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 01:26:25.546250   57978 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
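The grep/rm loop above sweeps away any kubeconfig that does not point at https://control-plane.minikube.internal:8443 before kubeadm init runs. A compact Go sketch of the same sweep, with the file list and endpoint taken from the log:

// stale_config.go - sketch of the stale-kubeconfig cleanup logged above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// missing or pointing elsewhere: remove it so kubeadm regenerates it
			if rmErr := os.Remove(f); rmErr != nil && !os.IsNotExist(rmErr) {
				fmt.Fprintln(os.Stderr, "remove:", rmErr)
			}
			continue
		}
		fmt.Println("keeping", f)
	}
}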
	I0927 01:26:25.557659   57978 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0927 01:26:25.674116   57978 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0927 01:26:25.674199   57978 kubeadm.go:310] [preflight] Running pre-flight checks
	I0927 01:26:25.825753   57978 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0927 01:26:25.825923   57978 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0927 01:26:25.826043   57978 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0927 01:26:26.028254   57978 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0927 01:26:26.117549   57978 out.go:235]   - Generating certificates and keys ...
	I0927 01:26:26.117697   57978 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0927 01:26:26.117782   57978 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0927 01:26:26.304477   57978 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0927 01:26:26.546692   57978 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0927 01:26:26.908062   57978 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0927 01:26:26.979178   57978 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0927 01:26:27.182827   57978 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0927 01:26:27.183035   57978 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-637447 localhost] and IPs [192.168.50.182 127.0.0.1 ::1]
	I0927 01:26:27.336988   57978 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0927 01:26:27.337279   57978 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-637447 localhost] and IPs [192.168.50.182 127.0.0.1 ::1]
	I0927 01:26:27.414030   57978 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0927 01:26:27.474049   57978 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0927 01:26:27.587542   57978 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0927 01:26:27.587952   57978 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0927 01:26:27.873187   57978 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0927 01:26:28.035604   57978 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0927 01:26:28.236737   57978 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0927 01:26:28.564904   57978 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0927 01:26:28.581536   57978 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0927 01:26:28.582483   57978 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0927 01:26:28.582574   57978 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0927 01:26:28.703497   57978 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0927 01:26:28.705283   57978 out.go:235]   - Booting up control plane ...
	I0927 01:26:28.705411   57978 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0927 01:26:28.712847   57978 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0927 01:26:28.713899   57978 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0927 01:26:28.714713   57978 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0927 01:26:28.719438   57978 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0927 01:27:08.711660   57978 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0927 01:27:08.711984   57978 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:27:08.712221   57978 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:27:13.712730   57978 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:27:13.713050   57978 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:27:23.712400   57978 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:27:23.712588   57978 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:27:43.712198   57978 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:27:43.712526   57978 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:28:23.713546   57978 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:28:23.713823   57978 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:28:23.713851   57978 kubeadm.go:310] 
	I0927 01:28:23.713920   57978 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0927 01:28:23.713993   57978 kubeadm.go:310] 		timed out waiting for the condition
	I0927 01:28:23.714004   57978 kubeadm.go:310] 
	I0927 01:28:23.714049   57978 kubeadm.go:310] 	This error is likely caused by:
	I0927 01:28:23.714120   57978 kubeadm.go:310] 		- The kubelet is not running
	I0927 01:28:23.714234   57978 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0927 01:28:23.714247   57978 kubeadm.go:310] 
	I0927 01:28:23.714396   57978 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0927 01:28:23.714457   57978 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0927 01:28:23.714513   57978 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0927 01:28:23.714533   57978 kubeadm.go:310] 
	I0927 01:28:23.714700   57978 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0927 01:28:23.714819   57978 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0927 01:28:23.714829   57978 kubeadm.go:310] 
	I0927 01:28:23.714986   57978 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0927 01:28:23.715113   57978 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0927 01:28:23.715205   57978 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0927 01:28:23.715377   57978 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0927 01:28:23.715411   57978 kubeadm.go:310] 
	I0927 01:28:23.716872   57978 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0927 01:28:23.716999   57978 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0927 01:28:23.717101   57978 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0927 01:28:23.717211   57978 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-637447 localhost] and IPs [192.168.50.182 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-637447 localhost] and IPs [192.168.50.182 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-637447 localhost] and IPs [192.168.50.182 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-637447 localhost] and IPs [192.168.50.182 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
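The repeated kubelet-check lines above all boil down to polling http://localhost:10248/healthz until it returns 200 or the wait-control-plane deadline expires. A standalone Go sketch of that probe; the 4-minute budget mirrors the kubeadm message and is not read from any config:

// kubelet_health.go - sketch of the kubelet healthz probe kubeadm keeps retrying above.
package main

import (
	"fmt"
	"net/http"
	"os"
	"time"
)

func waitForKubelet(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	client := &http.Client{Timeout: 2 * time.Second}
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(5 * time.Second) // re-check every few seconds until the deadline
	}
	return fmt.Errorf("kubelet at %s did not become healthy within %s", url, timeout)
}

func main() {
	if err := waitForKubelet("http://localhost:10248/healthz", 4*time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1) // this is the point where the run above gives up and retries init
	}
}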
	
	I0927 01:28:23.717255   57978 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0927 01:28:25.180428   57978 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.463139104s)
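The reset that just completed is part of a retry pattern: after the first kubeadm init fails, minikube resets the node against the CRI socket and attempts init again (the second attempt starts a few lines below). A hedged Go sketch of that wrapper, using the binary path and socket from the log and deliberately minimal error handling:

// init_retry.go - sketch of the init / reset / retry sequence visible in the log.
package main

import (
	"log"
	"os"
	"os/exec"
)

func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.20.0/kubeadm"
	initArgs := []string{"init", "--config", "/var/tmp/minikube/kubeadm.yaml"}
	if err := run(kubeadm, initArgs...); err != nil {
		log.Printf("init failed (%v); resetting and retrying once", err)
		if err := run(kubeadm, "reset", "--cri-socket", "/var/run/crio/crio.sock", "--force"); err != nil {
			log.Fatalf("reset failed: %v", err)
		}
		if err := run(kubeadm, initArgs...); err != nil {
			log.Fatalf("second init attempt failed: %v", err)
		}
	}
}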
	I0927 01:28:25.180516   57978 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 01:28:25.199098   57978 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 01:28:25.211199   57978 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 01:28:25.211225   57978 kubeadm.go:157] found existing configuration files:
	
	I0927 01:28:25.211276   57978 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 01:28:25.222590   57978 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 01:28:25.222647   57978 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 01:28:25.233474   57978 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 01:28:25.243634   57978 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 01:28:25.243700   57978 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 01:28:25.254685   57978 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 01:28:25.265620   57978 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 01:28:25.265702   57978 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 01:28:25.287050   57978 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 01:28:25.298949   57978 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 01:28:25.299050   57978 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 01:28:25.313895   57978 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0927 01:28:25.398857   57978 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0927 01:28:25.399069   57978 kubeadm.go:310] [preflight] Running pre-flight checks
	I0927 01:28:25.570945   57978 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0927 01:28:25.571108   57978 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0927 01:28:25.571262   57978 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0927 01:28:25.804137   57978 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0927 01:28:25.806886   57978 out.go:235]   - Generating certificates and keys ...
	I0927 01:28:25.806996   57978 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0927 01:28:25.807078   57978 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0927 01:28:25.807193   57978 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0927 01:28:25.807353   57978 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0927 01:28:25.807466   57978 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0927 01:28:25.807547   57978 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0927 01:28:25.807629   57978 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0927 01:28:25.807713   57978 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0927 01:28:25.807810   57978 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0927 01:28:25.808255   57978 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0927 01:28:25.808372   57978 kubeadm.go:310] [certs] Using the existing "sa" key
	I0927 01:28:25.808450   57978 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0927 01:28:25.969349   57978 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0927 01:28:26.281112   57978 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0927 01:28:26.605835   57978 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0927 01:28:26.715534   57978 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0927 01:28:26.732349   57978 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0927 01:28:26.733731   57978 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0927 01:28:26.733796   57978 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0927 01:28:26.884840   57978 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0927 01:28:27.459406   57978 out.go:235]   - Booting up control plane ...
	I0927 01:28:27.459564   57978 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0927 01:28:27.459669   57978 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0927 01:28:27.459768   57978 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0927 01:28:27.459886   57978 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0927 01:28:27.460086   57978 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0927 01:29:06.907508   57978 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0927 01:29:06.907704   57978 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:29:06.908016   57978 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:29:11.908212   57978 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:29:11.908433   57978 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:29:21.908995   57978 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:29:21.909258   57978 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:29:41.908553   57978 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:29:41.908809   57978 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:30:21.908508   57978 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:30:21.908795   57978 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:30:21.908816   57978 kubeadm.go:310] 
	I0927 01:30:21.908869   57978 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0927 01:30:21.908921   57978 kubeadm.go:310] 		timed out waiting for the condition
	I0927 01:30:21.908932   57978 kubeadm.go:310] 
	I0927 01:30:21.909016   57978 kubeadm.go:310] 	This error is likely caused by:
	I0927 01:30:21.909089   57978 kubeadm.go:310] 		- The kubelet is not running
	I0927 01:30:21.909232   57978 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0927 01:30:21.909246   57978 kubeadm.go:310] 
	I0927 01:30:21.909409   57978 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0927 01:30:21.909483   57978 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0927 01:30:21.909560   57978 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0927 01:30:21.909577   57978 kubeadm.go:310] 
	I0927 01:30:21.909720   57978 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0927 01:30:21.909824   57978 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0927 01:30:21.909833   57978 kubeadm.go:310] 
	I0927 01:30:21.909947   57978 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0927 01:30:21.910059   57978 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0927 01:30:21.910153   57978 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0927 01:30:21.910255   57978 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0927 01:30:21.910265   57978 kubeadm.go:310] 
	I0927 01:30:21.910734   57978 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0927 01:30:21.910837   57978 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0927 01:30:21.910916   57978 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0927 01:30:21.910987   57978 kubeadm.go:394] duration metric: took 3m56.514205979s to StartCluster
	I0927 01:30:21.911034   57978 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:30:21.911097   57978 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:30:21.954875   57978 cri.go:89] found id: ""
	I0927 01:30:21.954901   57978 logs.go:276] 0 containers: []
	W0927 01:30:21.954909   57978 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:30:21.954915   57978 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:30:21.954966   57978 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:30:21.994305   57978 cri.go:89] found id: ""
	I0927 01:30:21.994328   57978 logs.go:276] 0 containers: []
	W0927 01:30:21.994337   57978 logs.go:278] No container was found matching "etcd"
	I0927 01:30:21.994342   57978 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:30:21.994417   57978 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:30:22.028622   57978 cri.go:89] found id: ""
	I0927 01:30:22.028646   57978 logs.go:276] 0 containers: []
	W0927 01:30:22.028654   57978 logs.go:278] No container was found matching "coredns"
	I0927 01:30:22.028660   57978 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:30:22.028707   57978 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:30:22.063639   57978 cri.go:89] found id: ""
	I0927 01:30:22.063669   57978 logs.go:276] 0 containers: []
	W0927 01:30:22.063681   57978 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:30:22.063689   57978 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:30:22.063751   57978 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:30:22.100145   57978 cri.go:89] found id: ""
	I0927 01:30:22.100182   57978 logs.go:276] 0 containers: []
	W0927 01:30:22.100194   57978 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:30:22.100201   57978 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:30:22.100260   57978 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:30:22.135072   57978 cri.go:89] found id: ""
	I0927 01:30:22.135096   57978 logs.go:276] 0 containers: []
	W0927 01:30:22.135104   57978 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:30:22.135110   57978 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:30:22.135155   57978 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:30:22.168938   57978 cri.go:89] found id: ""
	I0927 01:30:22.168970   57978 logs.go:276] 0 containers: []
	W0927 01:30:22.168981   57978 logs.go:278] No container was found matching "kindnet"
	I0927 01:30:22.168992   57978 logs.go:123] Gathering logs for kubelet ...
	I0927 01:30:22.169004   57978 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:30:22.219932   57978 logs.go:123] Gathering logs for dmesg ...
	I0927 01:30:22.219965   57978 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:30:22.233682   57978 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:30:22.233708   57978 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:30:22.345341   57978 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:30:22.345365   57978 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:30:22.345385   57978 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:30:22.447937   57978 logs.go:123] Gathering logs for container status ...
	I0927 01:30:22.447975   57978 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0927 01:30:22.489606   57978 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0927 01:30:22.489679   57978 out.go:270] * 
	* 
	W0927 01:30:22.489742   57978 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0927 01:30:22.489758   57978 out.go:270] * 
	* 
	W0927 01:30:22.490558   57978 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0927 01:30:22.493562   57978 out.go:201] 
	W0927 01:30:22.494691   57978 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0927 01:30:22.494726   57978 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0927 01:30:22.494744   57978 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0927 01:30:22.496197   57978 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-637447 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
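Note on the failure above: every repeated "[kubelet-check]" line is kubeadm polling the kubelet's local health endpoint (http://localhost:10248/healthz) and getting "connection refused" until the 4m0s wait-control-plane deadline expires. The following is a minimal illustrative sketch in Go of that probe loop, written for this report only; it is not minikube or kubeadm source code, and the 5-second retry interval is an assumption.

	// healthz_probe.go - illustrative sketch (not minikube/kubeadm code) of the
	// check behind the repeated "[kubelet-check]" lines above: GET the kubelet's
	// healthz endpoint until it answers 200 OK or the wait deadline expires.
	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 2 * time.Second}
		deadline := time.Now().Add(4 * time.Minute) // log says "can take up to 4m0s"
		for time.Now().Before(deadline) {
			resp, err := client.Get("http://localhost:10248/healthz")
			if err != nil {
				// Matches the failures above: connection refused while the kubelet is down.
				fmt.Println("kubelet not healthy yet:", err)
				time.Sleep(5 * time.Second) // assumed retry interval for illustration
				continue
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("kubelet is healthy")
				return
			}
			time.Sleep(5 * time.Second)
		}
		fmt.Println("timed out waiting for the condition")
	}

When the kubelet never comes up (here most likely the cgroup-driver mismatch suggested later in the log), the loop ends exactly with the "timed out waiting for the condition" message seen in the kubeadm output.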
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-637447
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-637447: (2.297142648s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-637447 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-637447 status --format={{.Host}}: exit status 7 (73.132321ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-637447 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-637447 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (37.880515496s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-637447 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-637447 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-637447 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (89.258038ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-637447] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19711
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19711-14935/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-14935/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-637447
	    minikube start -p kubernetes-upgrade-637447 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6374472 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-637447 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
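The exit status 106 above is the expected behavior for this step: minikube refuses to move an existing v1.31.1 cluster back to v1.20.0 (K8S_DOWNGRADE_UNSUPPORTED). As a rough illustration only, and not minikube's actual implementation, a version guard of this shape rejects the requested change:

	// downgrade_check.go - illustrative sketch (not minikube code) of the guard
	// exercised above: upgrades are allowed, downgrades are refused.
	package main

	import "fmt"

	// parse extracts major and minor from a "vX.Y.Z" version string.
	func parse(v string) (major, minor int) {
		fmt.Sscanf(v, "v%d.%d", &major, &minor)
		return
	}

	func main() {
		existing, requested := "v1.31.1", "v1.20.0" // versions from the log above
		emaj, emin := parse(existing)
		rmaj, rmin := parse(requested)
		if rmaj < emaj || (rmaj == emaj && rmin < emin) {
			fmt.Printf("refusing to downgrade %s cluster to %s\n", existing, requested)
			return
		}
		fmt.Println("version change allowed")
	}

The test then proceeds as designed: it restarts the cluster at v1.31.1, which succeeds below.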
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-637447 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-637447 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (32.619162499s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-09-27 01:31:35.585984977 +0000 UTC m=+4611.771593232
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-637447 -n kubernetes-upgrade-637447
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-637447 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-637447 logs -n 25: (1.621567761s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-782846 sudo cat                            | cilium-782846             | jenkins | v1.34.0 | 27 Sep 24 01:31 UTC |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p cilium-782846 sudo cat                            | cilium-782846             | jenkins | v1.34.0 | 27 Sep 24 01:31 UTC |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p cilium-782846 sudo                                | cilium-782846             | jenkins | v1.34.0 | 27 Sep 24 01:31 UTC |                     |
	|         | systemctl status docker --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-782846 sudo cat                            | cilium-782846             | jenkins | v1.34.0 | 27 Sep 24 01:31 UTC |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-782846 sudo docker                         | cilium-782846             | jenkins | v1.34.0 | 27 Sep 24 01:31 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-637447                         | kubernetes-upgrade-637447 | jenkins | v1.34.0 | 27 Sep 24 01:31 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | -p cilium-782846 sudo                                | cilium-782846             | jenkins | v1.34.0 | 27 Sep 24 01:31 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-782846 sudo                                | cilium-782846             | jenkins | v1.34.0 | 27 Sep 24 01:31 UTC |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-637447                         | kubernetes-upgrade-637447 | jenkins | v1.34.0 | 27 Sep 24 01:31 UTC | 27 Sep 24 01:31 UTC |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | -p cilium-782846 sudo cat                            | cilium-782846             | jenkins | v1.34.0 | 27 Sep 24 01:31 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-782846 sudo cat                            | cilium-782846             | jenkins | v1.34.0 | 27 Sep 24 01:31 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-782846 sudo                                | cilium-782846             | jenkins | v1.34.0 | 27 Sep 24 01:31 UTC |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-782846 sudo                                | cilium-782846             | jenkins | v1.34.0 | 27 Sep 24 01:31 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-782846 sudo                                | cilium-782846             | jenkins | v1.34.0 | 27 Sep 24 01:31 UTC |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-782846 sudo cat                            | cilium-782846             | jenkins | v1.34.0 | 27 Sep 24 01:31 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-782846 sudo cat                            | cilium-782846             | jenkins | v1.34.0 | 27 Sep 24 01:31 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-782846 sudo                                | cilium-782846             | jenkins | v1.34.0 | 27 Sep 24 01:31 UTC |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-782846 sudo                                | cilium-782846             | jenkins | v1.34.0 | 27 Sep 24 01:31 UTC |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-782846 sudo                                | cilium-782846             | jenkins | v1.34.0 | 27 Sep 24 01:31 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-782846 sudo find                           | cilium-782846             | jenkins | v1.34.0 | 27 Sep 24 01:31 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-782846 sudo crio                           | cilium-782846             | jenkins | v1.34.0 | 27 Sep 24 01:31 UTC |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-782846                                     | cilium-782846             | jenkins | v1.34.0 | 27 Sep 24 01:31 UTC | 27 Sep 24 01:31 UTC |
	| start   | -p old-k8s-version-612261                            | old-k8s-version-612261    | jenkins | v1.34.0 | 27 Sep 24 01:31 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --kvm-network=default                                |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                        |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                              |                           |         |         |                     |                     |
	|         | --keep-context=false                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	| start   | -p cert-expiration-595331                            | cert-expiration-595331    | jenkins | v1.34.0 | 27 Sep 24 01:31 UTC |                     |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h                              |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-719096                               | NoKubernetes-719096       | jenkins | v1.34.0 | 27 Sep 24 01:31 UTC |                     |
	|         | --no-kubernetes --driver=kvm2                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
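	
	Each multi-row entry in the audit table above is a single CLI invocation whose flags were wrapped across table rows. Reassembled purely from the rows above (argument order as listed; a reconstruction for readability, not a verified transcript of the exact invocation), the old-k8s-version-612261 start entry corresponds to:
	
	  minikube start -p old-k8s-version-612261 --memory=2200 \
	    --alsologtostderr --wait=true \
	    --kvm-network=default --kvm-qemu-uri=qemu:///system \
	    --disable-driver-mounts --keep-context=false \
	    --driver=kvm2 --container-runtime=crio \
	    --kubernetes-version=v1.20.0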
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 01:31:30
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 01:31:30.086207   65249 out.go:345] Setting OutFile to fd 1 ...
	I0927 01:31:30.086321   65249 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 01:31:30.086326   65249 out.go:358] Setting ErrFile to fd 2...
	I0927 01:31:30.086332   65249 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 01:31:30.086517   65249 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-14935/.minikube/bin
	I0927 01:31:30.087061   65249 out.go:352] Setting JSON to false
	I0927 01:31:30.088002   65249 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8035,"bootTime":1727392655,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 01:31:30.088092   65249 start.go:139] virtualization: kvm guest
	I0927 01:31:30.090072   65249 out.go:177] * [NoKubernetes-719096] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0927 01:31:30.091281   65249 out.go:177]   - MINIKUBE_LOCATION=19711
	I0927 01:31:30.091328   65249 notify.go:220] Checking for updates...
	I0927 01:31:30.093448   65249 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 01:31:30.094483   65249 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 01:31:30.095459   65249 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-14935/.minikube
	I0927 01:31:30.096528   65249 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0927 01:31:30.097536   65249 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 01:31:30.099197   65249 config.go:182] Loaded profile config "NoKubernetes-719096": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 01:31:30.099810   65249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:31:30.099880   65249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:31:30.116468   65249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42491
	I0927 01:31:30.116934   65249 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:31:30.117595   65249 main.go:141] libmachine: Using API Version  1
	I0927 01:31:30.117613   65249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:31:30.117997   65249 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:31:30.118171   65249 main.go:141] libmachine: (NoKubernetes-719096) Calling .DriverName
	I0927 01:31:30.118324   65249 start.go:1875] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0927 01:31:30.118412   65249 start.go:1780] No Kubernetes version set for minikube, setting Kubernetes version to v0.0.0
	I0927 01:31:30.118424   65249 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 01:31:30.118851   65249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:31:30.118890   65249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:31:30.134873   65249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34481
	I0927 01:31:30.135339   65249 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:31:30.135869   65249 main.go:141] libmachine: Using API Version  1
	I0927 01:31:30.135887   65249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:31:30.136231   65249 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:31:30.136434   65249 main.go:141] libmachine: (NoKubernetes-719096) Calling .DriverName
	I0927 01:31:30.173647   65249 out.go:177] * Using the kvm2 driver based on existing profile
	I0927 01:31:30.174656   65249 start.go:297] selected driver: kvm2
	I0927 01:31:30.174663   65249 start.go:901] validating driver "kvm2" against &{Name:NoKubernetes-719096 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-719096 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.142 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 01:31:30.174784   65249 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 01:31:30.175038   65249 start.go:1875] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0927 01:31:30.175099   65249 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 01:31:30.175173   65249 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19711-14935/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0927 01:31:30.190870   65249 install.go:137] /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I0927 01:31:30.191575   65249 cni.go:84] Creating CNI manager for ""
	I0927 01:31:30.191628   65249 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:31:30.191642   65249 start.go:1875] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0927 01:31:30.191696   65249 start.go:340] cluster config:
	{Name:NoKubernetes-719096 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-719096 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.142 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 01:31:30.191825   65249 iso.go:125] acquiring lock: {Name:mkc202a14fbe20838e31e7efc444c4f65351f9ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 01:31:30.193460   65249 out.go:177] * Starting minikube without Kubernetes in cluster NoKubernetes-719096
	I0927 01:31:28.176724   64629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:31:28.676022   64629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:31:28.753026   64629 api_server.go:72] duration metric: took 1.077386957s to wait for apiserver process to appear ...
	I0927 01:31:28.753055   64629 api_server.go:88] waiting for apiserver healthz status ...
	I0927 01:31:28.753079   64629 api_server.go:253] Checking apiserver healthz at https://192.168.50.182:8443/healthz ...
	I0927 01:31:28.753834   64629 api_server.go:269] stopped: https://192.168.50.182:8443/healthz: Get "https://192.168.50.182:8443/healthz": dial tcp 192.168.50.182:8443: connect: connection refused
	I0927 01:31:29.253616   64629 api_server.go:253] Checking apiserver healthz at https://192.168.50.182:8443/healthz ...
	I0927 01:31:31.673487   64629 api_server.go:279] https://192.168.50.182:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0927 01:31:31.673522   64629 api_server.go:103] status: https://192.168.50.182:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0927 01:31:31.673540   64629 api_server.go:253] Checking apiserver healthz at https://192.168.50.182:8443/healthz ...
	I0927 01:31:31.680440   64629 api_server.go:279] https://192.168.50.182:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0927 01:31:31.680471   64629 api_server.go:103] status: https://192.168.50.182:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0927 01:31:31.753613   64629 api_server.go:253] Checking apiserver healthz at https://192.168.50.182:8443/healthz ...
	I0927 01:31:31.761007   64629 api_server.go:279] https://192.168.50.182:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:31:31.761037   64629 api_server.go:103] status: https://192.168.50.182:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:31:32.253616   64629 api_server.go:253] Checking apiserver healthz at https://192.168.50.182:8443/healthz ...
	I0927 01:31:32.259993   64629 api_server.go:279] https://192.168.50.182:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:31:32.260016   64629 api_server.go:103] status: https://192.168.50.182:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:31:32.753581   64629 api_server.go:253] Checking apiserver healthz at https://192.168.50.182:8443/healthz ...
	I0927 01:31:32.758767   64629 api_server.go:279] https://192.168.50.182:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:31:32.758792   64629 api_server.go:103] status: https://192.168.50.182:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:31:30.353037   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:31:30.353665   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:31:30.353695   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:31:30.353602   65054 retry.go:31] will retry after 2.427598006s: waiting for machine to come up
	I0927 01:31:32.783016   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:31:32.783658   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:31:32.783688   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:31:32.783600   65054 retry.go:31] will retry after 4.483955853s: waiting for machine to come up
	I0927 01:31:33.253873   64629 api_server.go:253] Checking apiserver healthz at https://192.168.50.182:8443/healthz ...
	I0927 01:31:33.286526   64629 api_server.go:279] https://192.168.50.182:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:31:33.286572   64629 api_server.go:103] status: https://192.168.50.182:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:31:33.753217   64629 api_server.go:253] Checking apiserver healthz at https://192.168.50.182:8443/healthz ...
	I0927 01:31:33.776548   64629 api_server.go:279] https://192.168.50.182:8443/healthz returned 200:
	ok
	I0927 01:31:33.807719   64629 api_server.go:141] control plane version: v1.31.1
	I0927 01:31:33.807757   64629 api_server.go:131] duration metric: took 5.054694313s to wait for apiserver health ...
	I0927 01:31:33.807768   64629 cni.go:84] Creating CNI manager for ""
	I0927 01:31:33.807775   64629 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:31:33.809431   64629 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0927 01:31:33.810631   64629 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0927 01:31:33.833144   64629 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0927 01:31:33.923681   64629 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 01:31:33.923767   64629 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0927 01:31:33.923791   64629 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0927 01:31:33.949867   64629 system_pods.go:59] 8 kube-system pods found
	I0927 01:31:33.949899   64629 system_pods.go:61] "coredns-7c65d6cfc9-27b9x" [d01ebd1b-15ce-4260-9515-32939f083360] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0927 01:31:33.949906   64629 system_pods.go:61] "coredns-7c65d6cfc9-gkljs" [f4bb3c0f-501d-4f0f-bed0-72919f1c1546] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0927 01:31:33.949913   64629 system_pods.go:61] "etcd-kubernetes-upgrade-637447" [d42ee8f5-62d5-4f68-801e-91373b53b42b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0927 01:31:33.949922   64629 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-637447" [42c8df35-3610-46b5-b6fc-5bf01f9e076d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0927 01:31:33.949933   64629 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-637447" [5456e034-8b99-43c8-80d7-25ce9d5e0eff] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0927 01:31:33.949942   64629 system_pods.go:61] "kube-proxy-b9fq5" [97b59c48-c99f-4da8-b38f-6957f2ec7333] Running
	I0927 01:31:33.949949   64629 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-637447" [0dce86c0-a9e5-4e28-9dd2-0757645eacbd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0927 01:31:33.949967   64629 system_pods.go:61] "storage-provisioner" [cd7cde7f-115b-4361-88e9-008582982fa5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0927 01:31:33.949977   64629 system_pods.go:74] duration metric: took 26.273897ms to wait for pod list to return data ...
	I0927 01:31:33.949984   64629 node_conditions.go:102] verifying NodePressure condition ...
	I0927 01:31:33.962637   64629 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 01:31:33.962674   64629 node_conditions.go:123] node cpu capacity is 2
	I0927 01:31:33.962687   64629 node_conditions.go:105] duration metric: took 12.698302ms to run NodePressure ...
	I0927 01:31:33.962711   64629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:31:34.360257   64629 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0927 01:31:34.372711   64629 ops.go:34] apiserver oom_adj: -16
	I0927 01:31:34.372736   64629 kubeadm.go:597] duration metric: took 8.679646906s to restartPrimaryControlPlane
	I0927 01:31:34.372756   64629 kubeadm.go:394] duration metric: took 8.792742252s to StartCluster
	I0927 01:31:34.372776   64629 settings.go:142] acquiring lock: {Name:mk5dca3ab86dd3a71947d9d84c3d32131258c6f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:31:34.372861   64629 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 01:31:34.373841   64629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/kubeconfig: {Name:mke01ed683bdb96463571316956510763878395f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:31:34.374072   64629 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.182 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 01:31:34.374133   64629 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0927 01:31:34.374224   64629 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-637447"
	I0927 01:31:34.374246   64629 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-637447"
	W0927 01:31:34.374257   64629 addons.go:243] addon storage-provisioner should already be in state true
	I0927 01:31:34.374269   64629 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-637447"
	I0927 01:31:34.374286   64629 host.go:66] Checking if "kubernetes-upgrade-637447" exists ...
	I0927 01:31:34.374302   64629 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-637447"
	I0927 01:31:34.374309   64629 config.go:182] Loaded profile config "kubernetes-upgrade-637447": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 01:31:34.374631   64629 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:31:34.374681   64629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:31:34.374704   64629 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:31:34.374742   64629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:31:34.375625   64629 out.go:177] * Verifying Kubernetes components...
	I0927 01:31:34.376704   64629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:31:34.391354   64629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39379
	I0927 01:31:34.391842   64629 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:31:34.392392   64629 main.go:141] libmachine: Using API Version  1
	I0927 01:31:34.392418   64629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:31:34.392471   64629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43021
	I0927 01:31:34.392759   64629 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:31:34.392821   64629 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:31:34.393224   64629 main.go:141] libmachine: Using API Version  1
	I0927 01:31:34.393244   64629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:31:34.393331   64629 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:31:34.393375   64629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:31:34.393785   64629 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:31:34.394001   64629 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetState
	I0927 01:31:34.396821   64629 kapi.go:59] client config for kubernetes-upgrade-637447: &rest.Config{Host:"https://192.168.50.182:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kubernetes-upgrade-637447/client.crt", KeyFile:"/home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kubernetes-upgrade-637447/client.key", CAFile:"/home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f68560), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0927 01:31:34.397139   64629 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-637447"
	W0927 01:31:34.397164   64629 addons.go:243] addon default-storageclass should already be in state true
	I0927 01:31:34.397193   64629 host.go:66] Checking if "kubernetes-upgrade-637447" exists ...
	I0927 01:31:34.397558   64629 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:31:34.397597   64629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:31:34.409762   64629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36819
	I0927 01:31:34.410236   64629 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:31:34.410767   64629 main.go:141] libmachine: Using API Version  1
	I0927 01:31:34.410792   64629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:31:34.411186   64629 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:31:34.411391   64629 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetState
	I0927 01:31:34.412215   64629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42201
	I0927 01:31:34.412592   64629 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:31:34.413019   64629 main.go:141] libmachine: Using API Version  1
	I0927 01:31:34.413034   64629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:31:34.413140   64629 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .DriverName
	I0927 01:31:34.413499   64629 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:31:34.414101   64629 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:31:34.414145   64629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:31:34.415073   64629 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:31:30.194503   65249 preload.go:131] Checking if preload exists for k8s version v0.0.0 and runtime crio
	W0927 01:31:30.754912   65249 preload.go:114] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0927 01:31:30.755069   65249 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/NoKubernetes-719096/config.json ...
	I0927 01:31:30.755388   65249 start.go:360] acquireMachinesLock for NoKubernetes-719096: {Name:mkef866a3f2eb57063758ef06140cca3a2e40b18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 01:31:34.416498   64629 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 01:31:34.416520   64629 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0927 01:31:34.416540   64629 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetSSHHostname
	I0927 01:31:34.419363   64629 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | domain kubernetes-upgrade-637447 has defined MAC address 52:54:00:65:88:c9 in network mk-kubernetes-upgrade-637447
	I0927 01:31:34.419884   64629 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:88:c9", ip: ""} in network mk-kubernetes-upgrade-637447: {Iface:virbr2 ExpiryTime:2024-09-27 02:30:36 +0000 UTC Type:0 Mac:52:54:00:65:88:c9 Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-637447 Clientid:01:52:54:00:65:88:c9}
	I0927 01:31:34.419907   64629 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | domain kubernetes-upgrade-637447 has defined IP address 192.168.50.182 and MAC address 52:54:00:65:88:c9 in network mk-kubernetes-upgrade-637447
	I0927 01:31:34.420232   64629 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetSSHPort
	I0927 01:31:34.420415   64629 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetSSHKeyPath
	I0927 01:31:34.420561   64629 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetSSHUsername
	I0927 01:31:34.420714   64629 sshutil.go:53] new ssh client: &{IP:192.168.50.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/kubernetes-upgrade-637447/id_rsa Username:docker}
	I0927 01:31:34.429656   64629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33675
	I0927 01:31:34.430074   64629 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:31:34.430580   64629 main.go:141] libmachine: Using API Version  1
	I0927 01:31:34.430597   64629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:31:34.431377   64629 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:31:34.431581   64629 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetState
	I0927 01:31:34.433005   64629 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .DriverName
	I0927 01:31:34.433186   64629 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0927 01:31:34.433202   64629 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0927 01:31:34.433219   64629 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetSSHHostname
	I0927 01:31:34.435977   64629 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | domain kubernetes-upgrade-637447 has defined MAC address 52:54:00:65:88:c9 in network mk-kubernetes-upgrade-637447
	I0927 01:31:34.436348   64629 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:88:c9", ip: ""} in network mk-kubernetes-upgrade-637447: {Iface:virbr2 ExpiryTime:2024-09-27 02:30:36 +0000 UTC Type:0 Mac:52:54:00:65:88:c9 Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-637447 Clientid:01:52:54:00:65:88:c9}
	I0927 01:31:34.436374   64629 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | domain kubernetes-upgrade-637447 has defined IP address 192.168.50.182 and MAC address 52:54:00:65:88:c9 in network mk-kubernetes-upgrade-637447
	I0927 01:31:34.436565   64629 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetSSHPort
	I0927 01:31:34.436712   64629 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetSSHKeyPath
	I0927 01:31:34.436884   64629 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .GetSSHUsername
	I0927 01:31:34.437089   64629 sshutil.go:53] new ssh client: &{IP:192.168.50.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/kubernetes-upgrade-637447/id_rsa Username:docker}
	I0927 01:31:34.571775   64629 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 01:31:34.595622   64629 api_server.go:52] waiting for apiserver process to appear ...
	I0927 01:31:34.595718   64629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:31:34.610822   64629 api_server.go:72] duration metric: took 236.714801ms to wait for apiserver process to appear ...
	I0927 01:31:34.610849   64629 api_server.go:88] waiting for apiserver healthz status ...
	I0927 01:31:34.610871   64629 api_server.go:253] Checking apiserver healthz at https://192.168.50.182:8443/healthz ...
	I0927 01:31:34.615997   64629 api_server.go:279] https://192.168.50.182:8443/healthz returned 200:
	ok
	I0927 01:31:34.617091   64629 api_server.go:141] control plane version: v1.31.1
	I0927 01:31:34.617114   64629 api_server.go:131] duration metric: took 6.259093ms to wait for apiserver health ...
	I0927 01:31:34.617122   64629 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 01:31:34.624460   64629 system_pods.go:59] 8 kube-system pods found
	I0927 01:31:34.624488   64629 system_pods.go:61] "coredns-7c65d6cfc9-27b9x" [d01ebd1b-15ce-4260-9515-32939f083360] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0927 01:31:34.624497   64629 system_pods.go:61] "coredns-7c65d6cfc9-gkljs" [f4bb3c0f-501d-4f0f-bed0-72919f1c1546] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0927 01:31:34.624504   64629 system_pods.go:61] "etcd-kubernetes-upgrade-637447" [d42ee8f5-62d5-4f68-801e-91373b53b42b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0927 01:31:34.624512   64629 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-637447" [42c8df35-3610-46b5-b6fc-5bf01f9e076d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0927 01:31:34.624518   64629 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-637447" [5456e034-8b99-43c8-80d7-25ce9d5e0eff] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0927 01:31:34.624522   64629 system_pods.go:61] "kube-proxy-b9fq5" [97b59c48-c99f-4da8-b38f-6957f2ec7333] Running
	I0927 01:31:34.624528   64629 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-637447" [0dce86c0-a9e5-4e28-9dd2-0757645eacbd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0927 01:31:34.624532   64629 system_pods.go:61] "storage-provisioner" [cd7cde7f-115b-4361-88e9-008582982fa5] Running
	I0927 01:31:34.624539   64629 system_pods.go:74] duration metric: took 7.411437ms to wait for pod list to return data ...
	I0927 01:31:34.624547   64629 kubeadm.go:582] duration metric: took 250.445427ms to wait for: map[apiserver:true system_pods:true]
	I0927 01:31:34.624558   64629 node_conditions.go:102] verifying NodePressure condition ...
	I0927 01:31:34.628218   64629 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 01:31:34.628239   64629 node_conditions.go:123] node cpu capacity is 2
	I0927 01:31:34.628248   64629 node_conditions.go:105] duration metric: took 3.686559ms to run NodePressure ...
	I0927 01:31:34.628258   64629 start.go:241] waiting for startup goroutines ...
	I0927 01:31:34.711079   64629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 01:31:34.712919   64629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0927 01:31:35.499253   64629 main.go:141] libmachine: Making call to close driver server
	I0927 01:31:35.499280   64629 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .Close
	I0927 01:31:35.499299   64629 main.go:141] libmachine: Making call to close driver server
	I0927 01:31:35.499336   64629 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .Close
	I0927 01:31:35.499596   64629 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:31:35.499611   64629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:31:35.499620   64629 main.go:141] libmachine: Making call to close driver server
	I0927 01:31:35.499628   64629 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .Close
	I0927 01:31:35.499680   64629 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | Closing plugin on server side
	I0927 01:31:35.499727   64629 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:31:35.499736   64629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:31:35.499748   64629 main.go:141] libmachine: Making call to close driver server
	I0927 01:31:35.499756   64629 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .Close
	I0927 01:31:35.499840   64629 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:31:35.499888   64629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:31:35.499905   64629 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | Closing plugin on server side
	I0927 01:31:35.499994   64629 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | Closing plugin on server side
	I0927 01:31:35.500021   64629 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:31:35.500028   64629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:31:35.517165   64629 main.go:141] libmachine: Making call to close driver server
	I0927 01:31:35.517184   64629 main.go:141] libmachine: (kubernetes-upgrade-637447) Calling .Close
	I0927 01:31:35.517434   64629 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:31:35.517449   64629 main.go:141] libmachine: (kubernetes-upgrade-637447) DBG | Closing plugin on server side
	I0927 01:31:35.517452   64629 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:31:35.519047   64629 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0927 01:31:35.520134   64629 addons.go:510] duration metric: took 1.146007028s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0927 01:31:35.520174   64629 start.go:246] waiting for cluster config update ...
	I0927 01:31:35.520190   64629 start.go:255] writing updated cluster config ...
	I0927 01:31:35.520406   64629 ssh_runner.go:195] Run: rm -f paused
	I0927 01:31:35.569530   64629 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0927 01:31:35.571471   64629 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-637447" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 27 01:31:36 kubernetes-upgrade-637447 crio[2374]: time="2024-09-27 01:31:36.285978700Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727400696285813036,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1ee1512d-b2ea-4c04-a108-0a7df8f7e767 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 01:31:36 kubernetes-upgrade-637447 crio[2374]: time="2024-09-27 01:31:36.286511233Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ef1aead8-4528-4a18-9917-28c91a7ffeb7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:31:36 kubernetes-upgrade-637447 crio[2374]: time="2024-09-27 01:31:36.286562412Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ef1aead8-4528-4a18-9917-28c91a7ffeb7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:31:36 kubernetes-upgrade-637447 crio[2374]: time="2024-09-27 01:31:36.286868812Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1e60aa9afff626eefb1c907e564197f234b17c1215df63c2004a0c00f0cea11c,PodSandboxId:53fbd9c0eb57848cfcfa3a6697339a445a96b480ea1fb7cc0861a625a5847d7f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727400693663005411,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-27b9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d01ebd1b-15ce-4260-9515-32939f083360,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ea31fe6ab4cb6801c7c22affb7351c5eb0047085bfda5c3e237a0d2565bcd1f,PodSandboxId:a2dbad1a9bd695d3dcf9f38f95fef81a3f7a13322c9cf24d4c493320adc0adc2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727400693682879428,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gkljs,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: f4bb3c0f-501d-4f0f-bed0-72919f1c1546,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8efe2964e70ea59f1dc41849f1f8354914d915f0ba5ba675b583298577ea5313,PodSandboxId:f131ba27a3d07f838a16616a5d1d84410a369a10b5b410a06cf7cc0b37473b91,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1727400693236433125,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd7cde7f-115b-4361-88e9-008582982fa5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c7d9e409275916f57d5a9b0afbc1fafdaadc056678dea3bc1dc037fa655e605,PodSandboxId:e53088da76c6bf1364b7f67d5dbf8c5406a055010ab646aeabd102b2e6a8bc92,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,C
reatedAt:1727400693110072558,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b9fq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97b59c48-c99f-4da8-b38f-6957f2ec7333,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0942facae961f5a6b56fef293f7637490051bddd297d56a4eccc2bab20c20b0b,PodSandboxId:b59ffdace288f78041129064fd03e4e221860b8fe8649199c712e796cbd5b4d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727400688423473233,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-637447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91b2807465a41e570416f163280bf216,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:509711d119d577ddeb6e8c40493077306431f2518c103cd5e82af0ab5a036e0b,PodSandboxId:89d6d7c00e3b0d16ffb23e1bbf40a0c9da4408e65fa5f91fe58575ad8e6af8f0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727400688403636106,Labels:map[string]string{io.kuberne
tes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-637447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cedb906f69aec2bb3c128b85b689a26f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:353edc48c512a71114f2b682b483c1eb12ce1e7ca211dbff19fc82c112b9db79,PodSandboxId:d902e40156f19c58472e67d9cdf6d2423669447a42c3003952b0c1687319bf8b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727400688309544885,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-637447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 202eb29c8750738be7bed8cd5a6aa979,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5f3c0335c54a2315cf1705ccff33b07065db37194347f6f9c169ffe32d94563,PodSandboxId:5ada53c2b696ee356c5d0f21d34d395a71e5cdfed784cbd45a2d7dbbbc81f20f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727400688248444934,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-637447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ceb0ed31e7deee00ef96168b98e2448,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba5f80b77ae4a4e17e242a9f51196abca719329f5e963240fe444d100d8f9b21,PodSandboxId:62c281488ca9e337481a9f28f9bac4d7e7a28d6fbbb2f4cc492cd27dd5f2ae13,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727400668249381650,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gkljs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4bb3c0f-501d-4f0f-bed0-72919f1c1546,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:156c3e69ca46985931a87742a85c44e956b04910fe2cda37172aa8627f235ca7,PodSandboxId:61f1f788b8a79627504db1b1ac80032edffedd3beb1838f071131437559e8456,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727400668227959364,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-27b9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d01ebd1b-15ce-4260-9515-32939f083360,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d3232eb333319bf8ae22a928f8ec0d18e212fed14751da8109d79907a76be90,PodSandboxId:11c30a74b559dcd2ff8d40bdfc4f864d43da4549e30bf8
4b948cd0ae5a9040dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727400666397102607,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b9fq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97b59c48-c99f-4da8-b38f-6957f2ec7333,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbab208ca5d34c116a3df62f7a6a92fdb1043d3bbfe4c5ddeee66d1fca944596,PodSandboxId:1aa2cbf9b1f47b76a2d2224ba55a021073394ccfe82da58370092e20fe294055,Metadata:&Conta
inerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727400665768431581,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd7cde7f-115b-4361-88e9-008582982fa5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2809beee503800f1dd70a75663659b693b691347d8add17273ab336adaa0e744,PodSandboxId:ec41c8844929b342246330ceeb88c6347e49a4575ab1340e9c1532bcd9e25510,Metadata:&ContainerMetadata{
Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727400655395610844,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-637447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cedb906f69aec2bb3c128b85b689a26f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:131af23616f00bd808d319a0f92ebcbb0b537fc920b2e85a33d2af038b9f1eb0,PodSandboxId:02721e7807ac8b992feedd63c52b540d9d16c168378d9bc31e23423665dcced6,Metadata:&ContainerMetadata{Name:e
tcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727400655369522372,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-637447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91b2807465a41e570416f163280bf216,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:881dc0c76f4dc893b190fb54b8ebf49aafa0e77cb69742cadb3c058808cf83c4,PodSandboxId:0c80c829a36b75f8a6ddfd35d5bec7a33249c9f4694a4219722780bab86f00cf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},I
mage:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727400655391740708,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-637447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ceb0ed31e7deee00ef96168b98e2448,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a74cc98a7a52df40f69641a14677fedfe3b0fdb415741e3aa90796f2afda43f5,PodSandboxId:541e3447b349205ef235f4180f6c9508754bf5458be0fb4fa3abfb73ac3e4d96,Metadata:&ContainerMetadata{Name:kube-apiserver,A
ttempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727400655266728548,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-637447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 202eb29c8750738be7bed8cd5a6aa979,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ef1aead8-4528-4a18-9917-28c91a7ffeb7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:31:36 kubernetes-upgrade-637447 crio[2374]: time="2024-09-27 01:31:36.338779433Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b371a7ed-b057-453e-9791-e1297292e17c name=/runtime.v1.RuntimeService/Version
	Sep 27 01:31:36 kubernetes-upgrade-637447 crio[2374]: time="2024-09-27 01:31:36.338902344Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b371a7ed-b057-453e-9791-e1297292e17c name=/runtime.v1.RuntimeService/Version
	Sep 27 01:31:36 kubernetes-upgrade-637447 crio[2374]: time="2024-09-27 01:31:36.340778279Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1f55238e-c7e6-4034-be7d-b5a0d19434e0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 01:31:36 kubernetes-upgrade-637447 crio[2374]: time="2024-09-27 01:31:36.341369939Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727400696341252199,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1f55238e-c7e6-4034-be7d-b5a0d19434e0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 01:31:36 kubernetes-upgrade-637447 crio[2374]: time="2024-09-27 01:31:36.342468470Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fd0b1cf7-861e-43a1-99a5-9098f082196f name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:31:36 kubernetes-upgrade-637447 crio[2374]: time="2024-09-27 01:31:36.342546486Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fd0b1cf7-861e-43a1-99a5-9098f082196f name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:31:36 kubernetes-upgrade-637447 crio[2374]: time="2024-09-27 01:31:36.342976185Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1e60aa9afff626eefb1c907e564197f234b17c1215df63c2004a0c00f0cea11c,PodSandboxId:53fbd9c0eb57848cfcfa3a6697339a445a96b480ea1fb7cc0861a625a5847d7f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727400693663005411,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-27b9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d01ebd1b-15ce-4260-9515-32939f083360,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ea31fe6ab4cb6801c7c22affb7351c5eb0047085bfda5c3e237a0d2565bcd1f,PodSandboxId:a2dbad1a9bd695d3dcf9f38f95fef81a3f7a13322c9cf24d4c493320adc0adc2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727400693682879428,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gkljs,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: f4bb3c0f-501d-4f0f-bed0-72919f1c1546,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8efe2964e70ea59f1dc41849f1f8354914d915f0ba5ba675b583298577ea5313,PodSandboxId:f131ba27a3d07f838a16616a5d1d84410a369a10b5b410a06cf7cc0b37473b91,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1727400693236433125,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd7cde7f-115b-4361-88e9-008582982fa5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c7d9e409275916f57d5a9b0afbc1fafdaadc056678dea3bc1dc037fa655e605,PodSandboxId:e53088da76c6bf1364b7f67d5dbf8c5406a055010ab646aeabd102b2e6a8bc92,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,C
reatedAt:1727400693110072558,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b9fq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97b59c48-c99f-4da8-b38f-6957f2ec7333,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0942facae961f5a6b56fef293f7637490051bddd297d56a4eccc2bab20c20b0b,PodSandboxId:b59ffdace288f78041129064fd03e4e221860b8fe8649199c712e796cbd5b4d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727400688423473233,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-637447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91b2807465a41e570416f163280bf216,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:509711d119d577ddeb6e8c40493077306431f2518c103cd5e82af0ab5a036e0b,PodSandboxId:89d6d7c00e3b0d16ffb23e1bbf40a0c9da4408e65fa5f91fe58575ad8e6af8f0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727400688403636106,Labels:map[string]string{io.kuberne
tes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-637447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cedb906f69aec2bb3c128b85b689a26f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:353edc48c512a71114f2b682b483c1eb12ce1e7ca211dbff19fc82c112b9db79,PodSandboxId:d902e40156f19c58472e67d9cdf6d2423669447a42c3003952b0c1687319bf8b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727400688309544885,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-637447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 202eb29c8750738be7bed8cd5a6aa979,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5f3c0335c54a2315cf1705ccff33b07065db37194347f6f9c169ffe32d94563,PodSandboxId:5ada53c2b696ee356c5d0f21d34d395a71e5cdfed784cbd45a2d7dbbbc81f20f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727400688248444934,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-637447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ceb0ed31e7deee00ef96168b98e2448,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba5f80b77ae4a4e17e242a9f51196abca719329f5e963240fe444d100d8f9b21,PodSandboxId:62c281488ca9e337481a9f28f9bac4d7e7a28d6fbbb2f4cc492cd27dd5f2ae13,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727400668249381650,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gkljs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4bb3c0f-501d-4f0f-bed0-72919f1c1546,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:156c3e69ca46985931a87742a85c44e956b04910fe2cda37172aa8627f235ca7,PodSandboxId:61f1f788b8a79627504db1b1ac80032edffedd3beb1838f071131437559e8456,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727400668227959364,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-27b9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d01ebd1b-15ce-4260-9515-32939f083360,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d3232eb333319bf8ae22a928f8ec0d18e212fed14751da8109d79907a76be90,PodSandboxId:11c30a74b559dcd2ff8d40bdfc4f864d43da4549e30bf8
4b948cd0ae5a9040dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727400666397102607,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b9fq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97b59c48-c99f-4da8-b38f-6957f2ec7333,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbab208ca5d34c116a3df62f7a6a92fdb1043d3bbfe4c5ddeee66d1fca944596,PodSandboxId:1aa2cbf9b1f47b76a2d2224ba55a021073394ccfe82da58370092e20fe294055,Metadata:&Conta
inerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727400665768431581,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd7cde7f-115b-4361-88e9-008582982fa5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2809beee503800f1dd70a75663659b693b691347d8add17273ab336adaa0e744,PodSandboxId:ec41c8844929b342246330ceeb88c6347e49a4575ab1340e9c1532bcd9e25510,Metadata:&ContainerMetadata{
Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727400655395610844,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-637447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cedb906f69aec2bb3c128b85b689a26f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:131af23616f00bd808d319a0f92ebcbb0b537fc920b2e85a33d2af038b9f1eb0,PodSandboxId:02721e7807ac8b992feedd63c52b540d9d16c168378d9bc31e23423665dcced6,Metadata:&ContainerMetadata{Name:e
tcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727400655369522372,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-637447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91b2807465a41e570416f163280bf216,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:881dc0c76f4dc893b190fb54b8ebf49aafa0e77cb69742cadb3c058808cf83c4,PodSandboxId:0c80c829a36b75f8a6ddfd35d5bec7a33249c9f4694a4219722780bab86f00cf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},I
mage:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727400655391740708,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-637447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ceb0ed31e7deee00ef96168b98e2448,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a74cc98a7a52df40f69641a14677fedfe3b0fdb415741e3aa90796f2afda43f5,PodSandboxId:541e3447b349205ef235f4180f6c9508754bf5458be0fb4fa3abfb73ac3e4d96,Metadata:&ContainerMetadata{Name:kube-apiserver,A
ttempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727400655266728548,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-637447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 202eb29c8750738be7bed8cd5a6aa979,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fd0b1cf7-861e-43a1-99a5-9098f082196f name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:31:36 kubernetes-upgrade-637447 crio[2374]: time="2024-09-27 01:31:36.399775641Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3a20a702-5ec5-483c-81c9-2685e84ecb0f name=/runtime.v1.RuntimeService/Version
	Sep 27 01:31:36 kubernetes-upgrade-637447 crio[2374]: time="2024-09-27 01:31:36.399885438Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3a20a702-5ec5-483c-81c9-2685e84ecb0f name=/runtime.v1.RuntimeService/Version
	Sep 27 01:31:36 kubernetes-upgrade-637447 crio[2374]: time="2024-09-27 01:31:36.401760741Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c23051a1-7746-44b9-be25-2f480d86db49 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 01:31:36 kubernetes-upgrade-637447 crio[2374]: time="2024-09-27 01:31:36.402430441Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727400696402391804,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c23051a1-7746-44b9-be25-2f480d86db49 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 01:31:36 kubernetes-upgrade-637447 crio[2374]: time="2024-09-27 01:31:36.403461123Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c7c760b7-5508-43d9-a247-7433a310e98e name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:31:36 kubernetes-upgrade-637447 crio[2374]: time="2024-09-27 01:31:36.403538327Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c7c760b7-5508-43d9-a247-7433a310e98e name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:31:36 kubernetes-upgrade-637447 crio[2374]: time="2024-09-27 01:31:36.403962311Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1e60aa9afff626eefb1c907e564197f234b17c1215df63c2004a0c00f0cea11c,PodSandboxId:53fbd9c0eb57848cfcfa3a6697339a445a96b480ea1fb7cc0861a625a5847d7f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727400693663005411,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-27b9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d01ebd1b-15ce-4260-9515-32939f083360,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ea31fe6ab4cb6801c7c22affb7351c5eb0047085bfda5c3e237a0d2565bcd1f,PodSandboxId:a2dbad1a9bd695d3dcf9f38f95fef81a3f7a13322c9cf24d4c493320adc0adc2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727400693682879428,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gkljs,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: f4bb3c0f-501d-4f0f-bed0-72919f1c1546,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8efe2964e70ea59f1dc41849f1f8354914d915f0ba5ba675b583298577ea5313,PodSandboxId:f131ba27a3d07f838a16616a5d1d84410a369a10b5b410a06cf7cc0b37473b91,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1727400693236433125,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd7cde7f-115b-4361-88e9-008582982fa5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c7d9e409275916f57d5a9b0afbc1fafdaadc056678dea3bc1dc037fa655e605,PodSandboxId:e53088da76c6bf1364b7f67d5dbf8c5406a055010ab646aeabd102b2e6a8bc92,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,C
reatedAt:1727400693110072558,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b9fq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97b59c48-c99f-4da8-b38f-6957f2ec7333,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0942facae961f5a6b56fef293f7637490051bddd297d56a4eccc2bab20c20b0b,PodSandboxId:b59ffdace288f78041129064fd03e4e221860b8fe8649199c712e796cbd5b4d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727400688423473233,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-637447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91b2807465a41e570416f163280bf216,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:509711d119d577ddeb6e8c40493077306431f2518c103cd5e82af0ab5a036e0b,PodSandboxId:89d6d7c00e3b0d16ffb23e1bbf40a0c9da4408e65fa5f91fe58575ad8e6af8f0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727400688403636106,Labels:map[string]string{io.kuberne
tes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-637447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cedb906f69aec2bb3c128b85b689a26f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:353edc48c512a71114f2b682b483c1eb12ce1e7ca211dbff19fc82c112b9db79,PodSandboxId:d902e40156f19c58472e67d9cdf6d2423669447a42c3003952b0c1687319bf8b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727400688309544885,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-637447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 202eb29c8750738be7bed8cd5a6aa979,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5f3c0335c54a2315cf1705ccff33b07065db37194347f6f9c169ffe32d94563,PodSandboxId:5ada53c2b696ee356c5d0f21d34d395a71e5cdfed784cbd45a2d7dbbbc81f20f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727400688248444934,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-637447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ceb0ed31e7deee00ef96168b98e2448,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba5f80b77ae4a4e17e242a9f51196abca719329f5e963240fe444d100d8f9b21,PodSandboxId:62c281488ca9e337481a9f28f9bac4d7e7a28d6fbbb2f4cc492cd27dd5f2ae13,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727400668249381650,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gkljs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4bb3c0f-501d-4f0f-bed0-72919f1c1546,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:156c3e69ca46985931a87742a85c44e956b04910fe2cda37172aa8627f235ca7,PodSandboxId:61f1f788b8a79627504db1b1ac80032edffedd3beb1838f071131437559e8456,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727400668227959364,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-27b9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d01ebd1b-15ce-4260-9515-32939f083360,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d3232eb333319bf8ae22a928f8ec0d18e212fed14751da8109d79907a76be90,PodSandboxId:11c30a74b559dcd2ff8d40bdfc4f864d43da4549e30bf8
4b948cd0ae5a9040dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727400666397102607,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b9fq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97b59c48-c99f-4da8-b38f-6957f2ec7333,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbab208ca5d34c116a3df62f7a6a92fdb1043d3bbfe4c5ddeee66d1fca944596,PodSandboxId:1aa2cbf9b1f47b76a2d2224ba55a021073394ccfe82da58370092e20fe294055,Metadata:&Conta
inerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727400665768431581,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd7cde7f-115b-4361-88e9-008582982fa5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2809beee503800f1dd70a75663659b693b691347d8add17273ab336adaa0e744,PodSandboxId:ec41c8844929b342246330ceeb88c6347e49a4575ab1340e9c1532bcd9e25510,Metadata:&ContainerMetadata{
Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727400655395610844,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-637447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cedb906f69aec2bb3c128b85b689a26f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:131af23616f00bd808d319a0f92ebcbb0b537fc920b2e85a33d2af038b9f1eb0,PodSandboxId:02721e7807ac8b992feedd63c52b540d9d16c168378d9bc31e23423665dcced6,Metadata:&ContainerMetadata{Name:e
tcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727400655369522372,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-637447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91b2807465a41e570416f163280bf216,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:881dc0c76f4dc893b190fb54b8ebf49aafa0e77cb69742cadb3c058808cf83c4,PodSandboxId:0c80c829a36b75f8a6ddfd35d5bec7a33249c9f4694a4219722780bab86f00cf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},I
mage:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727400655391740708,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-637447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ceb0ed31e7deee00ef96168b98e2448,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a74cc98a7a52df40f69641a14677fedfe3b0fdb415741e3aa90796f2afda43f5,PodSandboxId:541e3447b349205ef235f4180f6c9508754bf5458be0fb4fa3abfb73ac3e4d96,Metadata:&ContainerMetadata{Name:kube-apiserver,A
ttempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727400655266728548,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-637447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 202eb29c8750738be7bed8cd5a6aa979,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c7c760b7-5508-43d9-a247-7433a310e98e name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:31:36 kubernetes-upgrade-637447 crio[2374]: time="2024-09-27 01:31:36.454931079Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=413c9de6-3131-428b-aa70-d90b99fa559f name=/runtime.v1.RuntimeService/Version
	Sep 27 01:31:36 kubernetes-upgrade-637447 crio[2374]: time="2024-09-27 01:31:36.455031285Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=413c9de6-3131-428b-aa70-d90b99fa559f name=/runtime.v1.RuntimeService/Version
	Sep 27 01:31:36 kubernetes-upgrade-637447 crio[2374]: time="2024-09-27 01:31:36.456554743Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7c2174f6-0efa-4fbe-9307-8e66948a63b1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 01:31:36 kubernetes-upgrade-637447 crio[2374]: time="2024-09-27 01:31:36.457081707Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727400696457047792,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7c2174f6-0efa-4fbe-9307-8e66948a63b1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 01:31:36 kubernetes-upgrade-637447 crio[2374]: time="2024-09-27 01:31:36.458073309Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d0029642-ec05-4ba1-a203-483fbc32dcb7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:31:36 kubernetes-upgrade-637447 crio[2374]: time="2024-09-27 01:31:36.458154921Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d0029642-ec05-4ba1-a203-483fbc32dcb7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:31:36 kubernetes-upgrade-637447 crio[2374]: time="2024-09-27 01:31:36.458793640Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1e60aa9afff626eefb1c907e564197f234b17c1215df63c2004a0c00f0cea11c,PodSandboxId:53fbd9c0eb57848cfcfa3a6697339a445a96b480ea1fb7cc0861a625a5847d7f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727400693663005411,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-27b9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d01ebd1b-15ce-4260-9515-32939f083360,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ea31fe6ab4cb6801c7c22affb7351c5eb0047085bfda5c3e237a0d2565bcd1f,PodSandboxId:a2dbad1a9bd695d3dcf9f38f95fef81a3f7a13322c9cf24d4c493320adc0adc2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727400693682879428,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gkljs,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: f4bb3c0f-501d-4f0f-bed0-72919f1c1546,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8efe2964e70ea59f1dc41849f1f8354914d915f0ba5ba675b583298577ea5313,PodSandboxId:f131ba27a3d07f838a16616a5d1d84410a369a10b5b410a06cf7cc0b37473b91,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1727400693236433125,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd7cde7f-115b-4361-88e9-008582982fa5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c7d9e409275916f57d5a9b0afbc1fafdaadc056678dea3bc1dc037fa655e605,PodSandboxId:e53088da76c6bf1364b7f67d5dbf8c5406a055010ab646aeabd102b2e6a8bc92,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,C
reatedAt:1727400693110072558,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b9fq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97b59c48-c99f-4da8-b38f-6957f2ec7333,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0942facae961f5a6b56fef293f7637490051bddd297d56a4eccc2bab20c20b0b,PodSandboxId:b59ffdace288f78041129064fd03e4e221860b8fe8649199c712e796cbd5b4d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727400688423473233,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-637447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91b2807465a41e570416f163280bf216,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:509711d119d577ddeb6e8c40493077306431f2518c103cd5e82af0ab5a036e0b,PodSandboxId:89d6d7c00e3b0d16ffb23e1bbf40a0c9da4408e65fa5f91fe58575ad8e6af8f0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727400688403636106,Labels:map[string]string{io.kuberne
tes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-637447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cedb906f69aec2bb3c128b85b689a26f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:353edc48c512a71114f2b682b483c1eb12ce1e7ca211dbff19fc82c112b9db79,PodSandboxId:d902e40156f19c58472e67d9cdf6d2423669447a42c3003952b0c1687319bf8b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727400688309544885,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-637447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 202eb29c8750738be7bed8cd5a6aa979,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5f3c0335c54a2315cf1705ccff33b07065db37194347f6f9c169ffe32d94563,PodSandboxId:5ada53c2b696ee356c5d0f21d34d395a71e5cdfed784cbd45a2d7dbbbc81f20f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727400688248444934,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-637447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ceb0ed31e7deee00ef96168b98e2448,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba5f80b77ae4a4e17e242a9f51196abca719329f5e963240fe444d100d8f9b21,PodSandboxId:62c281488ca9e337481a9f28f9bac4d7e7a28d6fbbb2f4cc492cd27dd5f2ae13,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727400668249381650,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gkljs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4bb3c0f-501d-4f0f-bed0-72919f1c1546,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:156c3e69ca46985931a87742a85c44e956b04910fe2cda37172aa8627f235ca7,PodSandboxId:61f1f788b8a79627504db1b1ac80032edffedd3beb1838f071131437559e8456,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727400668227959364,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-27b9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d01ebd1b-15ce-4260-9515-32939f083360,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d3232eb333319bf8ae22a928f8ec0d18e212fed14751da8109d79907a76be90,PodSandboxId:11c30a74b559dcd2ff8d40bdfc4f864d43da4549e30bf8
4b948cd0ae5a9040dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727400666397102607,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b9fq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97b59c48-c99f-4da8-b38f-6957f2ec7333,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbab208ca5d34c116a3df62f7a6a92fdb1043d3bbfe4c5ddeee66d1fca944596,PodSandboxId:1aa2cbf9b1f47b76a2d2224ba55a021073394ccfe82da58370092e20fe294055,Metadata:&Conta
inerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727400665768431581,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd7cde7f-115b-4361-88e9-008582982fa5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2809beee503800f1dd70a75663659b693b691347d8add17273ab336adaa0e744,PodSandboxId:ec41c8844929b342246330ceeb88c6347e49a4575ab1340e9c1532bcd9e25510,Metadata:&ContainerMetadata{
Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727400655395610844,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-637447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cedb906f69aec2bb3c128b85b689a26f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:131af23616f00bd808d319a0f92ebcbb0b537fc920b2e85a33d2af038b9f1eb0,PodSandboxId:02721e7807ac8b992feedd63c52b540d9d16c168378d9bc31e23423665dcced6,Metadata:&ContainerMetadata{Name:e
tcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727400655369522372,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-637447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91b2807465a41e570416f163280bf216,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:881dc0c76f4dc893b190fb54b8ebf49aafa0e77cb69742cadb3c058808cf83c4,PodSandboxId:0c80c829a36b75f8a6ddfd35d5bec7a33249c9f4694a4219722780bab86f00cf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},I
mage:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727400655391740708,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-637447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ceb0ed31e7deee00ef96168b98e2448,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a74cc98a7a52df40f69641a14677fedfe3b0fdb415741e3aa90796f2afda43f5,PodSandboxId:541e3447b349205ef235f4180f6c9508754bf5458be0fb4fa3abfb73ac3e4d96,Metadata:&ContainerMetadata{Name:kube-apiserver,A
ttempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727400655266728548,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-637447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 202eb29c8750738be7bed8cd5a6aa979,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d0029642-ec05-4ba1-a203-483fbc32dcb7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2ea31fe6ab4cb       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   2 seconds ago       Running             coredns                   1                   a2dbad1a9bd69       coredns-7c65d6cfc9-gkljs
	1e60aa9afff62       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   2 seconds ago       Running             coredns                   1                   53fbd9c0eb578       coredns-7c65d6cfc9-27b9x
	8efe2964e70ea       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago       Running             storage-provisioner       1                   f131ba27a3d07       storage-provisioner
	0c7d9e4092759       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   3 seconds ago       Running             kube-proxy                1                   e53088da76c6b       kube-proxy-b9fq5
	0942facae961f       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   8 seconds ago       Running             etcd                      1                   b59ffdace288f       etcd-kubernetes-upgrade-637447
	509711d119d57       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   8 seconds ago       Running             kube-scheduler            1                   89d6d7c00e3b0       kube-scheduler-kubernetes-upgrade-637447
	353edc48c512a       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   8 seconds ago       Running             kube-apiserver            1                   d902e40156f19       kube-apiserver-kubernetes-upgrade-637447
	a5f3c0335c54a       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   8 seconds ago       Running             kube-controller-manager   1                   5ada53c2b696e       kube-controller-manager-kubernetes-upgrade-637447
	ba5f80b77ae4a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   28 seconds ago      Exited              coredns                   0                   62c281488ca9e       coredns-7c65d6cfc9-gkljs
	156c3e69ca469       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   28 seconds ago      Exited              coredns                   0                   61f1f788b8a79       coredns-7c65d6cfc9-27b9x
	9d3232eb33331       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   30 seconds ago      Exited              kube-proxy                0                   11c30a74b559d       kube-proxy-b9fq5
	dbab208ca5d34       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   30 seconds ago      Exited              storage-provisioner       0                   1aa2cbf9b1f47       storage-provisioner
	2809beee50380       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   41 seconds ago      Exited              kube-scheduler            0                   ec41c8844929b       kube-scheduler-kubernetes-upgrade-637447
	881dc0c76f4dc       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   41 seconds ago      Exited              kube-controller-manager   0                   0c80c829a36b7       kube-controller-manager-kubernetes-upgrade-637447
	131af23616f00       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   41 seconds ago      Exited              etcd                      0                   02721e7807ac8       etcd-kubernetes-upgrade-637447
	a74cc98a7a52d       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   41 seconds ago      Exited              kube-apiserver            0                   541e3447b3492       kube-apiserver-kubernetes-upgrade-637447
	
	
	==> coredns [156c3e69ca46985931a87742a85c44e956b04910fe2cda37172aa8627f235ca7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [1e60aa9afff626eefb1c907e564197f234b17c1215df63c2004a0c00f0cea11c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [2ea31fe6ab4cb6801c7c22affb7351c5eb0047085bfda5c3e237a0d2565bcd1f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [ba5f80b77ae4a4e17e242a9f51196abca719329f5e963240fe444d100d8f9b21] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-637447
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-637447
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 01:30:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-637447
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 01:31:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 01:31:31 +0000   Fri, 27 Sep 2024 01:30:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 01:31:31 +0000   Fri, 27 Sep 2024 01:30:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 01:31:31 +0000   Fri, 27 Sep 2024 01:30:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 01:31:31 +0000   Fri, 27 Sep 2024 01:31:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.182
	  Hostname:    kubernetes-upgrade-637447
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6bf54a896f8b4135af555b5e8aeae838
	  System UUID:                6bf54a89-6f8b-4135-af55-5b5e8aeae838
	  Boot ID:                    9956c2da-9334-4bf2-9dc8-3e18dde1e240
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-27b9x                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     30s
	  kube-system                 coredns-7c65d6cfc9-gkljs                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     30s
	  kube-system                 etcd-kubernetes-upgrade-637447                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         35s
	  kube-system                 kube-apiserver-kubernetes-upgrade-637447             250m (12%)    0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-637447    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-proxy-b9fq5                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-scheduler-kubernetes-upgrade-637447             100m (5%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 30s                kube-proxy       
	  Normal  Starting                 2s                 kube-proxy       
	  Normal  NodeAllocatableEnforced  42s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  42s (x8 over 42s)  kubelet          Node kubernetes-upgrade-637447 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    42s (x8 over 42s)  kubelet          Node kubernetes-upgrade-637447 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     42s (x7 over 42s)  kubelet          Node kubernetes-upgrade-637447 status is now: NodeHasSufficientPID
	  Normal  Starting                 42s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           31s                node-controller  Node kubernetes-upgrade-637447 event: Registered Node kubernetes-upgrade-637447 in Controller
	  Normal  Starting                 9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9s (x8 over 9s)    kubelet          Node kubernetes-upgrade-637447 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x8 over 9s)    kubelet          Node kubernetes-upgrade-637447 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x7 over 9s)    kubelet          Node kubernetes-upgrade-637447 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           1s                 node-controller  Node kubernetes-upgrade-637447 event: Registered Node kubernetes-upgrade-637447 in Controller
	
	
	==> dmesg <==
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.435072] systemd-fstab-generator[558]: Ignoring "noauto" option for root device
	[  +0.062352] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064168] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.205540] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.126667] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.296807] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +4.563736] systemd-fstab-generator[717]: Ignoring "noauto" option for root device
	[  +0.072028] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.277026] systemd-fstab-generator[849]: Ignoring "noauto" option for root device
	[Sep27 01:31] systemd-fstab-generator[1241]: Ignoring "noauto" option for root device
	[  +0.086962] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.508104] kauditd_printk_skb: 62 callbacks suppressed
	[ +10.363568] systemd-fstab-generator[2204]: Ignoring "noauto" option for root device
	[  +0.081884] kauditd_printk_skb: 37 callbacks suppressed
	[  +0.063606] systemd-fstab-generator[2216]: Ignoring "noauto" option for root device
	[  +0.184143] systemd-fstab-generator[2230]: Ignoring "noauto" option for root device
	[  +0.148298] systemd-fstab-generator[2242]: Ignoring "noauto" option for root device
	[  +0.425614] systemd-fstab-generator[2331]: Ignoring "noauto" option for root device
	[  +6.285134] systemd-fstab-generator[2457]: Ignoring "noauto" option for root device
	[  +0.067857] kauditd_printk_skb: 117 callbacks suppressed
	[  +2.206323] systemd-fstab-generator[2578]: Ignoring "noauto" option for root device
	[  +5.611679] kauditd_printk_skb: 74 callbacks suppressed
	[  +1.532349] systemd-fstab-generator[3448]: Ignoring "noauto" option for root device
	
	
	==> etcd [0942facae961f5a6b56fef293f7637490051bddd297d56a4eccc2bab20c20b0b] <==
	{"level":"info","ts":"2024-09-27T01:31:28.875388Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba72368f65a77be1 switched to configuration voters=(13434860627913309153)"}
	{"level":"info","ts":"2024-09-27T01:31:28.875660Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"121410669cdaaf0c","local-member-id":"ba72368f65a77be1","added-peer-id":"ba72368f65a77be1","added-peer-peer-urls":["https://192.168.50.182:2380"]}
	{"level":"info","ts":"2024-09-27T01:31:28.876002Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"121410669cdaaf0c","local-member-id":"ba72368f65a77be1","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T01:31:28.878395Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T01:31:28.892889Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-27T01:31:28.893622Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.182:2380"}
	{"level":"info","ts":"2024-09-27T01:31:28.897362Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.182:2380"}
	{"level":"info","ts":"2024-09-27T01:31:28.900505Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"ba72368f65a77be1","initial-advertise-peer-urls":["https://192.168.50.182:2380"],"listen-peer-urls":["https://192.168.50.182:2380"],"advertise-client-urls":["https://192.168.50.182:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.182:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-27T01:31:28.900574Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-27T01:31:30.207649Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba72368f65a77be1 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-27T01:31:30.207756Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba72368f65a77be1 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-27T01:31:30.207792Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba72368f65a77be1 received MsgPreVoteResp from ba72368f65a77be1 at term 2"}
	{"level":"info","ts":"2024-09-27T01:31:30.207822Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba72368f65a77be1 became candidate at term 3"}
	{"level":"info","ts":"2024-09-27T01:31:30.207846Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba72368f65a77be1 received MsgVoteResp from ba72368f65a77be1 at term 3"}
	{"level":"info","ts":"2024-09-27T01:31:30.207874Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba72368f65a77be1 became leader at term 3"}
	{"level":"info","ts":"2024-09-27T01:31:30.207899Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ba72368f65a77be1 elected leader ba72368f65a77be1 at term 3"}
	{"level":"info","ts":"2024-09-27T01:31:30.213635Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"ba72368f65a77be1","local-member-attributes":"{Name:kubernetes-upgrade-637447 ClientURLs:[https://192.168.50.182:2379]}","request-path":"/0/members/ba72368f65a77be1/attributes","cluster-id":"121410669cdaaf0c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-27T01:31:30.213895Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-27T01:31:30.214420Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-27T01:31:30.214550Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-27T01:31:30.214593Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-27T01:31:30.215205Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-27T01:31:30.215966Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-27T01:31:30.216236Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-27T01:31:30.216904Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.182:2379"}
	
	
	==> etcd [131af23616f00bd808d319a0f92ebcbb0b537fc920b2e85a33d2af038b9f1eb0] <==
	{"level":"info","ts":"2024-09-27T01:30:55.807685Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba72368f65a77be1 became leader at term 2"}
	{"level":"info","ts":"2024-09-27T01:30:55.807717Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ba72368f65a77be1 elected leader ba72368f65a77be1 at term 2"}
	{"level":"info","ts":"2024-09-27T01:30:55.812522Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T01:30:55.816568Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"ba72368f65a77be1","local-member-attributes":"{Name:kubernetes-upgrade-637447 ClientURLs:[https://192.168.50.182:2379]}","request-path":"/0/members/ba72368f65a77be1/attributes","cluster-id":"121410669cdaaf0c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-27T01:30:55.816714Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-27T01:30:55.817025Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-27T01:30:55.816508Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"121410669cdaaf0c","local-member-id":"ba72368f65a77be1","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T01:30:55.819373Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T01:30:55.819429Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T01:30:55.819997Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-27T01:30:55.820752Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-27T01:30:55.823589Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-27T01:30:55.823634Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-27T01:30:55.824086Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-27T01:30:55.835238Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.182:2379"}
	{"level":"info","ts":"2024-09-27T01:31:11.246519Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-27T01:31:11.246590Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"kubernetes-upgrade-637447","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.182:2380"],"advertise-client-urls":["https://192.168.50.182:2379"]}
	{"level":"warn","ts":"2024-09-27T01:31:11.246674Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-27T01:31:11.246770Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-27T01:31:11.308682Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.182:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-27T01:31:11.308759Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.182:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-27T01:31:11.308821Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ba72368f65a77be1","current-leader-member-id":"ba72368f65a77be1"}
	{"level":"info","ts":"2024-09-27T01:31:11.354857Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.50.182:2380"}
	{"level":"info","ts":"2024-09-27T01:31:11.355792Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.50.182:2380"}
	{"level":"info","ts":"2024-09-27T01:31:11.355843Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"kubernetes-upgrade-637447","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.182:2380"],"advertise-client-urls":["https://192.168.50.182:2379"]}
	
	
	==> kernel <==
	 01:31:36 up 1 min,  0 users,  load average: 2.97, 0.75, 0.25
	Linux kubernetes-upgrade-637447 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [353edc48c512a71114f2b682b483c1eb12ce1e7ca211dbff19fc82c112b9db79] <==
	I0927 01:31:31.724893       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0927 01:31:31.724949       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0927 01:31:31.726958       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0927 01:31:31.750806       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0927 01:31:31.758179       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0927 01:31:31.758432       1 shared_informer.go:320] Caches are synced for configmaps
	I0927 01:31:31.758496       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0927 01:31:31.763384       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0927 01:31:31.763504       1 aggregator.go:171] initial CRD sync complete...
	I0927 01:31:31.763530       1 autoregister_controller.go:144] Starting autoregister controller
	I0927 01:31:31.763584       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0927 01:31:31.763609       1 cache.go:39] Caches are synced for autoregister controller
	E0927 01:31:31.771515       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0927 01:31:31.793432       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0927 01:31:31.799774       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0927 01:31:31.799840       1 policy_source.go:224] refreshing policies
	I0927 01:31:31.853838       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0927 01:31:32.631924       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0927 01:31:33.533096       1 controller.go:615] quota admission added evaluator for: endpoints
	I0927 01:31:34.239318       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0927 01:31:34.257553       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0927 01:31:34.300459       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0927 01:31:34.332567       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0927 01:31:34.341896       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0927 01:31:35.130164       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [a74cc98a7a52df40f69641a14677fedfe3b0fdb415741e3aa90796f2afda43f5] <==
	W0927 01:31:11.304014       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:31:11.304096       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:31:11.304195       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:31:11.304363       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:31:11.304457       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:31:11.304537       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:31:11.304617       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:31:11.304689       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:31:11.302886       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:31:11.303363       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:31:11.303473       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:31:11.304376       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:31:11.305101       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:31:11.303153       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:31:11.305350       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:31:11.305488       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:31:11.305644       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:31:11.305731       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:31:11.305873       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:31:11.305977       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:31:11.306063       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:31:11.306249       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:31:11.306470       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:31:11.305200       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:31:11.305493       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [881dc0c76f4dc893b190fb54b8ebf49aafa0e77cb69742cadb3c058808cf83c4] <==
	I0927 01:31:05.350440       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-637447"
	I0927 01:31:05.352054       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0927 01:31:05.377161       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0927 01:31:05.377217       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0927 01:31:05.377235       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0927 01:31:05.377821       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0927 01:31:05.389458       1 shared_informer.go:320] Caches are synced for resource quota
	I0927 01:31:05.400666       1 shared_informer.go:320] Caches are synced for disruption
	I0927 01:31:05.432205       1 shared_informer.go:320] Caches are synced for deployment
	I0927 01:31:05.432866       1 shared_informer.go:320] Caches are synced for resource quota
	I0927 01:31:05.739246       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-637447"
	I0927 01:31:05.859495       1 shared_informer.go:320] Caches are synced for garbage collector
	I0927 01:31:05.877384       1 shared_informer.go:320] Caches are synced for garbage collector
	I0927 01:31:05.877463       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0927 01:31:06.168872       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="134.108703ms"
	I0927 01:31:06.178528       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="9.558407ms"
	I0927 01:31:06.179481       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="43.636µs"
	I0927 01:31:06.181746       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="75.64µs"
	I0927 01:31:06.192183       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="27.358µs"
	I0927 01:31:08.580727       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="54.509µs"
	I0927 01:31:08.618201       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="12.278407ms"
	I0927 01:31:08.618349       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="104.094µs"
	I0927 01:31:08.645041       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="17.780214ms"
	I0927 01:31:08.645131       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="44.911µs"
	I0927 01:31:08.944745       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-637447"
	
	
	==> kube-controller-manager [a5f3c0335c54a2315cf1705ccff33b07065db37194347f6f9c169ffe32d94563] <==
	I0927 01:31:35.127584       1 shared_informer.go:320] Caches are synced for job
	I0927 01:31:35.127612       1 shared_informer.go:320] Caches are synced for TTL
	I0927 01:31:35.127767       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0927 01:31:35.130212       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0927 01:31:35.136489       1 shared_informer.go:320] Caches are synced for endpoint
	I0927 01:31:35.136605       1 shared_informer.go:320] Caches are synced for cronjob
	I0927 01:31:35.140847       1 shared_informer.go:320] Caches are synced for crt configmap
	I0927 01:31:35.147450       1 shared_informer.go:320] Caches are synced for attach detach
	I0927 01:31:35.147632       1 shared_informer.go:320] Caches are synced for taint
	I0927 01:31:35.147686       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0927 01:31:35.147737       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-637447"
	I0927 01:31:35.147762       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0927 01:31:35.153434       1 shared_informer.go:320] Caches are synced for persistent volume
	I0927 01:31:35.158937       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0927 01:31:35.187102       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0927 01:31:35.348011       1 shared_informer.go:320] Caches are synced for resource quota
	I0927 01:31:35.365513       1 shared_informer.go:320] Caches are synced for resource quota
	I0927 01:31:35.365587       1 shared_informer.go:320] Caches are synced for HPA
	I0927 01:31:35.498774       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="370.923177ms"
	I0927 01:31:35.499087       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="103.617µs"
	I0927 01:31:35.772258       1 shared_informer.go:320] Caches are synced for garbage collector
	I0927 01:31:35.779496       1 shared_informer.go:320] Caches are synced for garbage collector
	I0927 01:31:35.779540       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0927 01:31:35.963264       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="16.133045ms"
	I0927 01:31:35.965138       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="105.443µs"
	
	
	==> kube-proxy [0c7d9e409275916f57d5a9b0afbc1fafdaadc056678dea3bc1dc037fa655e605] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0927 01:31:33.771773       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0927 01:31:33.825951       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.182"]
	E0927 01:31:33.826009       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0927 01:31:33.988511       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0927 01:31:33.988571       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0927 01:31:33.988598       1 server_linux.go:169] "Using iptables Proxier"
	I0927 01:31:34.003803       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0927 01:31:34.005674       1 server.go:483] "Version info" version="v1.31.1"
	I0927 01:31:34.005689       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 01:31:34.010858       1 config.go:105] "Starting endpoint slice config controller"
	I0927 01:31:34.010861       1 config.go:199] "Starting service config controller"
	I0927 01:31:34.010901       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0927 01:31:34.010900       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0927 01:31:34.011675       1 config.go:328] "Starting node config controller"
	I0927 01:31:34.011681       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0927 01:31:34.112039       1 shared_informer.go:320] Caches are synced for node config
	I0927 01:31:34.112085       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0927 01:31:34.112105       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [9d3232eb333319bf8ae22a928f8ec0d18e212fed14751da8109d79907a76be90] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0927 01:31:06.609725       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0927 01:31:06.620761       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.182"]
	E0927 01:31:06.620886       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0927 01:31:06.659660       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0927 01:31:06.659696       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0927 01:31:06.659759       1 server_linux.go:169] "Using iptables Proxier"
	I0927 01:31:06.662328       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0927 01:31:06.662702       1 server.go:483] "Version info" version="v1.31.1"
	I0927 01:31:06.662714       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 01:31:06.664449       1 config.go:199] "Starting service config controller"
	I0927 01:31:06.664500       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0927 01:31:06.664550       1 config.go:105] "Starting endpoint slice config controller"
	I0927 01:31:06.664567       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0927 01:31:06.665016       1 config.go:328] "Starting node config controller"
	I0927 01:31:06.666837       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0927 01:31:06.765135       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0927 01:31:06.765165       1 shared_informer.go:320] Caches are synced for service config
	I0927 01:31:06.767004       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2809beee503800f1dd70a75663659b693b691347d8add17273ab336adaa0e744] <==
	W0927 01:30:58.582369       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0927 01:30:58.582534       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0927 01:30:58.583093       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0927 01:30:58.585397       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 01:30:58.585864       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0927 01:30:58.586044       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 01:30:59.415640       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0927 01:30:59.415855       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 01:30:59.631657       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0927 01:30:59.631797       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 01:30:59.675603       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0927 01:30:59.675784       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0927 01:30:59.737947       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0927 01:30:59.737990       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 01:30:59.800506       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0927 01:30:59.800634       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0927 01:30:59.837369       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0927 01:30:59.837512       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 01:30:59.872108       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0927 01:30:59.872247       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0927 01:31:02.035978       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0927 01:31:11.268726       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0927 01:31:11.270191       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0927 01:31:11.270322       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0927 01:31:11.276664       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [509711d119d577ddeb6e8c40493077306431f2518c103cd5e82af0ab5a036e0b] <==
	I0927 01:31:29.402435       1 serving.go:386] Generated self-signed cert in-memory
	W0927 01:31:31.669756       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0927 01:31:31.669861       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0927 01:31:31.669887       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0927 01:31:31.669900       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0927 01:31:31.768265       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0927 01:31:31.768361       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 01:31:31.772887       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0927 01:31:31.773040       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0927 01:31:31.773087       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0927 01:31:31.773105       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0927 01:31:31.874577       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 27 01:31:27 kubernetes-upgrade-637447 kubelet[2585]: I0927 01:31:27.934785    2585 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-637447"
	Sep 27 01:31:27 kubernetes-upgrade-637447 kubelet[2585]: E0927 01:31:27.935860    2585 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.182:8443: connect: connection refused" node="kubernetes-upgrade-637447"
	Sep 27 01:31:28 kubernetes-upgrade-637447 kubelet[2585]: E0927 01:31:28.162069    2585 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-637447?timeout=10s\": dial tcp 192.168.50.182:8443: connect: connection refused" interval="800ms"
	Sep 27 01:31:28 kubernetes-upgrade-637447 kubelet[2585]: I0927 01:31:28.341084    2585 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-637447"
	Sep 27 01:31:28 kubernetes-upgrade-637447 kubelet[2585]: E0927 01:31:28.346066    2585 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.182:8443: connect: connection refused" node="kubernetes-upgrade-637447"
	Sep 27 01:31:28 kubernetes-upgrade-637447 kubelet[2585]: W0927 01:31:28.410728    2585 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes-upgrade-637447&limit=500&resourceVersion=0": dial tcp 192.168.50.182:8443: connect: connection refused
	Sep 27 01:31:28 kubernetes-upgrade-637447 kubelet[2585]: E0927 01:31:28.410827    2585 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes-upgrade-637447&limit=500&resourceVersion=0\": dial tcp 192.168.50.182:8443: connect: connection refused" logger="UnhandledError"
	Sep 27 01:31:28 kubernetes-upgrade-637447 kubelet[2585]: W0927 01:31:28.684890    2585 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.50.182:8443: connect: connection refused
	Sep 27 01:31:28 kubernetes-upgrade-637447 kubelet[2585]: E0927 01:31:28.684972    2585 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.50.182:8443: connect: connection refused" logger="UnhandledError"
	Sep 27 01:31:28 kubernetes-upgrade-637447 kubelet[2585]: W0927 01:31:28.700457    2585 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.50.182:8443: connect: connection refused
	Sep 27 01:31:28 kubernetes-upgrade-637447 kubelet[2585]: E0927 01:31:28.700540    2585 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.50.182:8443: connect: connection refused" logger="UnhandledError"
	Sep 27 01:31:28 kubernetes-upgrade-637447 kubelet[2585]: W0927 01:31:28.771031    2585 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.50.182:8443: connect: connection refused
	Sep 27 01:31:28 kubernetes-upgrade-637447 kubelet[2585]: E0927 01:31:28.771153    2585 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.50.182:8443: connect: connection refused" logger="UnhandledError"
	Sep 27 01:31:29 kubernetes-upgrade-637447 kubelet[2585]: I0927 01:31:29.147593    2585 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-637447"
	Sep 27 01:31:31 kubernetes-upgrade-637447 kubelet[2585]: I0927 01:31:31.861117    2585 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-637447"
	Sep 27 01:31:31 kubernetes-upgrade-637447 kubelet[2585]: I0927 01:31:31.861502    2585 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-637447"
	Sep 27 01:31:31 kubernetes-upgrade-637447 kubelet[2585]: I0927 01:31:31.861618    2585 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 27 01:31:31 kubernetes-upgrade-637447 kubelet[2585]: I0927 01:31:31.863212    2585 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 27 01:31:32 kubernetes-upgrade-637447 kubelet[2585]: I0927 01:31:32.538797    2585 apiserver.go:52] "Watching apiserver"
	Sep 27 01:31:32 kubernetes-upgrade-637447 kubelet[2585]: I0927 01:31:32.552344    2585 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 27 01:31:32 kubernetes-upgrade-637447 kubelet[2585]: I0927 01:31:32.560683    2585 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/97b59c48-c99f-4da8-b38f-6957f2ec7333-xtables-lock\") pod \"kube-proxy-b9fq5\" (UID: \"97b59c48-c99f-4da8-b38f-6957f2ec7333\") " pod="kube-system/kube-proxy-b9fq5"
	Sep 27 01:31:32 kubernetes-upgrade-637447 kubelet[2585]: I0927 01:31:32.560806    2585 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/97b59c48-c99f-4da8-b38f-6957f2ec7333-lib-modules\") pod \"kube-proxy-b9fq5\" (UID: \"97b59c48-c99f-4da8-b38f-6957f2ec7333\") " pod="kube-system/kube-proxy-b9fq5"
	Sep 27 01:31:32 kubernetes-upgrade-637447 kubelet[2585]: I0927 01:31:32.560932    2585 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/cd7cde7f-115b-4361-88e9-008582982fa5-tmp\") pod \"storage-provisioner\" (UID: \"cd7cde7f-115b-4361-88e9-008582982fa5\") " pod="kube-system/storage-provisioner"
	Sep 27 01:31:35 kubernetes-upgrade-637447 kubelet[2585]: I0927 01:31:35.846104    2585 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 27 01:31:35 kubernetes-upgrade-637447 kubelet[2585]: I0927 01:31:35.846615    2585 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	
	
	==> storage-provisioner [8efe2964e70ea59f1dc41849f1f8354914d915f0ba5ba675b583298577ea5313] <==
	I0927 01:31:33.485421       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0927 01:31:33.524843       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0927 01:31:33.524981       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0927 01:31:33.588819       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0927 01:31:33.589105       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-637447_d3223152-39eb-44cb-8229-57f63229e801!
	I0927 01:31:33.589526       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c92d7af0-bd49-4aa8-860b-541ca49459db", APIVersion:"v1", ResourceVersion:"422", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-637447_d3223152-39eb-44cb-8229-57f63229e801 became leader
	I0927 01:31:33.691486       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-637447_d3223152-39eb-44cb-8229-57f63229e801!
	
	
	==> storage-provisioner [dbab208ca5d34c116a3df62f7a6a92fdb1043d3bbfe4c5ddeee66d1fca944596] <==
	I0927 01:31:05.854425       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-637447 -n kubernetes-upgrade-637447
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-637447 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-637447" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-637447
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-637447: (1.103375198s)
--- FAIL: TestKubernetesUpgrade (372.25s)
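
For reference, the post-mortem steps the harness ran above can be repeated by hand against a lingering profile. This is a minimal shell sketch using only the commands already captured in this report (helpers_test.go:254, :261 and :178); the profile name is the one from this run and is only an example:

	# Post-mortem checks mirroring helpers_test.go:254 and helpers_test.go:261 above
	PROFILE=kubernetes-upgrade-637447
	out/minikube-linux-amd64 status --format='{{.APIServer}}' -p "$PROFILE" -n "$PROFILE"
	kubectl --context "$PROFILE" get po -o=jsonpath='{.items[*].metadata.name}' -A --field-selector=status.phase!=Running
	# Cleanup mirroring helpers_test.go:178
	out/minikube-linux-amd64 delete -p "$PROFILE"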

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (285.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-612261 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-612261 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m44.926057899s)

                                                
                                                
-- stdout --
	* [old-k8s-version-612261] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19711
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19711-14935/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-14935/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-612261" primary control-plane node in "old-k8s-version-612261" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0927 01:31:03.782059   64877 out.go:345] Setting OutFile to fd 1 ...
	I0927 01:31:03.782311   64877 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 01:31:03.782320   64877 out.go:358] Setting ErrFile to fd 2...
	I0927 01:31:03.782326   64877 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 01:31:03.782497   64877 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-14935/.minikube/bin
	I0927 01:31:03.783067   64877 out.go:352] Setting JSON to false
	I0927 01:31:03.783969   64877 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8009,"bootTime":1727392655,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 01:31:03.784102   64877 start.go:139] virtualization: kvm guest
	I0927 01:31:03.786007   64877 out.go:177] * [old-k8s-version-612261] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0927 01:31:03.787250   64877 notify.go:220] Checking for updates...
	I0927 01:31:03.787275   64877 out.go:177]   - MINIKUBE_LOCATION=19711
	I0927 01:31:03.788530   64877 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 01:31:03.789636   64877 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 01:31:03.790851   64877 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-14935/.minikube
	I0927 01:31:03.791993   64877 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0927 01:31:03.793124   64877 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 01:31:03.794732   64877 config.go:182] Loaded profile config "NoKubernetes-719096": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 01:31:03.794861   64877 config.go:182] Loaded profile config "cert-expiration-595331": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 01:31:03.794987   64877 config.go:182] Loaded profile config "kubernetes-upgrade-637447": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 01:31:03.795095   64877 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 01:31:03.828033   64877 out.go:177] * Using the kvm2 driver based on user configuration
	I0927 01:31:03.829369   64877 start.go:297] selected driver: kvm2
	I0927 01:31:03.829381   64877 start.go:901] validating driver "kvm2" against <nil>
	I0927 01:31:03.829391   64877 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 01:31:03.830051   64877 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 01:31:03.830130   64877 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19711-14935/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0927 01:31:03.845810   64877 install.go:137] /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I0927 01:31:03.845864   64877 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 01:31:03.846093   64877 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 01:31:03.846124   64877 cni.go:84] Creating CNI manager for ""
	I0927 01:31:03.846160   64877 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:31:03.846168   64877 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0927 01:31:03.846212   64877 start.go:340] cluster config:
	{Name:old-k8s-version-612261 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-612261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 01:31:03.846316   64877 iso.go:125] acquiring lock: {Name:mkc202a14fbe20838e31e7efc444c4f65351f9ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 01:31:03.847949   64877 out.go:177] * Starting "old-k8s-version-612261" primary control-plane node in "old-k8s-version-612261" cluster
	I0927 01:31:03.849146   64877 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0927 01:31:03.849180   64877 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0927 01:31:03.849189   64877 cache.go:56] Caching tarball of preloaded images
	I0927 01:31:03.849251   64877 preload.go:172] Found /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0927 01:31:03.849260   64877 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0927 01:31:03.849342   64877 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/config.json ...
	I0927 01:31:03.849358   64877 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/config.json: {Name:mk177889279167f886e19dbcc4f3f884ae14b59e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:31:03.849474   64877 start.go:360] acquireMachinesLock for old-k8s-version-612261: {Name:mkef866a3f2eb57063758ef06140cca3a2e40b18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 01:31:17.420327   64877 start.go:364] duration metric: took 13.570807321s to acquireMachinesLock for "old-k8s-version-612261"
	I0927 01:31:17.420416   64877 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-612261 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-612261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 01:31:17.420529   64877 start.go:125] createHost starting for "" (driver="kvm2")
	I0927 01:31:17.584121   64877 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0927 01:31:17.584345   64877 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:31:17.584396   64877 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:31:17.599697   64877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41353
	I0927 01:31:17.600102   64877 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:31:17.600653   64877 main.go:141] libmachine: Using API Version  1
	I0927 01:31:17.600680   64877 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:31:17.601039   64877 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:31:17.601249   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetMachineName
	I0927 01:31:17.601402   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	I0927 01:31:17.601572   64877 start.go:159] libmachine.API.Create for "old-k8s-version-612261" (driver="kvm2")
	I0927 01:31:17.601607   64877 client.go:168] LocalClient.Create starting
	I0927 01:31:17.601642   64877 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem
	I0927 01:31:17.601690   64877 main.go:141] libmachine: Decoding PEM data...
	I0927 01:31:17.601715   64877 main.go:141] libmachine: Parsing certificate...
	I0927 01:31:17.601803   64877 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem
	I0927 01:31:17.601832   64877 main.go:141] libmachine: Decoding PEM data...
	I0927 01:31:17.601850   64877 main.go:141] libmachine: Parsing certificate...
	I0927 01:31:17.601877   64877 main.go:141] libmachine: Running pre-create checks...
	I0927 01:31:17.601906   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .PreCreateCheck
	I0927 01:31:17.602299   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetConfigRaw
	I0927 01:31:17.602724   64877 main.go:141] libmachine: Creating machine...
	I0927 01:31:17.602739   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .Create
	I0927 01:31:17.602861   64877 main.go:141] libmachine: (old-k8s-version-612261) Creating KVM machine...
	I0927 01:31:17.604138   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | found existing default KVM network
	I0927 01:31:17.605761   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:31:17.605607   65054 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:74:66:6a} reservation:<nil>}
	I0927 01:31:17.606732   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:31:17.606649   65054 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:02:a0:8f} reservation:<nil>}
	I0927 01:31:17.607636   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:31:17.607568   65054 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:76:b1:b5} reservation:<nil>}
	I0927 01:31:17.608827   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:31:17.608748   65054 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00028d9b0}
	I0927 01:31:17.608843   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | created network xml: 
	I0927 01:31:17.608854   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | <network>
	I0927 01:31:17.608863   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG |   <name>mk-old-k8s-version-612261</name>
	I0927 01:31:17.608872   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG |   <dns enable='no'/>
	I0927 01:31:17.608879   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG |   
	I0927 01:31:17.608889   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0927 01:31:17.608898   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG |     <dhcp>
	I0927 01:31:17.608910   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0927 01:31:17.608935   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG |     </dhcp>
	I0927 01:31:17.608944   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG |   </ip>
	I0927 01:31:17.608953   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG |   
	I0927 01:31:17.608975   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | </network>
	I0927 01:31:17.608985   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | 
	I0927 01:31:17.694183   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | trying to create private KVM network mk-old-k8s-version-612261 192.168.72.0/24...
	I0927 01:31:17.775280   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | private KVM network mk-old-k8s-version-612261 192.168.72.0/24 created
	I0927 01:31:17.775398   64877 main.go:141] libmachine: (old-k8s-version-612261) Setting up store path in /home/jenkins/minikube-integration/19711-14935/.minikube/machines/old-k8s-version-612261 ...
	I0927 01:31:17.775416   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:31:17.775255   65054 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19711-14935/.minikube
	I0927 01:31:17.775476   64877 main.go:141] libmachine: (old-k8s-version-612261) Building disk image from file:///home/jenkins/minikube-integration/19711-14935/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0927 01:31:17.775507   64877 main.go:141] libmachine: (old-k8s-version-612261) Downloading /home/jenkins/minikube-integration/19711-14935/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19711-14935/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0927 01:31:18.018451   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:31:18.018318   65054 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/old-k8s-version-612261/id_rsa...
	I0927 01:31:18.194282   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:31:18.194122   65054 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/old-k8s-version-612261/old-k8s-version-612261.rawdisk...
	I0927 01:31:18.194320   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | Writing magic tar header
	I0927 01:31:18.194343   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | Writing SSH key tar header
	I0927 01:31:18.194355   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:31:18.194306   65054 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19711-14935/.minikube/machines/old-k8s-version-612261 ...
	I0927 01:31:18.194490   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/old-k8s-version-612261
	I0927 01:31:18.194524   64877 main.go:141] libmachine: (old-k8s-version-612261) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935/.minikube/machines/old-k8s-version-612261 (perms=drwx------)
	I0927 01:31:18.194537   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935/.minikube/machines
	I0927 01:31:18.194552   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935/.minikube
	I0927 01:31:18.194560   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935
	I0927 01:31:18.194570   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0927 01:31:18.194577   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | Checking permissions on dir: /home/jenkins
	I0927 01:31:18.194588   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | Checking permissions on dir: /home
	I0927 01:31:18.194595   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | Skipping /home - not owner
	I0927 01:31:18.194634   64877 main.go:141] libmachine: (old-k8s-version-612261) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935/.minikube/machines (perms=drwxr-xr-x)
	I0927 01:31:18.194664   64877 main.go:141] libmachine: (old-k8s-version-612261) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935/.minikube (perms=drwxr-xr-x)
	I0927 01:31:18.194680   64877 main.go:141] libmachine: (old-k8s-version-612261) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935 (perms=drwxrwxr-x)
	I0927 01:31:18.194696   64877 main.go:141] libmachine: (old-k8s-version-612261) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0927 01:31:18.194710   64877 main.go:141] libmachine: (old-k8s-version-612261) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0927 01:31:18.194726   64877 main.go:141] libmachine: (old-k8s-version-612261) Creating domain...
	I0927 01:31:18.196003   64877 main.go:141] libmachine: (old-k8s-version-612261) define libvirt domain using xml: 
	I0927 01:31:18.196027   64877 main.go:141] libmachine: (old-k8s-version-612261) <domain type='kvm'>
	I0927 01:31:18.196048   64877 main.go:141] libmachine: (old-k8s-version-612261)   <name>old-k8s-version-612261</name>
	I0927 01:31:18.196063   64877 main.go:141] libmachine: (old-k8s-version-612261)   <memory unit='MiB'>2200</memory>
	I0927 01:31:18.196076   64877 main.go:141] libmachine: (old-k8s-version-612261)   <vcpu>2</vcpu>
	I0927 01:31:18.196085   64877 main.go:141] libmachine: (old-k8s-version-612261)   <features>
	I0927 01:31:18.196099   64877 main.go:141] libmachine: (old-k8s-version-612261)     <acpi/>
	I0927 01:31:18.196109   64877 main.go:141] libmachine: (old-k8s-version-612261)     <apic/>
	I0927 01:31:18.196117   64877 main.go:141] libmachine: (old-k8s-version-612261)     <pae/>
	I0927 01:31:18.196123   64877 main.go:141] libmachine: (old-k8s-version-612261)     
	I0927 01:31:18.196136   64877 main.go:141] libmachine: (old-k8s-version-612261)   </features>
	I0927 01:31:18.196143   64877 main.go:141] libmachine: (old-k8s-version-612261)   <cpu mode='host-passthrough'>
	I0927 01:31:18.196155   64877 main.go:141] libmachine: (old-k8s-version-612261)   
	I0927 01:31:18.196164   64877 main.go:141] libmachine: (old-k8s-version-612261)   </cpu>
	I0927 01:31:18.196173   64877 main.go:141] libmachine: (old-k8s-version-612261)   <os>
	I0927 01:31:18.196183   64877 main.go:141] libmachine: (old-k8s-version-612261)     <type>hvm</type>
	I0927 01:31:18.196195   64877 main.go:141] libmachine: (old-k8s-version-612261)     <boot dev='cdrom'/>
	I0927 01:31:18.196205   64877 main.go:141] libmachine: (old-k8s-version-612261)     <boot dev='hd'/>
	I0927 01:31:18.196213   64877 main.go:141] libmachine: (old-k8s-version-612261)     <bootmenu enable='no'/>
	I0927 01:31:18.196221   64877 main.go:141] libmachine: (old-k8s-version-612261)   </os>
	I0927 01:31:18.196228   64877 main.go:141] libmachine: (old-k8s-version-612261)   <devices>
	I0927 01:31:18.196240   64877 main.go:141] libmachine: (old-k8s-version-612261)     <disk type='file' device='cdrom'>
	I0927 01:31:18.196253   64877 main.go:141] libmachine: (old-k8s-version-612261)       <source file='/home/jenkins/minikube-integration/19711-14935/.minikube/machines/old-k8s-version-612261/boot2docker.iso'/>
	I0927 01:31:18.196263   64877 main.go:141] libmachine: (old-k8s-version-612261)       <target dev='hdc' bus='scsi'/>
	I0927 01:31:18.196271   64877 main.go:141] libmachine: (old-k8s-version-612261)       <readonly/>
	I0927 01:31:18.196290   64877 main.go:141] libmachine: (old-k8s-version-612261)     </disk>
	I0927 01:31:18.196303   64877 main.go:141] libmachine: (old-k8s-version-612261)     <disk type='file' device='disk'>
	I0927 01:31:18.196316   64877 main.go:141] libmachine: (old-k8s-version-612261)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0927 01:31:18.196332   64877 main.go:141] libmachine: (old-k8s-version-612261)       <source file='/home/jenkins/minikube-integration/19711-14935/.minikube/machines/old-k8s-version-612261/old-k8s-version-612261.rawdisk'/>
	I0927 01:31:18.196340   64877 main.go:141] libmachine: (old-k8s-version-612261)       <target dev='hda' bus='virtio'/>
	I0927 01:31:18.196348   64877 main.go:141] libmachine: (old-k8s-version-612261)     </disk>
	I0927 01:31:18.196357   64877 main.go:141] libmachine: (old-k8s-version-612261)     <interface type='network'>
	I0927 01:31:18.196367   64877 main.go:141] libmachine: (old-k8s-version-612261)       <source network='mk-old-k8s-version-612261'/>
	I0927 01:31:18.196376   64877 main.go:141] libmachine: (old-k8s-version-612261)       <model type='virtio'/>
	I0927 01:31:18.196385   64877 main.go:141] libmachine: (old-k8s-version-612261)     </interface>
	I0927 01:31:18.196395   64877 main.go:141] libmachine: (old-k8s-version-612261)     <interface type='network'>
	I0927 01:31:18.196413   64877 main.go:141] libmachine: (old-k8s-version-612261)       <source network='default'/>
	I0927 01:31:18.196420   64877 main.go:141] libmachine: (old-k8s-version-612261)       <model type='virtio'/>
	I0927 01:31:18.196433   64877 main.go:141] libmachine: (old-k8s-version-612261)     </interface>
	I0927 01:31:18.196445   64877 main.go:141] libmachine: (old-k8s-version-612261)     <serial type='pty'>
	I0927 01:31:18.196453   64877 main.go:141] libmachine: (old-k8s-version-612261)       <target port='0'/>
	I0927 01:31:18.196460   64877 main.go:141] libmachine: (old-k8s-version-612261)     </serial>
	I0927 01:31:18.196472   64877 main.go:141] libmachine: (old-k8s-version-612261)     <console type='pty'>
	I0927 01:31:18.196482   64877 main.go:141] libmachine: (old-k8s-version-612261)       <target type='serial' port='0'/>
	I0927 01:31:18.196492   64877 main.go:141] libmachine: (old-k8s-version-612261)     </console>
	I0927 01:31:18.196499   64877 main.go:141] libmachine: (old-k8s-version-612261)     <rng model='virtio'>
	I0927 01:31:18.196509   64877 main.go:141] libmachine: (old-k8s-version-612261)       <backend model='random'>/dev/random</backend>
	I0927 01:31:18.196519   64877 main.go:141] libmachine: (old-k8s-version-612261)     </rng>
	I0927 01:31:18.196527   64877 main.go:141] libmachine: (old-k8s-version-612261)     
	I0927 01:31:18.196537   64877 main.go:141] libmachine: (old-k8s-version-612261)     
	I0927 01:31:18.196545   64877 main.go:141] libmachine: (old-k8s-version-612261)   </devices>
	I0927 01:31:18.196554   64877 main.go:141] libmachine: (old-k8s-version-612261) </domain>
	I0927 01:31:18.196565   64877 main.go:141] libmachine: (old-k8s-version-612261) 
	I0927 01:31:18.279141   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:bf:89:9f in network default
	I0927 01:31:18.279853   64877 main.go:141] libmachine: (old-k8s-version-612261) Ensuring networks are active...
	I0927 01:31:18.279885   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:31:18.280831   64877 main.go:141] libmachine: (old-k8s-version-612261) Ensuring network default is active
	I0927 01:31:18.281321   64877 main.go:141] libmachine: (old-k8s-version-612261) Ensuring network mk-old-k8s-version-612261 is active
	I0927 01:31:18.282035   64877 main.go:141] libmachine: (old-k8s-version-612261) Getting domain xml...
	I0927 01:31:18.282761   64877 main.go:141] libmachine: (old-k8s-version-612261) Creating domain...
	I0927 01:31:19.804109   64877 main.go:141] libmachine: (old-k8s-version-612261) Waiting to get IP...
	I0927 01:31:19.804924   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:31:19.805368   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:31:19.805394   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:31:19.805348   65054 retry.go:31] will retry after 233.261285ms: waiting for machine to come up
	I0927 01:31:20.039858   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:31:20.040411   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:31:20.040439   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:31:20.040354   65054 retry.go:31] will retry after 353.295076ms: waiting for machine to come up
	I0927 01:31:20.394929   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:31:20.395538   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:31:20.395567   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:31:20.395474   65054 retry.go:31] will retry after 333.571623ms: waiting for machine to come up
	I0927 01:31:20.730811   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:31:20.731273   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:31:20.731310   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:31:20.731241   65054 retry.go:31] will retry after 415.534476ms: waiting for machine to come up
	I0927 01:31:21.148801   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:31:21.149238   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:31:21.149267   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:31:21.149203   65054 retry.go:31] will retry after 562.578322ms: waiting for machine to come up
	I0927 01:31:21.713422   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:31:21.713781   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:31:21.713823   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:31:21.713756   65054 retry.go:31] will retry after 628.513435ms: waiting for machine to come up
	I0927 01:31:22.343611   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:31:22.344144   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:31:22.344177   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:31:22.344089   65054 retry.go:31] will retry after 1.162650327s: waiting for machine to come up
	I0927 01:31:23.508330   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:31:23.508839   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:31:23.508864   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:31:23.508804   65054 retry.go:31] will retry after 1.479051053s: waiting for machine to come up
	I0927 01:31:24.989664   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:31:24.990344   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:31:24.990375   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:31:24.990271   65054 retry.go:31] will retry after 1.354016831s: waiting for machine to come up
	I0927 01:31:26.345624   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:31:26.346207   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:31:26.346234   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:31:26.346163   65054 retry.go:31] will retry after 1.515087295s: waiting for machine to come up
	I0927 01:31:27.862614   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:31:27.863078   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:31:27.863104   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:31:27.863035   65054 retry.go:31] will retry after 2.487539142s: waiting for machine to come up
	I0927 01:31:30.353037   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:31:30.353665   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:31:30.353695   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:31:30.353602   65054 retry.go:31] will retry after 2.427598006s: waiting for machine to come up
	I0927 01:31:32.783016   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:31:32.783658   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:31:32.783688   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:31:32.783600   65054 retry.go:31] will retry after 4.483955853s: waiting for machine to come up
	I0927 01:31:37.270860   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:31:37.271414   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:31:37.271436   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:31:37.271380   65054 retry.go:31] will retry after 5.400728659s: waiting for machine to come up
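The retry loop above polls libvirt for a DHCP lease on the guest's MAC address (52:54:00:f1:a6:2e), backing off between attempts until the machine reports an IP. The same lease table the driver is waiting on can be inspected by hand, for example:

    # list DHCP leases handed out on the profile's private libvirt network
    virsh --connect qemu:///system net-dhcp-leases mk-old-k8s-version-612261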
	I0927 01:31:42.673553   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:31:42.674084   64877 main.go:141] libmachine: (old-k8s-version-612261) Found IP for machine: 192.168.72.129
	I0927 01:31:42.674121   64877 main.go:141] libmachine: (old-k8s-version-612261) Reserving static IP address...
	I0927 01:31:42.674141   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has current primary IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:31:42.674531   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-612261", mac: "52:54:00:f1:a6:2e", ip: "192.168.72.129"} in network mk-old-k8s-version-612261
	I0927 01:31:42.753237   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | Getting to WaitForSSH function...
	I0927 01:31:42.753260   64877 main.go:141] libmachine: (old-k8s-version-612261) Reserved static IP address: 192.168.72.129
	I0927 01:31:42.753270   64877 main.go:141] libmachine: (old-k8s-version-612261) Waiting for SSH to be available...
	I0927 01:31:42.755702   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:31:42.756130   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:31:33 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:31:42.756167   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:31:42.756245   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | Using SSH client type: external
	I0927 01:31:42.756271   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | Using SSH private key: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/old-k8s-version-612261/id_rsa (-rw-------)
	I0927 01:31:42.756307   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.129 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19711-14935/.minikube/machines/old-k8s-version-612261/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 01:31:42.756333   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | About to run SSH command:
	I0927 01:31:42.756356   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | exit 0
	I0927 01:31:42.887573   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | SSH cmd err, output: <nil>: 
	I0927 01:31:42.887817   64877 main.go:141] libmachine: (old-k8s-version-612261) KVM machine creation complete!
	I0927 01:31:42.888152   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetConfigRaw
	I0927 01:31:42.888662   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	I0927 01:31:42.888874   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	I0927 01:31:42.888993   64877 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0927 01:31:42.889017   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetState
	I0927 01:31:42.890193   64877 main.go:141] libmachine: Detecting operating system of created instance...
	I0927 01:31:42.890207   64877 main.go:141] libmachine: Waiting for SSH to be available...
	I0927 01:31:42.890213   64877 main.go:141] libmachine: Getting to WaitForSSH function...
	I0927 01:31:42.890220   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:31:42.892688   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:31:42.893034   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:31:33 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:31:42.893055   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:31:42.893174   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:31:42.893497   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:31:42.893645   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:31:42.893758   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:31:42.893906   64877 main.go:141] libmachine: Using SSH client type: native
	I0927 01:31:42.894120   64877 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I0927 01:31:42.894132   64877 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0927 01:31:43.002596   64877 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 01:31:43.002617   64877 main.go:141] libmachine: Detecting the provisioner...
	I0927 01:31:43.002624   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:31:43.005537   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:31:43.005903   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:31:33 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:31:43.005926   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:31:43.006160   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:31:43.006354   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:31:43.006506   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:31:43.006617   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:31:43.006754   64877 main.go:141] libmachine: Using SSH client type: native
	I0927 01:31:43.006957   64877 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I0927 01:31:43.006968   64877 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0927 01:31:43.115852   64877 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0927 01:31:43.115979   64877 main.go:141] libmachine: found compatible host: buildroot
	I0927 01:31:43.115991   64877 main.go:141] libmachine: Provisioning with buildroot...
	I0927 01:31:43.115998   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetMachineName
	I0927 01:31:43.116281   64877 buildroot.go:166] provisioning hostname "old-k8s-version-612261"
	I0927 01:31:43.116309   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetMachineName
	I0927 01:31:43.116492   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:31:43.119026   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:31:43.119391   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:31:33 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:31:43.119424   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:31:43.119550   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:31:43.119713   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:31:43.119901   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:31:43.120068   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:31:43.120255   64877 main.go:141] libmachine: Using SSH client type: native
	I0927 01:31:43.120415   64877 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I0927 01:31:43.120428   64877 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-612261 && echo "old-k8s-version-612261" | sudo tee /etc/hostname
	I0927 01:31:43.246862   64877 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-612261
	
	I0927 01:31:43.246942   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:31:43.250154   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:31:43.250489   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:31:33 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:31:43.250516   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:31:43.250708   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:31:43.250950   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:31:43.251106   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:31:43.251287   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:31:43.251460   64877 main.go:141] libmachine: Using SSH client type: native
	I0927 01:31:43.251675   64877 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I0927 01:31:43.251703   64877 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-612261' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-612261/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-612261' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 01:31:43.373427   64877 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 01:31:43.373451   64877 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19711-14935/.minikube CaCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19711-14935/.minikube}
	I0927 01:31:43.373490   64877 buildroot.go:174] setting up certificates
	I0927 01:31:43.373500   64877 provision.go:84] configureAuth start
	I0927 01:31:43.373511   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetMachineName
	I0927 01:31:43.373808   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetIP
	I0927 01:31:43.376263   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:31:43.376646   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:31:33 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:31:43.376672   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:31:43.376864   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:31:43.379019   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:31:43.379371   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:31:33 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:31:43.379396   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:31:43.379559   64877 provision.go:143] copyHostCerts
	I0927 01:31:43.379617   64877 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem, removing ...
	I0927 01:31:43.379628   64877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem
	I0927 01:31:43.379695   64877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem (1675 bytes)
	I0927 01:31:43.379805   64877 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem, removing ...
	I0927 01:31:43.379813   64877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem
	I0927 01:31:43.379833   64877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem (1078 bytes)
	I0927 01:31:43.379897   64877 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem, removing ...
	I0927 01:31:43.379906   64877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem
	I0927 01:31:43.379923   64877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem (1123 bytes)
	I0927 01:31:43.379983   64877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-612261 san=[127.0.0.1 192.168.72.129 localhost minikube old-k8s-version-612261]
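minikube generates this server certificate in Go against the profile's CA; purely for illustration (this is not the actual implementation), a roughly equivalent openssl flow producing a CA-signed server cert with the same SANs would look like:

    # illustrative only: minikube does this with Go's crypto/x509, not openssl
    openssl req -new -newkey rsa:2048 -nodes -subj "/O=jenkins.old-k8s-version-612261" \
      -keyout server-key.pem -out server.csr
    openssl x509 -req -in server.csr -CA certs/ca.pem -CAkey certs/ca-key.pem -CAcreateserial \
      -days 365 -out server.pem \
      -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.72.129,DNS:localhost,DNS:minikube,DNS:old-k8s-version-612261")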
	I0927 01:31:43.564123   64877 provision.go:177] copyRemoteCerts
	I0927 01:31:43.564197   64877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 01:31:43.564219   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:31:43.567076   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:31:43.567484   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:31:33 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:31:43.567513   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:31:43.567699   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:31:43.567864   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:31:43.567996   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:31:43.568110   64877 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/old-k8s-version-612261/id_rsa Username:docker}
	I0927 01:31:43.653915   64877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0927 01:31:43.681516   64877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0927 01:31:43.705826   64877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0927 01:31:43.729530   64877 provision.go:87] duration metric: took 356.017058ms to configureAuth
	I0927 01:31:43.729557   64877 buildroot.go:189] setting minikube options for container-runtime
	I0927 01:31:43.729736   64877 config.go:182] Loaded profile config "old-k8s-version-612261": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0927 01:31:43.729826   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:31:43.732453   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:31:43.732805   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:31:33 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:31:43.732832   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:31:43.733017   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:31:43.733210   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:31:43.733376   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:31:43.733559   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:31:43.733734   64877 main.go:141] libmachine: Using SSH client type: native
	I0927 01:31:43.733885   64877 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I0927 01:31:43.733906   64877 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 01:31:43.959791   64877 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
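The step above writes an environment file consumed by the crio unit and restarts the service so the extra --insecure-registry option for the service CIDR takes effect. A quick manual check on the guest (illustrative, not part of the driver flow) would be:

    cat /etc/sysconfig/crio.minikube   # should contain the CRIO_MINIKUBE_OPTIONS line echoed above
    systemctl is-active crio           # should report "active" after the restart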
	I0927 01:31:43.959814   64877 main.go:141] libmachine: Checking connection to Docker...
	I0927 01:31:43.959823   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetURL
	I0927 01:31:43.961149   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | Using libvirt version 6000000
	I0927 01:31:43.963322   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:31:43.963758   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:31:33 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:31:43.963786   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:31:43.963957   64877 main.go:141] libmachine: Docker is up and running!
	I0927 01:31:43.963974   64877 main.go:141] libmachine: Reticulating splines...
	I0927 01:31:43.963982   64877 client.go:171] duration metric: took 26.362364403s to LocalClient.Create
	I0927 01:31:43.964009   64877 start.go:167] duration metric: took 26.362437384s to libmachine.API.Create "old-k8s-version-612261"
	I0927 01:31:43.964022   64877 start.go:293] postStartSetup for "old-k8s-version-612261" (driver="kvm2")
	I0927 01:31:43.964037   64877 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 01:31:43.964073   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	I0927 01:31:43.964296   64877 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 01:31:43.964323   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:31:43.966456   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:31:43.966823   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:31:33 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:31:43.966850   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:31:43.967035   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:31:43.967232   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:31:43.967413   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:31:43.967588   64877 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/old-k8s-version-612261/id_rsa Username:docker}
	I0927 01:31:44.054185   64877 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 01:31:44.058695   64877 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 01:31:44.058723   64877 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/addons for local assets ...
	I0927 01:31:44.058801   64877 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/files for local assets ...
	I0927 01:31:44.058898   64877 filesync.go:149] local asset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> 221382.pem in /etc/ssl/certs
	I0927 01:31:44.059027   64877 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 01:31:44.068978   64877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /etc/ssl/certs/221382.pem (1708 bytes)
	I0927 01:31:44.096274   64877 start.go:296] duration metric: took 132.236848ms for postStartSetup
	I0927 01:31:44.096350   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetConfigRaw
	I0927 01:31:44.097035   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetIP
	I0927 01:31:44.099665   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:31:44.099993   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:31:33 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:31:44.100035   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:31:44.100234   64877 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/config.json ...
	I0927 01:31:44.100410   64877 start.go:128] duration metric: took 26.679871586s to createHost
	I0927 01:31:44.100430   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:31:44.102607   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:31:44.102933   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:31:33 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:31:44.102970   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:31:44.103082   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:31:44.103237   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:31:44.103384   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:31:44.103548   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:31:44.103668   64877 main.go:141] libmachine: Using SSH client type: native
	I0927 01:31:44.103862   64877 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I0927 01:31:44.103873   64877 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 01:31:44.212041   64877 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727400704.188973892
	
	I0927 01:31:44.212065   64877 fix.go:216] guest clock: 1727400704.188973892
	I0927 01:31:44.212073   64877 fix.go:229] Guest: 2024-09-27 01:31:44.188973892 +0000 UTC Remote: 2024-09-27 01:31:44.100421117 +0000 UTC m=+40.352689217 (delta=88.552775ms)
	I0927 01:31:44.212110   64877 fix.go:200] guest clock delta is within tolerance: 88.552775ms
	I0927 01:31:44.212116   64877 start.go:83] releasing machines lock for "old-k8s-version-612261", held for 26.791748771s
	I0927 01:31:44.212142   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	I0927 01:31:44.212393   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetIP
	I0927 01:31:44.215108   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:31:44.215526   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:31:33 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:31:44.215554   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:31:44.215745   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	I0927 01:31:44.216210   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	I0927 01:31:44.216388   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	I0927 01:31:44.216485   64877 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 01:31:44.216536   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:31:44.216592   64877 ssh_runner.go:195] Run: cat /version.json
	I0927 01:31:44.216621   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:31:44.219281   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:31:44.219464   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:31:44.219644   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:31:33 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:31:44.219674   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:31:44.219905   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:31:44.219905   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:31:33 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:31:44.219990   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:31:44.220020   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:31:44.220138   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:31:44.220220   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:31:44.220600   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:31:44.220619   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:31:44.220944   64877 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/old-k8s-version-612261/id_rsa Username:docker}
	I0927 01:31:44.221003   64877 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/old-k8s-version-612261/id_rsa Username:docker}
	I0927 01:31:44.326174   64877 ssh_runner.go:195] Run: systemctl --version
	I0927 01:31:44.332762   64877 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 01:31:44.496967   64877 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 01:31:44.503606   64877 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 01:31:44.503660   64877 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 01:31:44.520768   64877 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0927 01:31:44.520796   64877 start.go:495] detecting cgroup driver to use...
	I0927 01:31:44.520864   64877 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 01:31:44.538496   64877 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 01:31:44.551248   64877 docker.go:217] disabling cri-docker service (if available) ...
	I0927 01:31:44.551297   64877 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 01:31:44.565400   64877 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 01:31:44.579697   64877 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 01:31:44.712797   64877 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 01:31:44.875702   64877 docker.go:233] disabling docker service ...
	I0927 01:31:44.875768   64877 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 01:31:44.890381   64877 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 01:31:44.903802   64877 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 01:31:45.032077   64877 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 01:31:45.160934   64877 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 01:31:45.176525   64877 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 01:31:45.195544   64877 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0927 01:31:45.195612   64877 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:31:45.206832   64877 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 01:31:45.206908   64877 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:31:45.218950   64877 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:31:45.229355   64877 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
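The sed edits above rewrite CRI-O's drop-in so it uses the v1.20-era pause image and the cgroupfs cgroup manager expected by this Kubernetes version, with conmon placed in the pod cgroup. The result can be spot-checked on the guest, for example (expected output shown as comments, illustrative):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.2"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"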
	I0927 01:31:45.239706   64877 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 01:31:45.250444   64877 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 01:31:45.260435   64877 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0927 01:31:45.260506   64877 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0927 01:31:45.274522   64877 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
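Because /proc/sys/net/bridge/ only exists once br_netfilter is loaded, the log first probes the sysctl, falls back to modprobe, and then enables IPv4 forwarding. On a persistent installation the same prerequisites are commonly pinned across reboots; an illustrative (not minikube-specific) way to do that:

    # hypothetical persistent equivalent of the probes above
    echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf
    printf 'net.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\n' | sudo tee /etc/sysctl.d/99-kubernetes.conf
    sudo sysctl --system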
	I0927 01:31:45.284807   64877 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:31:45.417282   64877 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0927 01:31:45.510344   64877 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 01:31:45.510411   64877 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 01:31:45.515055   64877 start.go:563] Will wait 60s for crictl version
	I0927 01:31:45.515105   64877 ssh_runner.go:195] Run: which crictl
	I0927 01:31:45.518939   64877 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 01:31:45.560046   64877 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0927 01:31:45.560164   64877 ssh_runner.go:195] Run: crio --version
	I0927 01:31:45.590270   64877 ssh_runner.go:195] Run: crio --version
	I0927 01:31:45.621483   64877 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0927 01:31:45.622771   64877 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetIP
	I0927 01:31:45.625674   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:31:45.625993   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:31:33 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:31:45.626028   64877 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:31:45.626191   64877 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0927 01:31:45.630262   64877 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 01:31:45.643394   64877 kubeadm.go:883] updating cluster {Name:old-k8s-version-612261 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-612261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.129 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 01:31:45.643529   64877 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0927 01:31:45.643589   64877 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 01:31:45.677310   64877 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0927 01:31:45.677378   64877 ssh_runner.go:195] Run: which lz4
	I0927 01:31:45.681420   64877 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0927 01:31:45.685475   64877 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0927 01:31:45.685504   64877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0927 01:31:47.309254   64877 crio.go:462] duration metric: took 1.627858421s to copy over tarball
	I0927 01:31:47.309330   64877 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0927 01:31:49.812668   64877 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.503309149s)
	I0927 01:31:49.812699   64877 crio.go:469] duration metric: took 2.503414086s to extract the tarball
	I0927 01:31:49.812710   64877 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0927 01:31:49.855869   64877 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 01:31:49.901602   64877 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0927 01:31:49.901626   64877 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0927 01:31:49.901692   64877 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:31:49.901700   64877 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0927 01:31:49.901747   64877 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0927 01:31:49.901843   64877 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0927 01:31:49.901745   64877 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 01:31:49.901771   64877 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0927 01:31:49.901779   64877 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0927 01:31:49.901780   64877 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0927 01:31:49.902862   64877 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 01:31:49.902973   64877 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0927 01:31:49.903029   64877 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0927 01:31:49.903045   64877 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0927 01:31:49.903065   64877 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0927 01:31:49.903139   64877 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0927 01:31:49.903200   64877 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0927 01:31:49.903350   64877 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:31:50.067957   64877 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0927 01:31:50.069389   64877 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0927 01:31:50.079532   64877 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0927 01:31:50.086147   64877 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0927 01:31:50.092993   64877 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0927 01:31:50.101212   64877 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0927 01:31:50.109982   64877 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 01:31:50.178928   64877 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0927 01:31:50.178963   64877 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0927 01:31:50.179007   64877 ssh_runner.go:195] Run: which crictl
	I0927 01:31:50.179192   64877 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0927 01:31:50.179222   64877 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0927 01:31:50.179255   64877 ssh_runner.go:195] Run: which crictl
	I0927 01:31:50.222120   64877 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0927 01:31:50.222154   64877 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0927 01:31:50.222191   64877 ssh_runner.go:195] Run: which crictl
	I0927 01:31:50.248671   64877 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0927 01:31:50.248779   64877 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0927 01:31:50.248832   64877 ssh_runner.go:195] Run: which crictl
	I0927 01:31:50.248722   64877 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0927 01:31:50.248875   64877 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0927 01:31:50.248927   64877 ssh_runner.go:195] Run: which crictl
	I0927 01:31:50.256018   64877 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0927 01:31:50.256054   64877 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0927 01:31:50.256096   64877 ssh_runner.go:195] Run: which crictl
	I0927 01:31:50.276388   64877 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0927 01:31:50.276416   64877 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0927 01:31:50.276491   64877 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0927 01:31:50.276495   64877 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0927 01:31:50.276527   64877 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 01:31:50.276541   64877 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0927 01:31:50.276554   64877 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0927 01:31:50.276565   64877 ssh_runner.go:195] Run: which crictl
	I0927 01:31:50.276573   64877 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0927 01:31:50.419572   64877 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 01:31:50.419705   64877 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0927 01:31:50.420094   64877 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0927 01:31:50.495631   64877 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 01:31:50.495631   64877 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0927 01:31:50.495713   64877 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0927 01:31:50.495724   64877 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0927 01:31:50.495836   64877 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0927 01:31:50.495857   64877 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0927 01:31:50.495917   64877 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0927 01:31:50.648010   64877 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 01:31:50.648061   64877 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0927 01:31:50.648121   64877 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0927 01:31:50.648167   64877 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0927 01:31:50.648200   64877 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0927 01:31:50.648252   64877 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0927 01:31:50.648288   64877 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0927 01:31:50.765770   64877 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0927 01:31:50.772260   64877 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0927 01:31:50.772288   64877 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0927 01:31:50.772365   64877 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0927 01:31:50.772405   64877 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0927 01:31:51.107128   64877 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:31:51.252795   64877 cache_images.go:92] duration metric: took 1.351151665s to LoadCachedImages
	W0927 01:31:51.252883   64877 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0927 01:31:51.252900   64877 kubeadm.go:934] updating node { 192.168.72.129 8443 v1.20.0 crio true true} ...
	I0927 01:31:51.253039   64877 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-612261 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.129
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-612261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 01:31:51.253120   64877 ssh_runner.go:195] Run: crio config
	I0927 01:31:51.304933   64877 cni.go:84] Creating CNI manager for ""
	I0927 01:31:51.304956   64877 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:31:51.304966   64877 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 01:31:51.304983   64877 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.129 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-612261 NodeName:old-k8s-version-612261 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.129"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.129 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0927 01:31:51.305101   64877 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.129
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-612261"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.129
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.129"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0927 01:31:51.305165   64877 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0927 01:31:51.315039   64877 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 01:31:51.315108   64877 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0927 01:31:51.324915   64877 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0927 01:31:51.342095   64877 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 01:31:51.359144   64877 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0927 01:31:51.377267   64877 ssh_runner.go:195] Run: grep 192.168.72.129	control-plane.minikube.internal$ /etc/hosts
	I0927 01:31:51.381542   64877 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.129	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 01:31:51.394722   64877 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:31:51.524849   64877 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 01:31:51.542550   64877 certs.go:68] Setting up /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261 for IP: 192.168.72.129
	I0927 01:31:51.542578   64877 certs.go:194] generating shared ca certs ...
	I0927 01:31:51.542600   64877 certs.go:226] acquiring lock for ca certs: {Name:mkdfc5b7e93f77f5ae72cc653545624244421aa5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:31:51.542785   64877 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key
	I0927 01:31:51.542841   64877 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key
	I0927 01:31:51.542854   64877 certs.go:256] generating profile certs ...
	I0927 01:31:51.542928   64877 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/client.key
	I0927 01:31:51.542946   64877 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/client.crt with IP's: []
	I0927 01:31:51.624811   64877 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/client.crt ...
	I0927 01:31:51.624853   64877 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/client.crt: {Name:mk0e4166155e6a8bdfca20d3242a42c8d4bd7130 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:31:51.625057   64877 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/client.key ...
	I0927 01:31:51.625073   64877 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/client.key: {Name:mk01a5590aef45deeab94585f1f47ca3e3b0da88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:31:51.625183   64877 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/apiserver.key.a362196e
	I0927 01:31:51.625204   64877 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/apiserver.crt.a362196e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.129]
	I0927 01:31:51.706028   64877 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/apiserver.crt.a362196e ...
	I0927 01:31:51.706056   64877 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/apiserver.crt.a362196e: {Name:mkd51549172547db5289d41377e963dac7c3b497 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:31:51.706238   64877 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/apiserver.key.a362196e ...
	I0927 01:31:51.706255   64877 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/apiserver.key.a362196e: {Name:mk9a0c748b181ab4657f15d70e36c30fac997205 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:31:51.706344   64877 certs.go:381] copying /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/apiserver.crt.a362196e -> /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/apiserver.crt
	I0927 01:31:51.706460   64877 certs.go:385] copying /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/apiserver.key.a362196e -> /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/apiserver.key
	I0927 01:31:51.706538   64877 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/proxy-client.key
	I0927 01:31:51.706559   64877 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/proxy-client.crt with IP's: []
	I0927 01:31:51.773581   64877 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/proxy-client.crt ...
	I0927 01:31:51.773611   64877 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/proxy-client.crt: {Name:mk0183ef62bfecb368453d9c6f49e6f10190b44f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:31:51.794173   64877 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/proxy-client.key ...
	I0927 01:31:51.794220   64877 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/proxy-client.key: {Name:mk0d5a5600c34377e022e50a98290e38d9b02e53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:31:51.794493   64877 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem (1338 bytes)
	W0927 01:31:51.794546   64877 certs.go:480] ignoring /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138_empty.pem, impossibly tiny 0 bytes
	I0927 01:31:51.794565   64877 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem (1671 bytes)
	I0927 01:31:51.794595   64877 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem (1078 bytes)
	I0927 01:31:51.794621   64877 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem (1123 bytes)
	I0927 01:31:51.794656   64877 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem (1675 bytes)
	I0927 01:31:51.794707   64877 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem (1708 bytes)
	I0927 01:31:51.795400   64877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 01:31:51.822353   64877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0927 01:31:51.846511   64877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 01:31:51.872207   64877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0927 01:31:51.897059   64877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0927 01:31:51.932087   64877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0927 01:31:51.959698   64877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 01:31:51.984974   64877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0927 01:31:52.011989   64877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 01:31:52.036604   64877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem --> /usr/share/ca-certificates/22138.pem (1338 bytes)
	I0927 01:31:52.062923   64877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /usr/share/ca-certificates/221382.pem (1708 bytes)
	I0927 01:31:52.089536   64877 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 01:31:52.106962   64877 ssh_runner.go:195] Run: openssl version
	I0927 01:31:52.113135   64877 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221382.pem && ln -fs /usr/share/ca-certificates/221382.pem /etc/ssl/certs/221382.pem"
	I0927 01:31:52.124825   64877 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221382.pem
	I0927 01:31:52.129798   64877 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 00:32 /usr/share/ca-certificates/221382.pem
	I0927 01:31:52.129856   64877 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221382.pem
	I0927 01:31:52.136032   64877 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221382.pem /etc/ssl/certs/3ec20f2e.0"
	I0927 01:31:52.148099   64877 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 01:31:52.160547   64877 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:31:52.165804   64877 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:16 /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:31:52.165860   64877 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:31:52.172143   64877 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 01:31:52.184594   64877 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22138.pem && ln -fs /usr/share/ca-certificates/22138.pem /etc/ssl/certs/22138.pem"
	I0927 01:31:52.196329   64877 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22138.pem
	I0927 01:31:52.201077   64877 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 00:32 /usr/share/ca-certificates/22138.pem
	I0927 01:31:52.201145   64877 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22138.pem
	I0927 01:31:52.206960   64877 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22138.pem /etc/ssl/certs/51391683.0"
	I0927 01:31:52.219385   64877 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 01:31:52.223960   64877 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0927 01:31:52.224021   64877 kubeadm.go:392] StartCluster: {Name:old-k8s-version-612261 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-612261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.129 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 01:31:52.224098   64877 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0927 01:31:52.224162   64877 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 01:31:52.266672   64877 cri.go:89] found id: ""
	I0927 01:31:52.266750   64877 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0927 01:31:52.277761   64877 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 01:31:52.288015   64877 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 01:31:52.298570   64877 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 01:31:52.298591   64877 kubeadm.go:157] found existing configuration files:
	
	I0927 01:31:52.298642   64877 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 01:31:52.308579   64877 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 01:31:52.308633   64877 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 01:31:52.319062   64877 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 01:31:52.329388   64877 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 01:31:52.329456   64877 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 01:31:52.340024   64877 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 01:31:52.350224   64877 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 01:31:52.350277   64877 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 01:31:52.364718   64877 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 01:31:52.375380   64877 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 01:31:52.375443   64877 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 01:31:52.386374   64877 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0927 01:31:52.532726   64877 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0927 01:31:52.532808   64877 kubeadm.go:310] [preflight] Running pre-flight checks
	I0927 01:31:52.725985   64877 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0927 01:31:52.726112   64877 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0927 01:31:52.726242   64877 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0927 01:31:52.926629   64877 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0927 01:31:52.963780   64877 out.go:235]   - Generating certificates and keys ...
	I0927 01:31:52.963895   64877 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0927 01:31:52.963967   64877 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0927 01:31:53.006940   64877 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0927 01:31:53.184655   64877 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0927 01:31:53.302103   64877 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0927 01:31:53.577995   64877 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0927 01:31:53.957719   64877 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0927 01:31:53.957919   64877 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-612261] and IPs [192.168.72.129 127.0.0.1 ::1]
	I0927 01:31:54.116971   64877 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0927 01:31:54.117217   64877 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-612261] and IPs [192.168.72.129 127.0.0.1 ::1]
	I0927 01:31:54.167766   64877 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0927 01:31:54.275865   64877 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0927 01:31:54.349952   64877 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0927 01:31:54.350381   64877 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0927 01:31:54.515208   64877 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0927 01:31:54.734962   64877 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0927 01:31:54.834012   64877 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0927 01:31:54.971481   64877 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0927 01:31:54.989694   64877 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0927 01:31:54.992989   64877 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0927 01:31:54.993069   64877 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0927 01:31:55.128641   64877 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0927 01:31:55.130541   64877 out.go:235]   - Booting up control plane ...
	I0927 01:31:55.130659   64877 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0927 01:31:55.135571   64877 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0927 01:31:55.136665   64877 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0927 01:31:55.138370   64877 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0927 01:31:55.143163   64877 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0927 01:32:35.138565   64877 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0927 01:32:35.138684   64877 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:32:35.138961   64877 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:32:40.138719   64877 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:32:40.138952   64877 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:32:50.138160   64877 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:32:50.138377   64877 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:33:10.137681   64877 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:33:10.137861   64877 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:33:50.138876   64877 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:33:50.139214   64877 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:33:50.139242   64877 kubeadm.go:310] 
	I0927 01:33:50.139297   64877 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0927 01:33:50.139375   64877 kubeadm.go:310] 		timed out waiting for the condition
	I0927 01:33:50.139391   64877 kubeadm.go:310] 
	I0927 01:33:50.139443   64877 kubeadm.go:310] 	This error is likely caused by:
	I0927 01:33:50.139488   64877 kubeadm.go:310] 		- The kubelet is not running
	I0927 01:33:50.139640   64877 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0927 01:33:50.139669   64877 kubeadm.go:310] 
	I0927 01:33:50.139847   64877 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0927 01:33:50.139903   64877 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0927 01:33:50.139949   64877 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0927 01:33:50.139959   64877 kubeadm.go:310] 
	I0927 01:33:50.140086   64877 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0927 01:33:50.140192   64877 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0927 01:33:50.140201   64877 kubeadm.go:310] 
	I0927 01:33:50.140348   64877 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0927 01:33:50.140497   64877 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0927 01:33:50.140595   64877 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0927 01:33:50.140687   64877 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0927 01:33:50.140698   64877 kubeadm.go:310] 
	I0927 01:33:50.141305   64877 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0927 01:33:50.141425   64877 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0927 01:33:50.141524   64877 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0927 01:33:50.141655   64877 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-612261] and IPs [192.168.72.129 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-612261] and IPs [192.168.72.129 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-612261] and IPs [192.168.72.129 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-612261] and IPs [192.168.72.129 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0927 01:33:50.141703   64877 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0927 01:33:51.513386   64877 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.371658213s)
	I0927 01:33:51.513488   64877 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 01:33:51.528087   64877 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 01:33:51.542064   64877 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 01:33:51.542086   64877 kubeadm.go:157] found existing configuration files:
	
	I0927 01:33:51.542137   64877 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 01:33:51.552101   64877 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 01:33:51.552172   64877 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 01:33:51.563075   64877 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 01:33:51.573112   64877 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 01:33:51.573177   64877 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 01:33:51.584265   64877 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 01:33:51.594319   64877 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 01:33:51.594397   64877 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 01:33:51.605022   64877 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 01:33:51.614711   64877 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 01:33:51.614776   64877 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 01:33:51.624575   64877 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0927 01:33:51.843787   64877 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0927 01:35:48.049028   64877 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0927 01:35:48.049273   64877 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0927 01:35:48.050546   64877 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0927 01:35:48.050597   64877 kubeadm.go:310] [preflight] Running pre-flight checks
	I0927 01:35:48.050665   64877 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0927 01:35:48.050776   64877 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0927 01:35:48.050869   64877 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0927 01:35:48.050930   64877 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0927 01:35:48.053474   64877 out.go:235]   - Generating certificates and keys ...
	I0927 01:35:48.053555   64877 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0927 01:35:48.053613   64877 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0927 01:35:48.053703   64877 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0927 01:35:48.053790   64877 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0927 01:35:48.053895   64877 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0927 01:35:48.053988   64877 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0927 01:35:48.054074   64877 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0927 01:35:48.054155   64877 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0927 01:35:48.054262   64877 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0927 01:35:48.054358   64877 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0927 01:35:48.054410   64877 kubeadm.go:310] [certs] Using the existing "sa" key
	I0927 01:35:48.054462   64877 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0927 01:35:48.054508   64877 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0927 01:35:48.054557   64877 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0927 01:35:48.054610   64877 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0927 01:35:48.054660   64877 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0927 01:35:48.054793   64877 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0927 01:35:48.054874   64877 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0927 01:35:48.054910   64877 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0927 01:35:48.055006   64877 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0927 01:35:48.056594   64877 out.go:235]   - Booting up control plane ...
	I0927 01:35:48.056702   64877 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0927 01:35:48.056806   64877 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0927 01:35:48.056896   64877 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0927 01:35:48.056974   64877 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0927 01:35:48.057103   64877 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0927 01:35:48.057147   64877 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0927 01:35:48.057213   64877 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:35:48.057412   64877 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:35:48.057505   64877 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:35:48.057691   64877 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:35:48.057785   64877 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:35:48.057980   64877 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:35:48.058050   64877 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:35:48.058221   64877 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:35:48.058318   64877 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:35:48.058511   64877 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:35:48.058521   64877 kubeadm.go:310] 
	I0927 01:35:48.058587   64877 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0927 01:35:48.058650   64877 kubeadm.go:310] 		timed out waiting for the condition
	I0927 01:35:48.058659   64877 kubeadm.go:310] 
	I0927 01:35:48.058714   64877 kubeadm.go:310] 	This error is likely caused by:
	I0927 01:35:48.058768   64877 kubeadm.go:310] 		- The kubelet is not running
	I0927 01:35:48.058904   64877 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0927 01:35:48.058915   64877 kubeadm.go:310] 
	I0927 01:35:48.059023   64877 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0927 01:35:48.059058   64877 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0927 01:35:48.059086   64877 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0927 01:35:48.059092   64877 kubeadm.go:310] 
	I0927 01:35:48.059239   64877 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0927 01:35:48.059334   64877 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0927 01:35:48.059345   64877 kubeadm.go:310] 
	I0927 01:35:48.059476   64877 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0927 01:35:48.059559   64877 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0927 01:35:48.059652   64877 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0927 01:35:48.059749   64877 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0927 01:35:48.059795   64877 kubeadm.go:310] 
	I0927 01:35:48.059807   64877 kubeadm.go:394] duration metric: took 3m55.835788752s to StartCluster
	I0927 01:35:48.059863   64877 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:35:48.059916   64877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:35:48.104181   64877 cri.go:89] found id: ""
	I0927 01:35:48.104204   64877 logs.go:276] 0 containers: []
	W0927 01:35:48.104212   64877 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:35:48.104218   64877 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:35:48.104272   64877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:35:48.139007   64877 cri.go:89] found id: ""
	I0927 01:35:48.139036   64877 logs.go:276] 0 containers: []
	W0927 01:35:48.139054   64877 logs.go:278] No container was found matching "etcd"
	I0927 01:35:48.139062   64877 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:35:48.139145   64877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:35:48.173904   64877 cri.go:89] found id: ""
	I0927 01:35:48.173936   64877 logs.go:276] 0 containers: []
	W0927 01:35:48.173947   64877 logs.go:278] No container was found matching "coredns"
	I0927 01:35:48.173955   64877 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:35:48.174009   64877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:35:48.209136   64877 cri.go:89] found id: ""
	I0927 01:35:48.209162   64877 logs.go:276] 0 containers: []
	W0927 01:35:48.209174   64877 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:35:48.209182   64877 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:35:48.209248   64877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:35:48.240858   64877 cri.go:89] found id: ""
	I0927 01:35:48.240881   64877 logs.go:276] 0 containers: []
	W0927 01:35:48.240889   64877 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:35:48.240896   64877 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:35:48.240953   64877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:35:48.276854   64877 cri.go:89] found id: ""
	I0927 01:35:48.276885   64877 logs.go:276] 0 containers: []
	W0927 01:35:48.276896   64877 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:35:48.276904   64877 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:35:48.276962   64877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:35:48.313943   64877 cri.go:89] found id: ""
	I0927 01:35:48.313963   64877 logs.go:276] 0 containers: []
	W0927 01:35:48.313971   64877 logs.go:278] No container was found matching "kindnet"
	I0927 01:35:48.313986   64877 logs.go:123] Gathering logs for dmesg ...
	I0927 01:35:48.314003   64877 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:35:48.327641   64877 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:35:48.327667   64877 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:35:48.440264   64877 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:35:48.440287   64877 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:35:48.440302   64877 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:35:48.546384   64877 logs.go:123] Gathering logs for container status ...
	I0927 01:35:48.546419   64877 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:35:48.587812   64877 logs.go:123] Gathering logs for kubelet ...
	I0927 01:35:48.587844   64877 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 01:35:48.657267   64877 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0927 01:35:48.657354   64877 out.go:270] * 
	* 
	W0927 01:35:48.657422   64877 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0927 01:35:48.657440   64877 out.go:270] * 
	* 
	W0927 01:35:48.658237   64877 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0927 01:35:48.661857   64877 out.go:201] 
	W0927 01:35:48.663390   64877 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0927 01:35:48.663436   64877 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0927 01:35:48.663464   64877 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0927 01:35:48.665091   64877 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-612261 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
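The run exited with status 109 after minikube reported K8S_KUBELET_NOT_RUNNING: every probe of http://localhost:10248/healthz during the wait-control-plane phase was refused, so kubeadm never saw a healthy kubelet. Acting on the suggestion minikube prints in the captured log, a retry of the same profile with an explicit kubelet cgroup driver would look roughly like this (hypothetical reproduction; only the driver/runtime/version flags from the failed invocation are shown):

    out/minikube-linux-amd64 delete -p old-k8s-version-612261
    out/minikube-linux-amd64 start -p old-k8s-version-612261 --memory=2200 --driver=kvm2 \
      --container-runtime=crio --kubernetes-version=v1.20.0 \
      --extra-config=kubelet.cgroup-driver=systemd
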
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-612261 -n old-k8s-version-612261
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-612261 -n old-k8s-version-612261: exit status 6 (216.743044ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0927 01:35:48.922681   68500 status.go:451] kubeconfig endpoint: get endpoint: "old-k8s-version-612261" does not appear in /home/jenkins/minikube-integration/19711-14935/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-612261" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (285.19s)
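For failures in this family, the commands kubeadm itself recommends in the output above are the quickest way to see why the kubelet never became healthy. A minimal on-node diagnosis sketch, assuming the VM is still reachable via `minikube ssh -p old-k8s-version-612261`:

    # Check whether the kubelet unit is running and why it last exited
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet | tail -n 100
    # Probe the health endpoint that the wait-control-plane phase polls
    curl -sSL http://localhost:10248/healthz
    # List any control-plane containers CRI-O actually started
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
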

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-521072 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-521072 --alsologtostderr -v=3: exit status 82 (2m0.54199009s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-521072"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0927 01:33:23.344331   67140 out.go:345] Setting OutFile to fd 1 ...
	I0927 01:33:23.344462   67140 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 01:33:23.344472   67140 out.go:358] Setting ErrFile to fd 2...
	I0927 01:33:23.344479   67140 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 01:33:23.344741   67140 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-14935/.minikube/bin
	I0927 01:33:23.345031   67140 out.go:352] Setting JSON to false
	I0927 01:33:23.345134   67140 mustload.go:65] Loading cluster: no-preload-521072
	I0927 01:33:23.345680   67140 config.go:182] Loaded profile config "no-preload-521072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 01:33:23.345770   67140 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/no-preload-521072/config.json ...
	I0927 01:33:23.345990   67140 mustload.go:65] Loading cluster: no-preload-521072
	I0927 01:33:23.346148   67140 config.go:182] Loaded profile config "no-preload-521072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 01:33:23.346178   67140 stop.go:39] StopHost: no-preload-521072
	I0927 01:33:23.346776   67140 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:33:23.346834   67140 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:33:23.361670   67140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46847
	I0927 01:33:23.362198   67140 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:33:23.362725   67140 main.go:141] libmachine: Using API Version  1
	I0927 01:33:23.362753   67140 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:33:23.363082   67140 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:33:23.365720   67140 out.go:177] * Stopping node "no-preload-521072"  ...
	I0927 01:33:23.367169   67140 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0927 01:33:23.367210   67140 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	I0927 01:33:23.367508   67140 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0927 01:33:23.367536   67140 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:33:23.370804   67140 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:33:23.371379   67140 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:32:15 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:33:23.371427   67140 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:33:23.371647   67140 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:33:23.371822   67140 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:33:23.372058   67140 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:33:23.372249   67140 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/no-preload-521072/id_rsa Username:docker}
	I0927 01:33:23.493719   67140 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0927 01:33:23.564595   67140 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0927 01:33:23.637149   67140 main.go:141] libmachine: Stopping "no-preload-521072"...
	I0927 01:33:23.637199   67140 main.go:141] libmachine: (no-preload-521072) Calling .GetState
	I0927 01:33:23.638674   67140 main.go:141] libmachine: (no-preload-521072) Calling .Stop
	I0927 01:33:23.642518   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 0/120
	I0927 01:33:24.644216   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 1/120
	I0927 01:33:25.645939   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 2/120
	I0927 01:33:26.648365   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 3/120
	I0927 01:33:27.649925   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 4/120
	I0927 01:33:28.652342   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 5/120
	I0927 01:33:29.653975   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 6/120
	I0927 01:33:30.655494   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 7/120
	I0927 01:33:31.657827   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 8/120
	I0927 01:33:32.659389   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 9/120
	I0927 01:33:33.661483   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 10/120
	I0927 01:33:34.662936   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 11/120
	I0927 01:33:35.664488   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 12/120
	I0927 01:33:36.665786   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 13/120
	I0927 01:33:37.667149   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 14/120
	I0927 01:33:38.668923   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 15/120
	I0927 01:33:39.670274   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 16/120
	I0927 01:33:40.672416   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 17/120
	I0927 01:33:41.673804   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 18/120
	I0927 01:33:42.675377   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 19/120
	I0927 01:33:43.677102   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 20/120
	I0927 01:33:44.678505   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 21/120
	I0927 01:33:45.680987   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 22/120
	I0927 01:33:46.682617   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 23/120
	I0927 01:33:47.684057   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 24/120
	I0927 01:33:48.685894   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 25/120
	I0927 01:33:49.687594   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 26/120
	I0927 01:33:50.688974   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 27/120
	I0927 01:33:51.690311   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 28/120
	I0927 01:33:52.691886   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 29/120
	I0927 01:33:53.693765   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 30/120
	I0927 01:33:54.695473   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 31/120
	I0927 01:33:55.698121   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 32/120
	I0927 01:33:56.700024   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 33/120
	I0927 01:33:57.701409   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 34/120
	I0927 01:33:58.703636   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 35/120
	I0927 01:33:59.705786   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 36/120
	I0927 01:34:00.707127   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 37/120
	I0927 01:34:01.708478   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 38/120
	I0927 01:34:02.710150   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 39/120
	I0927 01:34:03.712187   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 40/120
	I0927 01:34:04.714111   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 41/120
	I0927 01:34:05.715229   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 42/120
	I0927 01:34:06.716475   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 43/120
	I0927 01:34:07.718259   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 44/120
	I0927 01:34:08.719822   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 45/120
	I0927 01:34:09.721256   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 46/120
	I0927 01:34:10.722833   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 47/120
	I0927 01:34:11.724112   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 48/120
	I0927 01:34:12.725931   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 49/120
	I0927 01:34:13.727973   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 50/120
	I0927 01:34:14.729741   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 51/120
	I0927 01:34:15.731193   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 52/120
	I0927 01:34:16.732726   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 53/120
	I0927 01:34:17.734647   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 54/120
	I0927 01:34:18.736650   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 55/120
	I0927 01:34:19.739248   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 56/120
	I0927 01:34:20.740647   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 57/120
	I0927 01:34:21.741909   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 58/120
	I0927 01:34:22.743497   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 59/120
	I0927 01:34:23.745369   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 60/120
	I0927 01:34:24.746897   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 61/120
	I0927 01:34:25.748486   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 62/120
	I0927 01:34:26.750627   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 63/120
	I0927 01:34:27.752065   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 64/120
	I0927 01:34:28.753676   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 65/120
	I0927 01:34:29.755166   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 66/120
	I0927 01:34:30.756381   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 67/120
	I0927 01:34:31.758013   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 68/120
	I0927 01:34:32.759262   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 69/120
	I0927 01:34:33.761162   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 70/120
	I0927 01:34:34.762521   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 71/120
	I0927 01:34:35.763995   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 72/120
	I0927 01:34:36.765610   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 73/120
	I0927 01:34:37.767674   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 74/120
	I0927 01:34:38.769633   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 75/120
	I0927 01:34:39.770864   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 76/120
	I0927 01:34:40.772226   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 77/120
	I0927 01:34:41.773545   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 78/120
	I0927 01:34:42.774739   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 79/120
	I0927 01:34:43.776679   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 80/120
	I0927 01:34:44.777982   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 81/120
	I0927 01:34:45.779362   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 82/120
	I0927 01:34:46.780752   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 83/120
	I0927 01:34:47.782021   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 84/120
	I0927 01:34:48.784050   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 85/120
	I0927 01:34:49.785320   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 86/120
	I0927 01:34:50.786653   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 87/120
	I0927 01:34:51.787946   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 88/120
	I0927 01:34:52.789366   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 89/120
	I0927 01:34:53.791658   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 90/120
	I0927 01:34:54.793785   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 91/120
	I0927 01:34:55.795103   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 92/120
	I0927 01:34:56.796375   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 93/120
	I0927 01:34:57.797688   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 94/120
	I0927 01:34:58.799551   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 95/120
	I0927 01:34:59.801884   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 96/120
	I0927 01:35:00.803319   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 97/120
	I0927 01:35:01.804913   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 98/120
	I0927 01:35:02.806358   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 99/120
	I0927 01:35:03.808851   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 100/120
	I0927 01:35:04.810373   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 101/120
	I0927 01:35:05.811629   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 102/120
	I0927 01:35:06.813145   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 103/120
	I0927 01:35:07.814209   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 104/120
	I0927 01:35:08.816068   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 105/120
	I0927 01:35:09.817390   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 106/120
	I0927 01:35:10.818473   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 107/120
	I0927 01:35:11.819928   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 108/120
	I0927 01:35:12.821196   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 109/120
	I0927 01:35:13.823184   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 110/120
	I0927 01:35:14.824657   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 111/120
	I0927 01:35:15.825774   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 112/120
	I0927 01:35:16.827205   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 113/120
	I0927 01:35:17.828710   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 114/120
	I0927 01:35:18.830716   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 115/120
	I0927 01:35:19.832072   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 116/120
	I0927 01:35:20.833609   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 117/120
	I0927 01:35:21.834788   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 118/120
	I0927 01:35:22.836098   67140 main.go:141] libmachine: (no-preload-521072) Waiting for machine to stop 119/120
	I0927 01:35:23.837234   67140 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0927 01:35:23.837302   67140 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0927 01:35:23.839375   67140 out.go:201] 
	W0927 01:35:23.840636   67140 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0927 01:35:23.840650   67140 out.go:270] * 
	* 
	W0927 01:35:23.843254   67140 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0927 01:35:23.844434   67140 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop. args "out/minikube-linux-amd64 stop -p no-preload-521072 --alsologtostderr -v=3": exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-521072 -n no-preload-521072
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-521072 -n no-preload-521072: exit status 3 (18.657566s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0927 01:35:42.503584   68309 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.246:22: connect: no route to host
	E0927 01:35:42.503606   68309 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.246:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-521072" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.20s)
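The failure pattern above is the kvm2 driver polling the VM once per second for 120 attempts and then giving up while the domain still reports "Running". A minimal Go sketch of that bounded polling loop (illustrative only, not minikube's actual implementation; isStopped is a hypothetical state probe):

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForStop polls a state probe once per second, up to maxAttempts times,
// mirroring the "Waiting for machine to stop N/120" lines in the log above.
func waitForStop(isStopped func() (bool, error), maxAttempts int) error {
	for i := 0; i < maxAttempts; i++ {
		stopped, err := isStopped()
		if err != nil {
			return err
		}
		if stopped {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// A probe that never reports the VM as stopped reproduces the
	// GUEST_STOP_TIMEOUT path seen in this test (shortened to 5 attempts).
	err := waitForStop(func() (bool, error) { return false, nil }, 5)
	fmt.Println("stop err:", err)
}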

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (139.04s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-245911 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-245911 --alsologtostderr -v=3: exit status 82 (2m0.498829502s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-245911"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0927 01:34:54.133919   68057 out.go:345] Setting OutFile to fd 1 ...
	I0927 01:34:54.134034   68057 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 01:34:54.134045   68057 out.go:358] Setting ErrFile to fd 2...
	I0927 01:34:54.134052   68057 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 01:34:54.134297   68057 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-14935/.minikube/bin
	I0927 01:34:54.134587   68057 out.go:352] Setting JSON to false
	I0927 01:34:54.134686   68057 mustload.go:65] Loading cluster: embed-certs-245911
	I0927 01:34:54.135056   68057 config.go:182] Loaded profile config "embed-certs-245911": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 01:34:54.135146   68057 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/embed-certs-245911/config.json ...
	I0927 01:34:54.135346   68057 mustload.go:65] Loading cluster: embed-certs-245911
	I0927 01:34:54.135459   68057 config.go:182] Loaded profile config "embed-certs-245911": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 01:34:54.135491   68057 stop.go:39] StopHost: embed-certs-245911
	I0927 01:34:54.135815   68057 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:34:54.135863   68057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:34:54.150356   68057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43681
	I0927 01:34:54.150779   68057 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:34:54.151433   68057 main.go:141] libmachine: Using API Version  1
	I0927 01:34:54.151458   68057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:34:54.151799   68057 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:34:54.154248   68057 out.go:177] * Stopping node "embed-certs-245911"  ...
	I0927 01:34:54.156005   68057 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0927 01:34:54.156041   68057 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	I0927 01:34:54.156321   68057 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0927 01:34:54.156348   68057 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:34:54.159610   68057 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:34:54.160115   68057 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:33:31 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:34:54.160136   68057 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:34:54.160357   68057 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:34:54.160549   68057 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:34:54.160698   68057 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:34:54.160819   68057 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/embed-certs-245911/id_rsa Username:docker}
	I0927 01:34:54.264945   68057 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0927 01:34:54.328304   68057 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0927 01:34:54.386760   68057 main.go:141] libmachine: Stopping "embed-certs-245911"...
	I0927 01:34:54.386797   68057 main.go:141] libmachine: (embed-certs-245911) Calling .GetState
	I0927 01:34:54.388740   68057 main.go:141] libmachine: (embed-certs-245911) Calling .Stop
	I0927 01:34:54.392891   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 0/120
	I0927 01:34:55.394172   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 1/120
	I0927 01:34:56.395486   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 2/120
	I0927 01:34:57.397648   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 3/120
	I0927 01:34:58.399075   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 4/120
	I0927 01:34:59.401295   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 5/120
	I0927 01:35:00.402986   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 6/120
	I0927 01:35:01.404720   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 7/120
	I0927 01:35:02.406058   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 8/120
	I0927 01:35:03.407652   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 9/120
	I0927 01:35:04.408967   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 10/120
	I0927 01:35:05.410362   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 11/120
	I0927 01:35:06.411673   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 12/120
	I0927 01:35:07.413128   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 13/120
	I0927 01:35:08.414333   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 14/120
	I0927 01:35:09.416318   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 15/120
	I0927 01:35:10.417848   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 16/120
	I0927 01:35:11.419162   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 17/120
	I0927 01:35:12.420595   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 18/120
	I0927 01:35:13.421928   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 19/120
	I0927 01:35:14.424105   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 20/120
	I0927 01:35:15.425494   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 21/120
	I0927 01:35:16.426838   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 22/120
	I0927 01:35:17.428390   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 23/120
	I0927 01:35:18.429628   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 24/120
	I0927 01:35:19.431550   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 25/120
	I0927 01:35:20.432984   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 26/120
	I0927 01:35:21.434248   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 27/120
	I0927 01:35:22.435729   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 28/120
	I0927 01:35:23.437754   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 29/120
	I0927 01:35:24.439662   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 30/120
	I0927 01:35:25.441194   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 31/120
	I0927 01:35:26.442397   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 32/120
	I0927 01:35:27.443686   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 33/120
	I0927 01:35:28.444983   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 34/120
	I0927 01:35:29.446933   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 35/120
	I0927 01:35:30.448484   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 36/120
	I0927 01:35:31.449889   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 37/120
	I0927 01:35:32.451260   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 38/120
	I0927 01:35:33.452578   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 39/120
	I0927 01:35:34.454874   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 40/120
	I0927 01:35:35.456193   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 41/120
	I0927 01:35:36.457594   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 42/120
	I0927 01:35:37.458884   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 43/120
	I0927 01:35:38.460451   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 44/120
	I0927 01:35:39.462555   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 45/120
	I0927 01:35:40.463827   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 46/120
	I0927 01:35:41.465223   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 47/120
	I0927 01:35:42.466623   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 48/120
	I0927 01:35:43.467886   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 49/120
	I0927 01:35:44.470163   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 50/120
	I0927 01:35:45.471492   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 51/120
	I0927 01:35:46.472757   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 52/120
	I0927 01:35:47.474082   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 53/120
	I0927 01:35:48.475914   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 54/120
	I0927 01:35:49.477069   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 55/120
	I0927 01:35:50.478301   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 56/120
	I0927 01:35:51.479605   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 57/120
	I0927 01:35:52.481031   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 58/120
	I0927 01:35:53.482483   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 59/120
	I0927 01:35:54.484630   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 60/120
	I0927 01:35:55.486201   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 61/120
	I0927 01:35:56.487700   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 62/120
	I0927 01:35:57.489504   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 63/120
	I0927 01:35:58.490729   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 64/120
	I0927 01:35:59.492611   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 65/120
	I0927 01:36:00.494190   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 66/120
	I0927 01:36:01.495505   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 67/120
	I0927 01:36:02.496886   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 68/120
	I0927 01:36:03.498166   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 69/120
	I0927 01:36:04.499484   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 70/120
	I0927 01:36:05.500917   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 71/120
	I0927 01:36:06.502173   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 72/120
	I0927 01:36:07.503608   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 73/120
	I0927 01:36:08.504949   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 74/120
	I0927 01:36:09.506856   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 75/120
	I0927 01:36:10.508309   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 76/120
	I0927 01:36:11.509660   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 77/120
	I0927 01:36:12.511013   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 78/120
	I0927 01:36:13.512685   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 79/120
	I0927 01:36:14.514875   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 80/120
	I0927 01:36:15.516188   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 81/120
	I0927 01:36:16.517610   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 82/120
	I0927 01:36:17.519032   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 83/120
	I0927 01:36:18.520360   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 84/120
	I0927 01:36:19.522387   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 85/120
	I0927 01:36:20.523747   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 86/120
	I0927 01:36:21.525231   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 87/120
	I0927 01:36:22.526788   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 88/120
	I0927 01:36:23.528231   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 89/120
	I0927 01:36:24.529723   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 90/120
	I0927 01:36:25.531553   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 91/120
	I0927 01:36:26.533010   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 92/120
	I0927 01:36:27.534227   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 93/120
	I0927 01:36:28.535573   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 94/120
	I0927 01:36:29.537273   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 95/120
	I0927 01:36:30.538880   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 96/120
	I0927 01:36:31.540334   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 97/120
	I0927 01:36:32.541709   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 98/120
	I0927 01:36:33.543049   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 99/120
	I0927 01:36:34.545228   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 100/120
	I0927 01:36:35.546729   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 101/120
	I0927 01:36:36.548285   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 102/120
	I0927 01:36:37.549768   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 103/120
	I0927 01:36:38.551595   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 104/120
	I0927 01:36:39.553633   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 105/120
	I0927 01:36:40.555221   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 106/120
	I0927 01:36:41.556632   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 107/120
	I0927 01:36:42.558099   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 108/120
	I0927 01:36:43.559503   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 109/120
	I0927 01:36:44.560929   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 110/120
	I0927 01:36:45.562480   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 111/120
	I0927 01:36:46.564297   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 112/120
	I0927 01:36:47.565957   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 113/120
	I0927 01:36:48.567270   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 114/120
	I0927 01:36:49.569132   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 115/120
	I0927 01:36:50.570507   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 116/120
	I0927 01:36:51.571792   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 117/120
	I0927 01:36:52.573773   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 118/120
	I0927 01:36:53.575036   68057 main.go:141] libmachine: (embed-certs-245911) Waiting for machine to stop 119/120
	I0927 01:36:54.575841   68057 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0927 01:36:54.575905   68057 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0927 01:36:54.577720   68057 out.go:201] 
	W0927 01:36:54.579057   68057 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0927 01:36:54.579101   68057 out.go:270] * 
	* 
	W0927 01:36:54.581596   68057 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0927 01:36:54.582914   68057 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop. args "out/minikube-linux-amd64 stop -p embed-certs-245911 --alsologtostderr -v=3": exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-245911 -n embed-certs-245911
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-245911 -n embed-certs-245911: exit status 3 (18.54257975s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0927 01:37:13.127631   68936 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.158:22: connect: no route to host
	E0927 01:37:13.127652   68936 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.158:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-245911" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.04s)
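Here, as in the previous failure, the post-mortem status check cannot even reach the node: every call ends in "dial tcp 192.168.39.158:22: connect: no route to host". A small Go sketch of the same reachability probe against the SSH port, using only the standard library (the IP is taken from the log above; adjust per profile):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Node IP and SSH port from the log above.
	addr := "192.168.39.158:22"
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		// While the VM is stuck mid-shutdown this typically returns
		// "no route to host" or times out, matching the status errors above.
		fmt.Println("ssh port unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("ssh port reachable:", addr)
}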

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-368295 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-368295 --alsologtostderr -v=3: exit status 82 (2m0.527950752s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-368295"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0927 01:35:16.943180   68259 out.go:345] Setting OutFile to fd 1 ...
	I0927 01:35:16.943345   68259 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 01:35:16.943356   68259 out.go:358] Setting ErrFile to fd 2...
	I0927 01:35:16.943361   68259 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 01:35:16.943578   68259 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-14935/.minikube/bin
	I0927 01:35:16.943844   68259 out.go:352] Setting JSON to false
	I0927 01:35:16.943936   68259 mustload.go:65] Loading cluster: default-k8s-diff-port-368295
	I0927 01:35:16.944322   68259 config.go:182] Loaded profile config "default-k8s-diff-port-368295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 01:35:16.944403   68259 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/config.json ...
	I0927 01:35:16.944593   68259 mustload.go:65] Loading cluster: default-k8s-diff-port-368295
	I0927 01:35:16.944728   68259 config.go:182] Loaded profile config "default-k8s-diff-port-368295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 01:35:16.944764   68259 stop.go:39] StopHost: default-k8s-diff-port-368295
	I0927 01:35:16.945302   68259 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:35:16.945354   68259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:35:16.960872   68259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46203
	I0927 01:35:16.961365   68259 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:35:16.961912   68259 main.go:141] libmachine: Using API Version  1
	I0927 01:35:16.961938   68259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:35:16.962293   68259 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:35:16.964760   68259 out.go:177] * Stopping node "default-k8s-diff-port-368295"  ...
	I0927 01:35:16.966314   68259 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0927 01:35:16.966361   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:35:16.966607   68259 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0927 01:35:16.966631   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:35:16.969495   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:35:16.969930   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:33:58 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:35:16.969959   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:35:16.970141   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:35:16.970336   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:35:16.970508   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:35:16.970664   68259 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/default-k8s-diff-port-368295/id_rsa Username:docker}
	I0927 01:35:17.087557   68259 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0927 01:35:17.165964   68259 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0927 01:35:17.229000   68259 main.go:141] libmachine: Stopping "default-k8s-diff-port-368295"...
	I0927 01:35:17.229029   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetState
	I0927 01:35:17.230590   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .Stop
	I0927 01:35:17.233889   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 0/120
	I0927 01:35:18.235370   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 1/120
	I0927 01:35:19.236731   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 2/120
	I0927 01:35:20.238100   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 3/120
	I0927 01:35:21.239239   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 4/120
	I0927 01:35:22.241119   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 5/120
	I0927 01:35:23.242470   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 6/120
	I0927 01:35:24.243851   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 7/120
	I0927 01:35:25.245173   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 8/120
	I0927 01:35:26.246481   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 9/120
	I0927 01:35:27.247788   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 10/120
	I0927 01:35:28.249042   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 11/120
	I0927 01:35:29.250426   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 12/120
	I0927 01:35:30.251760   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 13/120
	I0927 01:35:31.253273   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 14/120
	I0927 01:35:32.255081   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 15/120
	I0927 01:35:33.256360   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 16/120
	I0927 01:35:34.257776   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 17/120
	I0927 01:35:35.259091   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 18/120
	I0927 01:35:36.260463   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 19/120
	I0927 01:35:37.262442   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 20/120
	I0927 01:35:38.263796   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 21/120
	I0927 01:35:39.265220   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 22/120
	I0927 01:35:40.266603   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 23/120
	I0927 01:35:41.268036   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 24/120
	I0927 01:35:42.269865   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 25/120
	I0927 01:35:43.271207   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 26/120
	I0927 01:35:44.272741   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 27/120
	I0927 01:35:45.274178   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 28/120
	I0927 01:35:46.275570   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 29/120
	I0927 01:35:47.277736   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 30/120
	I0927 01:35:48.279246   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 31/120
	I0927 01:35:49.280218   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 32/120
	I0927 01:35:50.281830   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 33/120
	I0927 01:35:51.283114   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 34/120
	I0927 01:35:52.284704   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 35/120
	I0927 01:35:53.286227   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 36/120
	I0927 01:35:54.288411   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 37/120
	I0927 01:35:55.289671   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 38/120
	I0927 01:35:56.291154   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 39/120
	I0927 01:35:57.293435   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 40/120
	I0927 01:35:58.294848   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 41/120
	I0927 01:35:59.296282   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 42/120
	I0927 01:36:00.297818   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 43/120
	I0927 01:36:01.299229   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 44/120
	I0927 01:36:02.300450   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 45/120
	I0927 01:36:03.301944   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 46/120
	I0927 01:36:04.303331   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 47/120
	I0927 01:36:05.305021   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 48/120
	I0927 01:36:06.306341   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 49/120
	I0927 01:36:07.308641   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 50/120
	I0927 01:36:08.309882   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 51/120
	I0927 01:36:09.311617   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 52/120
	I0927 01:36:10.313235   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 53/120
	I0927 01:36:11.314591   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 54/120
	I0927 01:36:12.316930   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 55/120
	I0927 01:36:13.318317   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 56/120
	I0927 01:36:14.319810   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 57/120
	I0927 01:36:15.321130   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 58/120
	I0927 01:36:16.322661   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 59/120
	I0927 01:36:17.325112   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 60/120
	I0927 01:36:18.326612   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 61/120
	I0927 01:36:19.328403   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 62/120
	I0927 01:36:20.329877   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 63/120
	I0927 01:36:21.331175   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 64/120
	I0927 01:36:22.333052   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 65/120
	I0927 01:36:23.334344   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 66/120
	I0927 01:36:24.335695   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 67/120
	I0927 01:36:25.337823   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 68/120
	I0927 01:36:26.339369   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 69/120
	I0927 01:36:27.341727   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 70/120
	I0927 01:36:28.343291   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 71/120
	I0927 01:36:29.344863   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 72/120
	I0927 01:36:30.346244   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 73/120
	I0927 01:36:31.347885   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 74/120
	I0927 01:36:32.349866   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 75/120
	I0927 01:36:33.351275   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 76/120
	I0927 01:36:34.352764   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 77/120
	I0927 01:36:35.354011   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 78/120
	I0927 01:36:36.355562   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 79/120
	I0927 01:36:37.357767   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 80/120
	I0927 01:36:38.359385   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 81/120
	I0927 01:36:39.360812   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 82/120
	I0927 01:36:40.362345   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 83/120
	I0927 01:36:41.364064   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 84/120
	I0927 01:36:42.365979   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 85/120
	I0927 01:36:43.367420   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 86/120
	I0927 01:36:44.368772   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 87/120
	I0927 01:36:45.370215   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 88/120
	I0927 01:36:46.371740   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 89/120
	I0927 01:36:47.373779   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 90/120
	I0927 01:36:48.375235   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 91/120
	I0927 01:36:49.376675   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 92/120
	I0927 01:36:50.378040   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 93/120
	I0927 01:36:51.379660   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 94/120
	I0927 01:36:52.381707   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 95/120
	I0927 01:36:53.383227   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 96/120
	I0927 01:36:54.384783   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 97/120
	I0927 01:36:55.386218   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 98/120
	I0927 01:36:56.387559   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 99/120
	I0927 01:36:57.389667   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 100/120
	I0927 01:36:58.391235   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 101/120
	I0927 01:36:59.392621   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 102/120
	I0927 01:37:00.394065   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 103/120
	I0927 01:37:01.395487   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 104/120
	I0927 01:37:02.397586   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 105/120
	I0927 01:37:03.398916   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 106/120
	I0927 01:37:04.400397   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 107/120
	I0927 01:37:05.401751   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 108/120
	I0927 01:37:06.403172   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 109/120
	I0927 01:37:07.404769   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 110/120
	I0927 01:37:08.406166   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 111/120
	I0927 01:37:09.407664   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 112/120
	I0927 01:37:10.408986   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 113/120
	I0927 01:37:11.410460   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 114/120
	I0927 01:37:12.412537   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 115/120
	I0927 01:37:13.414019   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 116/120
	I0927 01:37:14.415513   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 117/120
	I0927 01:37:15.417027   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 118/120
	I0927 01:37:16.418546   68259 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for machine to stop 119/120
	I0927 01:37:17.420062   68259 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0927 01:37:17.420118   68259 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0927 01:37:17.422337   68259 out.go:201] 
	W0927 01:37:17.423638   68259 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0927 01:37:17.423656   68259 out.go:270] * 
	* 
	W0927 01:37:17.426494   68259 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0927 01:37:17.427790   68259 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-368295 --alsologtostderr -v=3": exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-368295 -n default-k8s-diff-port-368295
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-368295 -n default-k8s-diff-port-368295: exit status 3 (18.481927285s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0927 01:37:35.911589   69096 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.83:22: connect: no route to host
	E0927 01:37:35.911611   69096 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.83:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-368295" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.01s)
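All three Stop failures above hit the same GUEST_STOP_TIMEOUT on the kvm2 driver. One way to see what libvirt itself thinks of the guests is to query the domains directly; the sketch below shells out to virsh and assumes virsh is installed on the host and that the libvirt domain names match the minikube profile names (both are assumptions, not something this report confirms):

package main

import (
	"fmt"
	"os/exec"
)

// domState asks virsh for the libvirt domain state ("running", "shut off", ...).
func domState(domain string) (string, error) {
	out, err := exec.Command("virsh", "domstate", domain).CombinedOutput()
	return string(out), err
}

func main() {
	profiles := []string{"no-preload-521072", "embed-certs-245911", "default-k8s-diff-port-368295"}
	for _, p := range profiles {
		state, err := domState(p)
		if err != nil {
			fmt.Printf("%s: virsh error: %v (%s)\n", p, err, state)
			continue
		}
		fmt.Printf("%s: %s", p, state)
	}
}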

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-521072 -n no-preload-521072
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-521072 -n no-preload-521072: exit status 3 (3.168033228s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0927 01:35:45.671610   68407 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.246:22: connect: no route to host
	E0927 01:35:45.671628   68407 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.246:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-521072 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-521072 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153418423s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.246:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-521072 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-521072 -n no-preload-521072
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-521072 -n no-preload-521072: exit status 3 (3.062217044s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0927 01:35:54.887637   68630 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.246:22: connect: no route to host
	E0927 01:35:54.887656   68630 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.246:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-521072" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)
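Both status probes above fail with "dial tcp 192.168.50.246:22: connect: no route to host": after the Stop step the guest's SSH port is unreachable, so `status` reports "Error" instead of the expected "Stopped". A minimal manual triage sketch on the CI host, assuming the libvirt domain is named after the profile (no-preload-521072) and that virsh and nc are available; this is not part of the test itself:

    # Is the domain actually shut off, still running, or stuck mid-shutdown?
    virsh list --all
    virsh domstate no-preload-521072

    # Does the guest still hold the lease for 192.168.50.246, and is sshd reachable?
    virsh domifaddr no-preload-521072
    nc -vz -w 3 192.168.50.246 22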

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.47s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-612261 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-612261 create -f testdata/busybox.yaml: exit status 1 (41.697329ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-612261" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-612261 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-612261 -n old-k8s-version-612261
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-612261 -n old-k8s-version-612261: exit status 6 (214.120107ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0927 01:35:49.179188   68541 status.go:451] kubeconfig endpoint: get endpoint: "old-k8s-version-612261" does not appear in /home/jenkins/minikube-integration/19711-14935/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-612261" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-612261 -n old-k8s-version-612261
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-612261 -n old-k8s-version-612261: exit status 6 (215.220947ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0927 01:35:49.394746   68571 status.go:451] kubeconfig endpoint: get endpoint: "old-k8s-version-612261" does not appear in /home/jenkins/minikube-integration/19711-14935/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-612261" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.47s)
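The `create -f testdata/busybox.yaml` step fails because kubectl has no context named "old-k8s-version-612261", and the post-mortem confirms the profile's endpoint is missing from the kubeconfig even though the VM reports "Running". A sketch of how one might inspect and repair the kubeconfig by hand, assuming the same KUBECONFIG path the job uses:

    export KUBECONFIG=/home/jenkins/minikube-integration/19711-14935/kubeconfig

    # Which contexts and clusters does the file actually contain?
    kubectl config get-contexts
    kubectl config view -o jsonpath='{range .clusters[*]}{.name}{"\n"}{end}'

    # Rewrite the context for this profile from the live machine state,
    # as the warning in the status output suggests.
    out/minikube-linux-amd64 update-context -p old-k8s-version-612261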

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (91.44s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-612261 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-612261 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m31.166723407s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-612261 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-612261 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-612261 describe deploy/metrics-server -n kube-system: exit status 1 (43.104002ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-612261" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-612261 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-612261 -n old-k8s-version-612261
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-612261 -n old-k8s-version-612261: exit status 6 (226.135571ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0927 01:37:20.830508   69138 status.go:451] kubeconfig endpoint: get endpoint: "old-k8s-version-612261" does not appear in /home/jenkins/minikube-integration/19711-14935/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-612261" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (91.44s)
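Here the addon manifests are applied inside the guest ("sudo KUBECONFIG=/var/lib/minikube/kubeconfig ... kubectl apply ...") and fail with "connection to the server localhost:8443 was refused", so the VM is reachable but the v1.20.0 apiserver never came up. A sketch of checking the apiserver container from the host, assuming `minikube ssh` can still reach the guest; the container ID below is a placeholder:

    # List apiserver containers (running and exited) via CRI-O.
    out/minikube-linux-amd64 ssh -p old-k8s-version-612261 -- sudo crictl ps -a --name kube-apiserver

    # Inspect the logs of whichever container ID the previous command printed.
    out/minikube-linux-amd64 ssh -p old-k8s-version-612261 -- sudo crictl logs <container-id>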

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-245911 -n embed-certs-245911
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-245911 -n embed-certs-245911: exit status 3 (3.167610746s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0927 01:37:16.295849   69031 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.158:22: connect: no route to host
	E0927 01:37:16.295874   69031 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.158:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-245911 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-245911 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153014886s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.158:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-245911 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-245911 -n embed-certs-245911
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-245911 -n embed-certs-245911: exit status 3 (3.062528486s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0927 01:37:25.511664   69204 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.158:22: connect: no route to host
	E0927 01:37:25.511685   69204 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.158:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-245911" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)
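This is the same signature as the no-preload case above: after the Stop step the status probe cannot reach 192.168.39.158:22, so the host state comes back "Error" rather than "Stopped". A minimal sketch of what waiting for a clean shutdown at the hypervisor level could look like, assuming the libvirt domain name matches the profile; this is illustrative only, not how the test waits:

    # Poll libvirt until the guest reports "shut off" (or give up after ~60s).
    for i in $(seq 1 30); do
      state=$(virsh domstate embed-certs-245911 2>/dev/null)
      [ "$state" = "shut off" ] && break
      sleep 2
    done
    echo "final domain state: ${state:-unknown}"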

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (716.05s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-612261 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-612261 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (11m52.46645713s)

                                                
                                                
-- stdout --
	* [old-k8s-version-612261] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19711
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19711-14935/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-14935/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-612261" primary control-plane node in "old-k8s-version-612261" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-612261" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0927 01:37:27.347097   69333 out.go:345] Setting OutFile to fd 1 ...
	I0927 01:37:27.347367   69333 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 01:37:27.347376   69333 out.go:358] Setting ErrFile to fd 2...
	I0927 01:37:27.347381   69333 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 01:37:27.347563   69333 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-14935/.minikube/bin
	I0927 01:37:27.348082   69333 out.go:352] Setting JSON to false
	I0927 01:37:27.348933   69333 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8392,"bootTime":1727392655,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 01:37:27.349026   69333 start.go:139] virtualization: kvm guest
	I0927 01:37:27.351015   69333 out.go:177] * [old-k8s-version-612261] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0927 01:37:27.352193   69333 notify.go:220] Checking for updates...
	I0927 01:37:27.352204   69333 out.go:177]   - MINIKUBE_LOCATION=19711
	I0927 01:37:27.353481   69333 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 01:37:27.354619   69333 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 01:37:27.355821   69333 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-14935/.minikube
	I0927 01:37:27.357025   69333 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0927 01:37:27.358129   69333 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 01:37:27.359859   69333 config.go:182] Loaded profile config "old-k8s-version-612261": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0927 01:37:27.360280   69333 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:37:27.360350   69333 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:37:27.375858   69333 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38787
	I0927 01:37:27.376266   69333 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:37:27.376857   69333 main.go:141] libmachine: Using API Version  1
	I0927 01:37:27.376881   69333 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:37:27.377202   69333 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:37:27.377389   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	I0927 01:37:27.379169   69333 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0927 01:37:27.380212   69333 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 01:37:27.380534   69333 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:37:27.380568   69333 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:37:27.395145   69333 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40091
	I0927 01:37:27.395603   69333 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:37:27.396036   69333 main.go:141] libmachine: Using API Version  1
	I0927 01:37:27.396059   69333 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:37:27.396364   69333 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:37:27.396548   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	I0927 01:37:27.431010   69333 out.go:177] * Using the kvm2 driver based on existing profile
	I0927 01:37:27.432279   69333 start.go:297] selected driver: kvm2
	I0927 01:37:27.432291   69333 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-612261 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-612261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.129 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 01:37:27.432389   69333 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 01:37:27.433048   69333 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 01:37:27.433140   69333 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19711-14935/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0927 01:37:27.448055   69333 install.go:137] /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I0927 01:37:27.448479   69333 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 01:37:27.448510   69333 cni.go:84] Creating CNI manager for ""
	I0927 01:37:27.448563   69333 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:37:27.448606   69333 start.go:340] cluster config:
	{Name:old-k8s-version-612261 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-612261 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.129 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 01:37:27.448744   69333 iso.go:125] acquiring lock: {Name:mkc202a14fbe20838e31e7efc444c4f65351f9ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 01:37:27.450665   69333 out.go:177] * Starting "old-k8s-version-612261" primary control-plane node in "old-k8s-version-612261" cluster
	I0927 01:37:27.451813   69333 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0927 01:37:27.451848   69333 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0927 01:37:27.451860   69333 cache.go:56] Caching tarball of preloaded images
	I0927 01:37:27.451937   69333 preload.go:172] Found /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0927 01:37:27.451952   69333 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0927 01:37:27.452054   69333 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/config.json ...
	I0927 01:37:27.452237   69333 start.go:360] acquireMachinesLock for old-k8s-version-612261: {Name:mkef866a3f2eb57063758ef06140cca3a2e40b18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 01:40:52.800240   69333 start.go:364] duration metric: took 3m25.347970249s to acquireMachinesLock for "old-k8s-version-612261"
	I0927 01:40:52.800310   69333 start.go:96] Skipping create...Using existing machine configuration
	I0927 01:40:52.800317   69333 fix.go:54] fixHost starting: 
	I0927 01:40:52.800742   69333 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:40:52.800800   69333 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:40:52.818217   69333 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45095
	I0927 01:40:52.818644   69333 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:40:52.819065   69333 main.go:141] libmachine: Using API Version  1
	I0927 01:40:52.819086   69333 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:40:52.819408   69333 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:40:52.819544   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	I0927 01:40:52.819646   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetState
	I0927 01:40:52.820921   69333 fix.go:112] recreateIfNeeded on old-k8s-version-612261: state=Stopped err=<nil>
	I0927 01:40:52.820956   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	W0927 01:40:52.821110   69333 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 01:40:52.823209   69333 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-612261" ...
	I0927 01:40:52.824650   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .Start
	I0927 01:40:52.824802   69333 main.go:141] libmachine: (old-k8s-version-612261) Ensuring networks are active...
	I0927 01:40:52.825590   69333 main.go:141] libmachine: (old-k8s-version-612261) Ensuring network default is active
	I0927 01:40:52.825908   69333 main.go:141] libmachine: (old-k8s-version-612261) Ensuring network mk-old-k8s-version-612261 is active
	I0927 01:40:52.826326   69333 main.go:141] libmachine: (old-k8s-version-612261) Getting domain xml...
	I0927 01:40:52.827108   69333 main.go:141] libmachine: (old-k8s-version-612261) Creating domain...
	I0927 01:40:54.071322   69333 main.go:141] libmachine: (old-k8s-version-612261) Waiting to get IP...
	I0927 01:40:54.072357   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:40:54.072756   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:40:54.072821   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:40:54.072738   70279 retry.go:31] will retry after 264.648837ms: waiting for machine to come up
	I0927 01:40:54.339366   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:40:54.339799   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:40:54.339827   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:40:54.339731   70279 retry.go:31] will retry after 343.432635ms: waiting for machine to come up
	I0927 01:40:54.685260   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:40:54.685746   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:40:54.685780   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:40:54.685714   70279 retry.go:31] will retry after 455.276623ms: waiting for machine to come up
	I0927 01:40:55.142206   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:40:55.142679   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:40:55.142708   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:40:55.142637   70279 retry.go:31] will retry after 419.074502ms: waiting for machine to come up
	I0927 01:40:55.563324   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:40:55.565342   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:40:55.565368   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:40:55.565287   70279 retry.go:31] will retry after 587.161471ms: waiting for machine to come up
	I0927 01:40:56.154584   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:40:56.155182   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:40:56.155220   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:40:56.155109   70279 retry.go:31] will retry after 782.426926ms: waiting for machine to come up
	I0927 01:40:56.938784   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:40:56.939201   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:40:56.939228   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:40:56.939132   70279 retry.go:31] will retry after 781.231902ms: waiting for machine to come up
	I0927 01:40:57.722147   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:40:57.722637   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:40:57.722657   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:40:57.722593   70279 retry.go:31] will retry after 1.223133601s: waiting for machine to come up
	I0927 01:40:58.947836   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:40:58.948362   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:40:58.948388   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:40:58.948326   70279 retry.go:31] will retry after 1.155368003s: waiting for machine to come up
	I0927 01:41:00.105812   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:00.106288   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:41:00.106356   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:41:00.106280   70279 retry.go:31] will retry after 2.324904017s: waiting for machine to come up
	I0927 01:41:02.432597   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:02.433066   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:41:02.433096   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:41:02.433026   70279 retry.go:31] will retry after 2.598889471s: waiting for machine to come up
	I0927 01:41:05.034614   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:05.035001   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:41:05.035023   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:41:05.034973   70279 retry.go:31] will retry after 3.064943329s: waiting for machine to come up
	I0927 01:41:08.101834   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:08.102324   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:41:08.102358   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:41:08.102283   70279 retry.go:31] will retry after 4.242138543s: waiting for machine to come up
	I0927 01:41:12.347378   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.347831   69333 main.go:141] libmachine: (old-k8s-version-612261) Found IP for machine: 192.168.72.129
	I0927 01:41:12.347855   69333 main.go:141] libmachine: (old-k8s-version-612261) Reserving static IP address...
	I0927 01:41:12.347872   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has current primary IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.348468   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "old-k8s-version-612261", mac: "52:54:00:f1:a6:2e", ip: "192.168.72.129"} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:12.348494   69333 main.go:141] libmachine: (old-k8s-version-612261) Reserved static IP address: 192.168.72.129
	I0927 01:41:12.348507   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | skip adding static IP to network mk-old-k8s-version-612261 - found existing host DHCP lease matching {name: "old-k8s-version-612261", mac: "52:54:00:f1:a6:2e", ip: "192.168.72.129"}
	I0927 01:41:12.348518   69333 main.go:141] libmachine: (old-k8s-version-612261) Waiting for SSH to be available...
	I0927 01:41:12.348537   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | Getting to WaitForSSH function...
	I0927 01:41:12.350917   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.351287   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:12.351335   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.351464   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | Using SSH client type: external
	I0927 01:41:12.351485   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | Using SSH private key: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/old-k8s-version-612261/id_rsa (-rw-------)
	I0927 01:41:12.351516   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.129 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19711-14935/.minikube/machines/old-k8s-version-612261/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 01:41:12.351525   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | About to run SSH command:
	I0927 01:41:12.351533   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | exit 0
	I0927 01:41:12.471347   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | SSH cmd err, output: <nil>: 
	I0927 01:41:12.471724   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetConfigRaw
	I0927 01:41:12.472352   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetIP
	I0927 01:41:12.474886   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.475299   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:12.475340   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.475628   69333 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/config.json ...
	I0927 01:41:12.475857   69333 machine.go:93] provisionDockerMachine start ...
	I0927 01:41:12.475879   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	I0927 01:41:12.476115   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:12.478594   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.478918   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:12.478945   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.479126   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:41:12.479340   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:12.479536   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:12.479695   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:41:12.479859   69333 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:12.480093   69333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I0927 01:41:12.480116   69333 main.go:141] libmachine: About to run SSH command:
	hostname
	I0927 01:41:12.579536   69333 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0927 01:41:12.579562   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetMachineName
	I0927 01:41:12.579785   69333 buildroot.go:166] provisioning hostname "old-k8s-version-612261"
	I0927 01:41:12.579798   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetMachineName
	I0927 01:41:12.579965   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:12.582679   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.583001   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:12.583027   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.583166   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:41:12.583372   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:12.583562   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:12.583727   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:41:12.583924   69333 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:12.584169   69333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I0927 01:41:12.584187   69333 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-612261 && echo "old-k8s-version-612261" | sudo tee /etc/hostname
	I0927 01:41:12.702223   69333 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-612261
	
	I0927 01:41:12.702252   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:12.705201   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.705564   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:12.705601   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.705817   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:41:12.706012   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:12.706154   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:12.706344   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:41:12.706538   69333 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:12.706720   69333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I0927 01:41:12.706738   69333 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-612261' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-612261/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-612261' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 01:41:12.816316   69333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 01:41:12.816343   69333 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19711-14935/.minikube CaCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19711-14935/.minikube}
	I0927 01:41:12.816376   69333 buildroot.go:174] setting up certificates
	I0927 01:41:12.816386   69333 provision.go:84] configureAuth start
	I0927 01:41:12.816394   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetMachineName
	I0927 01:41:12.816678   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetIP
	I0927 01:41:12.819190   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.819487   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:12.819511   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.819696   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:12.821843   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.822166   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:12.822203   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.822382   69333 provision.go:143] copyHostCerts
	I0927 01:41:12.822453   69333 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem, removing ...
	I0927 01:41:12.822466   69333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem
	I0927 01:41:12.822533   69333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem (1675 bytes)
	I0927 01:41:12.822641   69333 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem, removing ...
	I0927 01:41:12.822650   69333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem
	I0927 01:41:12.822682   69333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem (1078 bytes)
	I0927 01:41:12.822756   69333 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem, removing ...
	I0927 01:41:12.822766   69333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem
	I0927 01:41:12.822792   69333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem (1123 bytes)
	I0927 01:41:12.822859   69333 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-612261 san=[127.0.0.1 192.168.72.129 localhost minikube old-k8s-version-612261]
	I0927 01:41:13.054632   69333 provision.go:177] copyRemoteCerts
	I0927 01:41:13.054706   69333 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 01:41:13.054740   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:13.057895   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.058296   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:13.058329   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.058478   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:41:13.058696   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:13.058907   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:41:13.059062   69333 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/old-k8s-version-612261/id_rsa Username:docker}
	I0927 01:41:13.146378   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0927 01:41:13.176435   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0927 01:41:13.208974   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0927 01:41:13.240179   69333 provision.go:87] duration metric: took 423.77487ms to configureAuth
	I0927 01:41:13.240211   69333 buildroot.go:189] setting minikube options for container-runtime
	I0927 01:41:13.240412   69333 config.go:182] Loaded profile config "old-k8s-version-612261": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0927 01:41:13.240498   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:13.243514   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.243963   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:13.243991   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.244174   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:41:13.244419   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:13.244641   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:13.244838   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:41:13.245039   69333 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:13.245263   69333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I0927 01:41:13.245284   69333 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 01:41:13.476519   69333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 01:41:13.476545   69333 machine.go:96] duration metric: took 1.000674334s to provisionDockerMachine
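
The SSH command above writes a CRIO_MINIKUBE_OPTIONS drop-in under /etc/sysconfig so the service CIDR is treated as an insecure registry range, then restarts CRI-O. A hedged Go sketch of assembling that same remote command; runSSH is a hypothetical stand-in for minikube's SSH runner and only prints the command here:

package main

import "fmt"

// runSSH is a hypothetical helper standing in for minikube's SSH runner; it
// just prints the command that would be executed on the guest.
func runSSH(cmd string) error {
	fmt.Println("ssh>", cmd)
	return nil
}

// configureCRIOOptions mirrors the remote one-liner from the log: write the
// CRIO_MINIKUBE_OPTIONS drop-in and restart CRI-O.
func configureCRIOOptions(serviceCIDR string) error {
	cmd := fmt.Sprintf(`sudo mkdir -p /etc/sysconfig && printf %%s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`, serviceCIDR)
	return runSSH(cmd)
}

func main() {
	_ = configureCRIOOptions("10.96.0.0/12")
}
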
	I0927 01:41:13.476558   69333 start.go:293] postStartSetup for "old-k8s-version-612261" (driver="kvm2")
	I0927 01:41:13.476574   69333 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 01:41:13.476593   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	I0927 01:41:13.476914   69333 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 01:41:13.476942   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:13.479326   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.479662   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:13.479686   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.479835   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:41:13.480027   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:13.480182   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:41:13.480337   69333 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/old-k8s-version-612261/id_rsa Username:docker}
	I0927 01:41:13.563321   69333 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 01:41:13.567844   69333 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 01:41:13.567867   69333 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/addons for local assets ...
	I0927 01:41:13.567929   69333 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/files for local assets ...
	I0927 01:41:13.568012   69333 filesync.go:149] local asset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> 221382.pem in /etc/ssl/certs
	I0927 01:41:13.568109   69333 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 01:41:13.578453   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /etc/ssl/certs/221382.pem (1708 bytes)
	I0927 01:41:13.603888   69333 start.go:296] duration metric: took 127.316429ms for postStartSetup
	I0927 01:41:13.603924   69333 fix.go:56] duration metric: took 20.803606957s for fixHost
	I0927 01:41:13.603948   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:13.606500   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.606921   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:13.606949   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.607189   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:41:13.607419   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:13.607600   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:13.607726   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:41:13.608048   69333 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:13.608234   69333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I0927 01:41:13.608245   69333 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 01:41:13.708261   69333 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727401273.683707076
	
	I0927 01:41:13.708284   69333 fix.go:216] guest clock: 1727401273.683707076
	I0927 01:41:13.708293   69333 fix.go:229] Guest: 2024-09-27 01:41:13.683707076 +0000 UTC Remote: 2024-09-27 01:41:13.603929237 +0000 UTC m=+226.291347697 (delta=79.777839ms)
	I0927 01:41:13.708348   69333 fix.go:200] guest clock delta is within tolerance: 79.777839ms
	I0927 01:41:13.708357   69333 start.go:83] releasing machines lock for "old-k8s-version-612261", held for 20.90807118s
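
The fix step above reads the guest clock with `date +%s.%N`, compares it against the host time, and accepts the ~80ms delta because it falls inside the skew tolerance. A minimal sketch of that comparison, using the guest timestamp from the log and an assumed 1s tolerance (the exact threshold is not shown in this log):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	// Output of `date +%s.%N` on the guest, taken from the log above.
	guestRaw := "1727401273.683707076"

	parts := strings.SplitN(guestRaw, ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec)

	host := time.Now()
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}

	// 1s is an assumed tolerance for this sketch, not minikube's actual value.
	const tolerance = time.Second
	fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, delta <= tolerance)
}
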
	I0927 01:41:13.708392   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	I0927 01:41:13.708665   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetIP
	I0927 01:41:13.711474   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.711873   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:13.711905   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.712035   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	I0927 01:41:13.712569   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	I0927 01:41:13.712748   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	I0927 01:41:13.712832   69333 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 01:41:13.712878   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:13.712949   69333 ssh_runner.go:195] Run: cat /version.json
	I0927 01:41:13.712971   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:13.715681   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.715820   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.716024   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:13.716043   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.716200   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:13.716225   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.716235   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:41:13.716370   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:41:13.716487   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:13.716548   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:13.716622   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:41:13.716728   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:41:13.716779   69333 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/old-k8s-version-612261/id_rsa Username:docker}
	I0927 01:41:13.716859   69333 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/old-k8s-version-612261/id_rsa Username:docker}
	I0927 01:41:13.826638   69333 ssh_runner.go:195] Run: systemctl --version
	I0927 01:41:13.832901   69333 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 01:41:13.986132   69333 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 01:41:13.992644   69333 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 01:41:13.992728   69333 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 01:41:14.008962   69333 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0927 01:41:14.008991   69333 start.go:495] detecting cgroup driver to use...
	I0927 01:41:14.009051   69333 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 01:41:14.025047   69333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 01:41:14.040807   69333 docker.go:217] disabling cri-docker service (if available) ...
	I0927 01:41:14.040857   69333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 01:41:14.055972   69333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 01:41:14.072654   69333 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 01:41:14.210869   69333 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 01:41:14.403536   69333 docker.go:233] disabling docker service ...
	I0927 01:41:14.403596   69333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 01:41:14.421549   69333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 01:41:14.436288   69333 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 01:41:14.569634   69333 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 01:41:14.701517   69333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 01:41:14.716794   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 01:41:14.740622   69333 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0927 01:41:14.740685   69333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:14.756563   69333 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 01:41:14.756626   69333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:14.768952   69333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:14.781314   69333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
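
The sed invocations above pin the pause image to registry.k8s.io/pause:3.2, switch cgroup_manager to cgroupfs, and re-add conmon_cgroup = "pod" in 02-crio.conf. A rough Go equivalent of those line rewrites, applied to an in-memory sample of the file (illustration only; the sample contents are made up):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Made-up sample of /etc/crio/crio.conf.d/02-crio.conf before the edits.
	conf := `pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`

	// sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.2"`)

	// sed -i '/conmon_cgroup = .*/d'
	conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n`).ReplaceAllString(conf, "")

	// sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' followed by
	// sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")

	fmt.Print(conf)
}
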
	I0927 01:41:14.793578   69333 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 01:41:14.806302   69333 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 01:41:14.822967   69333 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0927 01:41:14.823036   69333 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0927 01:41:14.837673   69333 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 01:41:14.848486   69333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:41:14.988181   69333 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0927 01:41:15.100581   69333 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 01:41:15.100664   69333 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 01:41:15.105816   69333 start.go:563] Will wait 60s for crictl version
	I0927 01:41:15.105883   69333 ssh_runner.go:195] Run: which crictl
	I0927 01:41:15.110375   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 01:41:15.154944   69333 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
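
Before the version probes above, the start step waits up to 60s for /var/run/crio/crio.sock to appear and then for crictl to answer. A minimal polling sketch with the same deadline, checking a local path instead of going over SSH:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls for a path until it exists or the deadline expires,
// mirroring the "Will wait 60s for socket path" step in the log.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("socket is ready")
}
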
	I0927 01:41:15.155039   69333 ssh_runner.go:195] Run: crio --version
	I0927 01:41:15.188172   69333 ssh_runner.go:195] Run: crio --version
	I0927 01:41:15.220410   69333 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0927 01:41:15.221508   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetIP
	I0927 01:41:15.224474   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:15.224855   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:15.224884   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:15.225126   69333 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0927 01:41:15.229555   69333 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 01:41:15.244862   69333 kubeadm.go:883] updating cluster {Name:old-k8s-version-612261 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-612261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.129 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 01:41:15.245007   69333 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0927 01:41:15.245070   69333 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 01:41:15.298422   69333 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0927 01:41:15.298501   69333 ssh_runner.go:195] Run: which lz4
	I0927 01:41:15.302771   69333 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0927 01:41:15.307360   69333 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0927 01:41:15.307398   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0927 01:41:17.053272   69333 crio.go:462] duration metric: took 1.750548806s to copy over tarball
	I0927 01:41:17.053354   69333 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0927 01:41:20.066231   69333 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.012846531s)
	I0927 01:41:20.066257   69333 crio.go:469] duration metric: took 3.012954388s to extract the tarball
	I0927 01:41:20.066265   69333 ssh_runner.go:146] rm: /preloaded.tar.lz4
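
The preload step above stats /preloaded.tar.lz4 on the guest, copies the ~473MB tarball over, extracts it under /var with lz4-compressed tar, and removes the archive. A sketch of driving the same extraction locally with os/exec (assumes tar and lz4 are installed and the process may write under /var):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"

	// Same existence check the log performs with `stat` before copying.
	if _, err := os.Stat(tarball); err != nil {
		fmt.Println("preload tarball not present:", err)
		return
	}

	// Equivalent of: tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("extraction failed:", err)
		return
	}
	_ = os.Remove(tarball)
}
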
	I0927 01:41:20.112486   69333 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 01:41:20.152620   69333 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0927 01:41:20.152647   69333 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0927 01:41:20.152723   69333 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:41:20.152754   69333 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0927 01:41:20.152789   69333 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 01:41:20.152813   69333 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0927 01:41:20.152816   69333 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0927 01:41:20.152763   69333 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0927 01:41:20.152938   69333 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0927 01:41:20.152940   69333 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0927 01:41:20.154747   69333 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0927 01:41:20.154752   69333 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 01:41:20.154886   69333 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:41:20.154914   69333 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0927 01:41:20.154914   69333 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0927 01:41:20.154925   69333 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0927 01:41:20.154930   69333 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0927 01:41:20.154934   69333 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0927 01:41:20.316172   69333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0927 01:41:20.316352   69333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0927 01:41:20.319986   69333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0927 01:41:20.331224   69333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 01:41:20.342010   69333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0927 01:41:20.355732   69333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0927 01:41:20.355739   69333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0927 01:41:20.446420   69333 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0927 01:41:20.446477   69333 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0927 01:41:20.446529   69333 ssh_runner.go:195] Run: which crictl
	I0927 01:41:20.469134   69333 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0927 01:41:20.469183   69333 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0927 01:41:20.469231   69333 ssh_runner.go:195] Run: which crictl
	I0927 01:41:20.470229   69333 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0927 01:41:20.470264   69333 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0927 01:41:20.470310   69333 ssh_runner.go:195] Run: which crictl
	I0927 01:41:20.477952   69333 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0927 01:41:20.477991   69333 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 01:41:20.478034   69333 ssh_runner.go:195] Run: which crictl
	I0927 01:41:20.519340   69333 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0927 01:41:20.519391   69333 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0927 01:41:20.519454   69333 ssh_runner.go:195] Run: which crictl
	I0927 01:41:20.538237   69333 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0927 01:41:20.538256   69333 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0927 01:41:20.538293   69333 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0927 01:41:20.538298   69333 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0927 01:41:20.538338   69333 ssh_runner.go:195] Run: which crictl
	I0927 01:41:20.538343   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0927 01:41:20.538338   69333 ssh_runner.go:195] Run: which crictl
	I0927 01:41:20.538343   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0927 01:41:20.538389   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0927 01:41:20.538438   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 01:41:20.538489   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0927 01:41:20.656448   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0927 01:41:20.656508   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0927 01:41:20.656542   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0927 01:41:20.656573   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0927 01:41:20.656635   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0927 01:41:20.656704   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0927 01:41:20.656740   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 01:41:20.818479   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0927 01:41:20.818494   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0927 01:41:20.818581   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 01:41:20.878325   69333 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0927 01:41:20.878480   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0927 01:41:20.878494   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0927 01:41:20.878585   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0927 01:41:20.885061   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0927 01:41:20.885168   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0927 01:41:20.898628   69333 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0927 01:41:20.994147   69333 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0927 01:41:20.994175   69333 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0927 01:41:20.994211   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0927 01:41:21.016210   69333 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0927 01:41:21.016289   69333 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0927 01:41:21.035051   69333 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0927 01:41:21.374949   69333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:41:21.520726   69333 cache_images.go:92] duration metric: took 1.368058485s to LoadCachedImages
	W0927 01:41:21.520817   69333 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0927 01:41:21.520833   69333 kubeadm.go:934] updating node { 192.168.72.129 8443 v1.20.0 crio true true} ...
	I0927 01:41:21.520951   69333 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-612261 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.129
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-612261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
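
The kubelet drop-in printed above is rendered from the node's Kubernetes version, hostname and IP. A hedged text/template sketch that produces a similar ExecStart line; the struct and field names are invented for the example, not minikube's actual types:

package main

import (
	"os"
	"text/template"
)

// nodeParams holds just the values visible in the log; this struct is an
// illustration, not minikube's real configuration type.
type nodeParams struct {
	KubernetesVersion string
	Hostname          string
	NodeIP            string
}

const kubeletDropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletDropIn))
	_ = t.Execute(os.Stdout, nodeParams{
		KubernetesVersion: "v1.20.0",
		Hostname:          "old-k8s-version-612261",
		NodeIP:            "192.168.72.129",
	})
}
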
	I0927 01:41:21.521035   69333 ssh_runner.go:195] Run: crio config
	I0927 01:41:21.571651   69333 cni.go:84] Creating CNI manager for ""
	I0927 01:41:21.571677   69333 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:41:21.571688   69333 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 01:41:21.571712   69333 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.129 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-612261 NodeName:old-k8s-version-612261 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.129"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.129 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0927 01:41:21.571882   69333 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.129
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-612261"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.129
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.129"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
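
The kubeadm config above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small sketch that walks such a stream and reports each document's kind, assuming gopkg.in/yaml.v3 is available; this is a standalone illustration, not part of minikube:

package main

import (
	"fmt"
	"io"
	"strings"

	"gopkg.in/yaml.v3"
)

func main() {
	// Abbreviated stand-in for the kubeadm.yaml stream shown in the log.
	doc := `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.20.0
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
`
	dec := yaml.NewDecoder(strings.NewReader(doc))
	for {
		var m map[string]interface{}
		if err := dec.Decode(&m); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("kind=%v apiVersion=%v\n", m["kind"], m["apiVersion"])
	}
}
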
	
	I0927 01:41:21.571958   69333 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0927 01:41:21.582735   69333 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 01:41:21.582802   69333 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0927 01:41:21.593329   69333 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0927 01:41:21.615040   69333 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 01:41:21.636564   69333 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0927 01:41:21.657275   69333 ssh_runner.go:195] Run: grep 192.168.72.129	control-plane.minikube.internal$ /etc/hosts
	I0927 01:41:21.661675   69333 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.129	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 01:41:21.674587   69333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:41:21.814300   69333 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 01:41:21.834133   69333 certs.go:68] Setting up /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261 for IP: 192.168.72.129
	I0927 01:41:21.834163   69333 certs.go:194] generating shared ca certs ...
	I0927 01:41:21.834182   69333 certs.go:226] acquiring lock for ca certs: {Name:mkdfc5b7e93f77f5ae72cc653545624244421aa5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:41:21.834380   69333 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key
	I0927 01:41:21.834437   69333 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key
	I0927 01:41:21.834450   69333 certs.go:256] generating profile certs ...
	I0927 01:41:21.834558   69333 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/client.key
	I0927 01:41:21.834630   69333 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/apiserver.key.a362196e
	I0927 01:41:21.834676   69333 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/proxy-client.key
	I0927 01:41:21.834819   69333 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem (1338 bytes)
	W0927 01:41:21.834859   69333 certs.go:480] ignoring /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138_empty.pem, impossibly tiny 0 bytes
	I0927 01:41:21.834873   69333 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem (1671 bytes)
	I0927 01:41:21.834904   69333 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem (1078 bytes)
	I0927 01:41:21.834937   69333 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem (1123 bytes)
	I0927 01:41:21.834973   69333 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem (1675 bytes)
	I0927 01:41:21.835023   69333 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem (1708 bytes)
	I0927 01:41:21.835864   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 01:41:21.866955   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0927 01:41:21.902991   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 01:41:21.928957   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0927 01:41:21.957505   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0927 01:41:21.984055   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0927 01:41:22.013191   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 01:41:22.041745   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0927 01:41:22.069680   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 01:41:22.104139   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem --> /usr/share/ca-certificates/22138.pem (1338 bytes)
	I0927 01:41:22.130348   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /usr/share/ca-certificates/221382.pem (1708 bytes)
	I0927 01:41:22.157976   69333 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 01:41:22.177818   69333 ssh_runner.go:195] Run: openssl version
	I0927 01:41:22.184389   69333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 01:41:22.196133   69333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:41:22.201047   69333 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:16 /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:41:22.201120   69333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:41:22.207245   69333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 01:41:22.219033   69333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22138.pem && ln -fs /usr/share/ca-certificates/22138.pem /etc/ssl/certs/22138.pem"
	I0927 01:41:22.230331   69333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22138.pem
	I0927 01:41:22.235000   69333 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 00:32 /usr/share/ca-certificates/22138.pem
	I0927 01:41:22.235054   69333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22138.pem
	I0927 01:41:22.240963   69333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22138.pem /etc/ssl/certs/51391683.0"
	I0927 01:41:22.252022   69333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221382.pem && ln -fs /usr/share/ca-certificates/221382.pem /etc/ssl/certs/221382.pem"
	I0927 01:41:22.263197   69333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221382.pem
	I0927 01:41:22.268023   69333 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 00:32 /usr/share/ca-certificates/221382.pem
	I0927 01:41:22.268100   69333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221382.pem
	I0927 01:41:22.274086   69333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221382.pem /etc/ssl/certs/3ec20f2e.0"
	I0927 01:41:22.285387   69333 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 01:41:22.290487   69333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0927 01:41:22.296953   69333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0927 01:41:22.303095   69333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0927 01:41:22.310001   69333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0927 01:41:22.316346   69333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0927 01:41:22.322559   69333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
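
Each `openssl x509 -noout -in ... -checkend 86400` run above asks whether the given certificate expires within the next 24 hours (86400 seconds). The same check expressed with Go's crypto/x509, as a sketch over a local PEM file:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the moral equivalent of `openssl x509 -checkend` in the log above.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Example path; the log checks several certs under /var/lib/minikube/certs.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
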
	I0927 01:41:22.328931   69333 kubeadm.go:392] StartCluster: {Name:old-k8s-version-612261 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-612261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.129 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 01:41:22.329015   69333 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0927 01:41:22.329081   69333 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 01:41:22.368989   69333 cri.go:89] found id: ""
	I0927 01:41:22.369059   69333 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0927 01:41:22.379818   69333 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0927 01:41:22.379841   69333 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0927 01:41:22.379897   69333 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0927 01:41:22.392278   69333 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0927 01:41:22.393236   69333 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-612261" does not appear in /home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 01:41:22.393856   69333 kubeconfig.go:62] /home/jenkins/minikube-integration/19711-14935/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-612261" cluster setting kubeconfig missing "old-k8s-version-612261" context setting]
	I0927 01:41:22.394733   69333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/kubeconfig: {Name:mke01ed683bdb96463571316956510763878395f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:41:22.404625   69333 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0927 01:41:22.415376   69333 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.129
	I0927 01:41:22.415414   69333 kubeadm.go:1160] stopping kube-system containers ...
	I0927 01:41:22.415427   69333 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0927 01:41:22.415487   69333 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 01:41:22.452749   69333 cri.go:89] found id: ""
	I0927 01:41:22.452829   69333 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0927 01:41:22.469164   69333 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 01:41:22.480018   69333 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 01:41:22.480038   69333 kubeadm.go:157] found existing configuration files:
	
	I0927 01:41:22.480092   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 01:41:22.490501   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 01:41:22.490562   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 01:41:22.500330   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 01:41:22.509612   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 01:41:22.509681   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 01:41:22.520064   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 01:41:22.529864   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 01:41:22.529921   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 01:41:22.540563   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 01:41:22.556739   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 01:41:22.556797   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 01:41:22.572858   69333 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 01:41:22.583366   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:22.709007   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:23.468461   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:23.714890   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:23.865174   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:23.959048   69333 api_server.go:52] waiting for apiserver process to appear ...
	I0927 01:41:23.959140   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:24.460104   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:24.959462   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:25.460143   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:25.959473   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:26.460051   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:26.960121   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:27.459491   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:27.959944   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:28.459636   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:28.959766   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:29.459410   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:29.959439   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:30.460176   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:30.959810   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:31.459492   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:31.959966   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:32.459727   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:32.959527   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:33.459351   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:33.959903   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:34.459444   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:34.959423   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:35.459435   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:35.959447   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:36.460148   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:36.959874   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:37.459766   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:37.959594   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:38.459971   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:38.960093   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:39.459983   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:39.959812   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:40.460220   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:40.959253   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:41.459829   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:41.959864   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:42.459806   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:42.960200   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:43.459511   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:43.959467   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:44.459352   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:44.960147   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:45.459637   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:45.959535   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:46.459585   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:46.959579   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:47.459645   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:47.959756   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:48.460088   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:48.959526   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:49.459321   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:49.960102   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:50.460203   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:50.960225   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:51.460182   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:51.959343   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:52.459589   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:52.960231   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:53.459448   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:53.960120   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:54.460016   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:54.959681   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:55.459321   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:55.959819   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:56.459221   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:56.959296   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:57.459172   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:57.960231   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:58.459323   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:58.960219   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:59.459916   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:59.959858   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:00.460249   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:00.959246   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:01.459839   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:01.959224   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:02.460232   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:02.959635   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:03.459610   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:03.959412   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:04.459857   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:04.959495   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:05.459972   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:05.959931   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:06.459460   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:06.959627   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:07.459395   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:07.959574   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:08.460234   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:08.959281   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:09.459240   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:09.959429   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:10.459865   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:10.959431   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:11.459459   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:11.959447   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:12.459771   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:12.959727   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:13.459428   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:13.959255   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:14.460003   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:14.959853   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:15.460237   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:15.959974   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:16.459420   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:16.959321   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:17.459443   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:17.959426   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:18.460250   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:18.959989   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:19.459981   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:19.959969   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:20.459758   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:20.959440   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:21.460115   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:21.959238   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:22.460161   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:22.959177   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:23.459481   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
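	The repeated pgrep calls above are a fixed-interval wait for a kube-apiserver process to appear, polling roughly every 500ms; after about a minute with no match, the run switches to the container sweep and log collection that follow. A small sketch of such a polling loop, with the interval and timeout chosen purely for illustration:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func waitForAPIServerProcess(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// pgrep exits 0 only when a matching process exists.
			if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("kube-apiserver process never appeared within %s", timeout)
	}

	func main() {
		if err := waitForAPIServerProcess(time.Minute); err != nil {
			fmt.Println(err)
		}
	}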
	I0927 01:42:23.959221   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:23.959322   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:24.004970   69333 cri.go:89] found id: ""
	I0927 01:42:24.004999   69333 logs.go:276] 0 containers: []
	W0927 01:42:24.005010   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:24.005017   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:24.005076   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:24.041880   69333 cri.go:89] found id: ""
	I0927 01:42:24.041908   69333 logs.go:276] 0 containers: []
	W0927 01:42:24.041919   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:24.041926   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:24.041991   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:24.082295   69333 cri.go:89] found id: ""
	I0927 01:42:24.082318   69333 logs.go:276] 0 containers: []
	W0927 01:42:24.082325   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:24.082331   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:24.082385   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:24.119663   69333 cri.go:89] found id: ""
	I0927 01:42:24.119692   69333 logs.go:276] 0 containers: []
	W0927 01:42:24.119707   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:24.119714   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:24.119771   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:24.163893   69333 cri.go:89] found id: ""
	I0927 01:42:24.163920   69333 logs.go:276] 0 containers: []
	W0927 01:42:24.163932   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:24.163940   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:24.163999   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:24.200277   69333 cri.go:89] found id: ""
	I0927 01:42:24.200299   69333 logs.go:276] 0 containers: []
	W0927 01:42:24.200307   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:24.200312   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:24.200365   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:24.235039   69333 cri.go:89] found id: ""
	I0927 01:42:24.235059   69333 logs.go:276] 0 containers: []
	W0927 01:42:24.235066   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:24.235072   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:24.235132   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:24.275160   69333 cri.go:89] found id: ""
	I0927 01:42:24.275181   69333 logs.go:276] 0 containers: []
	W0927 01:42:24.275188   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
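	With no apiserver process found, the run sweeps CRI-O for every expected component using `sudo crictl ps -a --quiet --name=<component>`; empty output is reported as "No container was found matching ...". A sketch of that sweep (local execution assumed):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range components {
			// --quiet prints only container IDs; -a includes exited containers.
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			ids := strings.Fields(string(out))
			if err != nil || len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", name)
				continue
			}
			fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
		}
	}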
	I0927 01:42:24.275196   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:24.275206   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:24.327432   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:24.327465   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:24.341113   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:24.341139   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:24.473741   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:24.473764   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:24.473779   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:24.545888   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:24.545923   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
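	Each diagnostic pass like the one above gathers the kubelet and CRI-O journals, filtered dmesg output, `kubectl describe nodes`, and container status; while the apiserver is down, describe nodes keeps failing with "The connection to the server localhost:8443 was refused". A sketch of that collection step, assuming local bash invocation of the same commands:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		kubectl := "/var/lib/minikube/binaries/v1.20.0/kubectl"
		sources := []struct{ name, cmd string }{
			{"kubelet", "sudo journalctl -u kubelet -n 400"},
			{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
			{"describe nodes", "sudo " + kubectl + " describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
			{"CRI-O", "sudo journalctl -u crio -n 400"},
			{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
		}
		for _, s := range sources {
			fmt.Printf("Gathering logs for %s ...\n", s.name)
			out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
			if err != nil {
				// e.g. describe nodes fails while localhost:8443 refuses connections.
				fmt.Printf("failed %s: %v\n", s.name, err)
			}
			fmt.Print(string(out))
		}
	}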
	I0927 01:42:27.086673   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:27.100552   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:27.100623   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:27.136182   69333 cri.go:89] found id: ""
	I0927 01:42:27.136207   69333 logs.go:276] 0 containers: []
	W0927 01:42:27.136215   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:27.136221   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:27.136289   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:27.173258   69333 cri.go:89] found id: ""
	I0927 01:42:27.173285   69333 logs.go:276] 0 containers: []
	W0927 01:42:27.173296   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:27.173303   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:27.173373   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:27.210481   69333 cri.go:89] found id: ""
	I0927 01:42:27.210514   69333 logs.go:276] 0 containers: []
	W0927 01:42:27.210526   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:27.210533   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:27.210586   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:27.245168   69333 cri.go:89] found id: ""
	I0927 01:42:27.245192   69333 logs.go:276] 0 containers: []
	W0927 01:42:27.245200   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:27.245206   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:27.245252   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:27.280494   69333 cri.go:89] found id: ""
	I0927 01:42:27.280522   69333 logs.go:276] 0 containers: []
	W0927 01:42:27.280531   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:27.280538   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:27.280596   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:27.314281   69333 cri.go:89] found id: ""
	I0927 01:42:27.314307   69333 logs.go:276] 0 containers: []
	W0927 01:42:27.314316   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:27.314322   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:27.314392   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:27.350838   69333 cri.go:89] found id: ""
	I0927 01:42:27.350861   69333 logs.go:276] 0 containers: []
	W0927 01:42:27.350869   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:27.350874   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:27.350921   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:27.390146   69333 cri.go:89] found id: ""
	I0927 01:42:27.390175   69333 logs.go:276] 0 containers: []
	W0927 01:42:27.390186   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:27.390196   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:27.390206   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:27.446727   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:27.446756   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:27.461337   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:27.461365   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:27.533818   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:27.533839   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:27.533874   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:27.614325   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:27.614357   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:30.161303   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:30.179521   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:30.179590   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:30.221738   69333 cri.go:89] found id: ""
	I0927 01:42:30.221764   69333 logs.go:276] 0 containers: []
	W0927 01:42:30.221772   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:30.221778   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:30.221841   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:30.258316   69333 cri.go:89] found id: ""
	I0927 01:42:30.258349   69333 logs.go:276] 0 containers: []
	W0927 01:42:30.258359   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:30.258369   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:30.258427   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:30.297079   69333 cri.go:89] found id: ""
	I0927 01:42:30.297102   69333 logs.go:276] 0 containers: []
	W0927 01:42:30.297109   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:30.297114   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:30.297159   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:30.337969   69333 cri.go:89] found id: ""
	I0927 01:42:30.337995   69333 logs.go:276] 0 containers: []
	W0927 01:42:30.338007   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:30.338014   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:30.338075   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:30.375946   69333 cri.go:89] found id: ""
	I0927 01:42:30.375975   69333 logs.go:276] 0 containers: []
	W0927 01:42:30.375986   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:30.375993   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:30.376054   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:30.411673   69333 cri.go:89] found id: ""
	I0927 01:42:30.411700   69333 logs.go:276] 0 containers: []
	W0927 01:42:30.411710   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:30.411718   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:30.411765   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:30.447784   69333 cri.go:89] found id: ""
	I0927 01:42:30.447812   69333 logs.go:276] 0 containers: []
	W0927 01:42:30.447822   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:30.447830   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:30.447890   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:30.483164   69333 cri.go:89] found id: ""
	I0927 01:42:30.483191   69333 logs.go:276] 0 containers: []
	W0927 01:42:30.483202   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:30.483213   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:30.483229   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:30.533490   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:30.533522   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:30.547688   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:30.547722   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:30.626696   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:30.626720   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:30.626733   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:30.708767   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:30.708809   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:33.250034   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:33.263733   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:33.263805   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:33.298038   69333 cri.go:89] found id: ""
	I0927 01:42:33.298063   69333 logs.go:276] 0 containers: []
	W0927 01:42:33.298071   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:33.298077   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:33.298139   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:33.338027   69333 cri.go:89] found id: ""
	I0927 01:42:33.338050   69333 logs.go:276] 0 containers: []
	W0927 01:42:33.338058   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:33.338064   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:33.338118   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:33.376470   69333 cri.go:89] found id: ""
	I0927 01:42:33.376496   69333 logs.go:276] 0 containers: []
	W0927 01:42:33.376504   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:33.376509   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:33.376568   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:33.419831   69333 cri.go:89] found id: ""
	I0927 01:42:33.419859   69333 logs.go:276] 0 containers: []
	W0927 01:42:33.419868   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:33.419874   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:33.419929   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:33.461029   69333 cri.go:89] found id: ""
	I0927 01:42:33.461057   69333 logs.go:276] 0 containers: []
	W0927 01:42:33.461076   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:33.461085   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:33.461158   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:33.499968   69333 cri.go:89] found id: ""
	I0927 01:42:33.499996   69333 logs.go:276] 0 containers: []
	W0927 01:42:33.500007   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:33.500015   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:33.500073   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:33.552601   69333 cri.go:89] found id: ""
	I0927 01:42:33.552625   69333 logs.go:276] 0 containers: []
	W0927 01:42:33.552633   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:33.552640   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:33.552702   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:33.589491   69333 cri.go:89] found id: ""
	I0927 01:42:33.589520   69333 logs.go:276] 0 containers: []
	W0927 01:42:33.589529   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:33.589540   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:33.589554   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:33.643437   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:33.643470   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:33.657819   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:33.657846   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:33.728369   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:33.728393   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:33.728407   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:33.803661   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:33.803691   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:36.343598   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:36.357879   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:36.357937   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:36.398936   69333 cri.go:89] found id: ""
	I0927 01:42:36.398958   69333 logs.go:276] 0 containers: []
	W0927 01:42:36.398966   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:36.398971   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:36.399016   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:36.438897   69333 cri.go:89] found id: ""
	I0927 01:42:36.438921   69333 logs.go:276] 0 containers: []
	W0927 01:42:36.438928   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:36.438935   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:36.438979   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:36.476779   69333 cri.go:89] found id: ""
	I0927 01:42:36.476807   69333 logs.go:276] 0 containers: []
	W0927 01:42:36.476817   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:36.476824   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:36.476882   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:36.514216   69333 cri.go:89] found id: ""
	I0927 01:42:36.514238   69333 logs.go:276] 0 containers: []
	W0927 01:42:36.514245   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:36.514251   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:36.514306   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:36.551800   69333 cri.go:89] found id: ""
	I0927 01:42:36.551827   69333 logs.go:276] 0 containers: []
	W0927 01:42:36.551835   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:36.551841   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:36.551900   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:36.592060   69333 cri.go:89] found id: ""
	I0927 01:42:36.592086   69333 logs.go:276] 0 containers: []
	W0927 01:42:36.592096   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:36.592101   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:36.592172   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:36.633485   69333 cri.go:89] found id: ""
	I0927 01:42:36.633507   69333 logs.go:276] 0 containers: []
	W0927 01:42:36.633514   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:36.633519   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:36.633571   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:36.667288   69333 cri.go:89] found id: ""
	I0927 01:42:36.667355   69333 logs.go:276] 0 containers: []
	W0927 01:42:36.667366   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:36.667377   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:36.667391   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:36.722230   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:36.722263   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:36.735927   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:36.735952   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:36.808852   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:36.808872   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:36.808887   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:36.889259   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:36.889299   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:39.438818   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:39.459082   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:39.459150   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:39.499966   69333 cri.go:89] found id: ""
	I0927 01:42:39.499991   69333 logs.go:276] 0 containers: []
	W0927 01:42:39.499999   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:39.500004   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:39.500050   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:39.540828   69333 cri.go:89] found id: ""
	I0927 01:42:39.540850   69333 logs.go:276] 0 containers: []
	W0927 01:42:39.540857   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:39.540864   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:39.540972   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:39.575841   69333 cri.go:89] found id: ""
	I0927 01:42:39.575868   69333 logs.go:276] 0 containers: []
	W0927 01:42:39.575879   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:39.575886   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:39.575958   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:39.611105   69333 cri.go:89] found id: ""
	I0927 01:42:39.611184   69333 logs.go:276] 0 containers: []
	W0927 01:42:39.611202   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:39.611212   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:39.611268   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:39.644772   69333 cri.go:89] found id: ""
	I0927 01:42:39.644800   69333 logs.go:276] 0 containers: []
	W0927 01:42:39.644808   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:39.644813   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:39.644868   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:39.679875   69333 cri.go:89] found id: ""
	I0927 01:42:39.679901   69333 logs.go:276] 0 containers: []
	W0927 01:42:39.679912   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:39.679919   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:39.679987   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:39.716410   69333 cri.go:89] found id: ""
	I0927 01:42:39.716440   69333 logs.go:276] 0 containers: []
	W0927 01:42:39.716450   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:39.716457   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:39.716525   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:39.750391   69333 cri.go:89] found id: ""
	I0927 01:42:39.750418   69333 logs.go:276] 0 containers: []
	W0927 01:42:39.750428   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:39.750439   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:39.750455   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:39.822365   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:39.822401   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:39.822416   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:39.905982   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:39.906017   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:39.952310   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:39.952339   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:40.000523   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:40.000554   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:42.514379   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:42.528312   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:42.528377   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:42.562427   69333 cri.go:89] found id: ""
	I0927 01:42:42.562455   69333 logs.go:276] 0 containers: []
	W0927 01:42:42.562463   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:42.562469   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:42.562526   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:42.599969   69333 cri.go:89] found id: ""
	I0927 01:42:42.599993   69333 logs.go:276] 0 containers: []
	W0927 01:42:42.600002   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:42.600007   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:42.600053   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:42.636338   69333 cri.go:89] found id: ""
	I0927 01:42:42.636364   69333 logs.go:276] 0 containers: []
	W0927 01:42:42.636371   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:42.636376   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:42.636431   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:42.670781   69333 cri.go:89] found id: ""
	I0927 01:42:42.670809   69333 logs.go:276] 0 containers: []
	W0927 01:42:42.670818   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:42.670823   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:42.670880   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:42.707334   69333 cri.go:89] found id: ""
	I0927 01:42:42.707364   69333 logs.go:276] 0 containers: []
	W0927 01:42:42.707375   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:42.707431   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:42.707503   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:42.743063   69333 cri.go:89] found id: ""
	I0927 01:42:42.743092   69333 logs.go:276] 0 containers: []
	W0927 01:42:42.743103   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:42.743139   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:42.743192   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:42.778593   69333 cri.go:89] found id: ""
	I0927 01:42:42.778617   69333 logs.go:276] 0 containers: []
	W0927 01:42:42.778628   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:42.778634   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:42.778700   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:42.814261   69333 cri.go:89] found id: ""
	I0927 01:42:42.814286   69333 logs.go:276] 0 containers: []
	W0927 01:42:42.814293   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:42.814300   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:42.814310   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:42.863982   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:42.864011   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:42.877151   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:42.877175   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:42.959233   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:42.959251   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:42.959262   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:43.038773   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:43.038805   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:45.581272   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:45.596103   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:45.596167   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:45.639507   69333 cri.go:89] found id: ""
	I0927 01:42:45.639531   69333 logs.go:276] 0 containers: []
	W0927 01:42:45.639539   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:45.639545   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:45.639611   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:45.678455   69333 cri.go:89] found id: ""
	I0927 01:42:45.678482   69333 logs.go:276] 0 containers: []
	W0927 01:42:45.678489   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:45.678495   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:45.678539   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:45.722094   69333 cri.go:89] found id: ""
	I0927 01:42:45.722123   69333 logs.go:276] 0 containers: []
	W0927 01:42:45.722135   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:45.722142   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:45.722211   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:45.758091   69333 cri.go:89] found id: ""
	I0927 01:42:45.758118   69333 logs.go:276] 0 containers: []
	W0927 01:42:45.758127   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:45.758133   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:45.758183   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:45.792976   69333 cri.go:89] found id: ""
	I0927 01:42:45.793010   69333 logs.go:276] 0 containers: []
	W0927 01:42:45.793021   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:45.793028   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:45.793089   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:45.830235   69333 cri.go:89] found id: ""
	I0927 01:42:45.830262   69333 logs.go:276] 0 containers: []
	W0927 01:42:45.830273   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:45.830280   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:45.830324   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:45.865896   69333 cri.go:89] found id: ""
	I0927 01:42:45.865928   69333 logs.go:276] 0 containers: []
	W0927 01:42:45.865938   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:45.865946   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:45.866000   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:45.900058   69333 cri.go:89] found id: ""
	I0927 01:42:45.900088   69333 logs.go:276] 0 containers: []
	W0927 01:42:45.900099   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:45.900108   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:45.900119   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:45.972986   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:45.973015   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:45.973030   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:46.048703   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:46.048732   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:46.087483   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:46.087515   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:46.136833   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:46.136866   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:48.650738   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:48.665847   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:48.665930   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:48.704304   69333 cri.go:89] found id: ""
	I0927 01:42:48.704328   69333 logs.go:276] 0 containers: []
	W0927 01:42:48.704337   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:48.704342   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:48.704402   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:48.742469   69333 cri.go:89] found id: ""
	I0927 01:42:48.742499   69333 logs.go:276] 0 containers: []
	W0927 01:42:48.742510   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:48.742517   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:48.742579   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:48.782154   69333 cri.go:89] found id: ""
	I0927 01:42:48.782183   69333 logs.go:276] 0 containers: []
	W0927 01:42:48.782194   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:48.782201   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:48.782261   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:48.821686   69333 cri.go:89] found id: ""
	I0927 01:42:48.821709   69333 logs.go:276] 0 containers: []
	W0927 01:42:48.821717   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:48.821723   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:48.821781   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:48.867072   69333 cri.go:89] found id: ""
	I0927 01:42:48.867099   69333 logs.go:276] 0 containers: []
	W0927 01:42:48.867109   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:48.867123   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:48.867191   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:48.908215   69333 cri.go:89] found id: ""
	I0927 01:42:48.908241   69333 logs.go:276] 0 containers: []
	W0927 01:42:48.908249   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:48.908255   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:48.908312   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:48.945260   69333 cri.go:89] found id: ""
	I0927 01:42:48.945291   69333 logs.go:276] 0 containers: []
	W0927 01:42:48.945303   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:48.945310   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:48.945375   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:48.983285   69333 cri.go:89] found id: ""
	I0927 01:42:48.983325   69333 logs.go:276] 0 containers: []
	W0927 01:42:48.983333   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:48.983343   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:48.983354   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:49.039437   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:49.039472   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:49.053546   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:49.053571   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:49.129264   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:49.129286   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:49.129299   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:49.216967   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:49.216999   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:51.758143   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:51.771417   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:51.771485   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:51.806120   69333 cri.go:89] found id: ""
	I0927 01:42:51.806144   69333 logs.go:276] 0 containers: []
	W0927 01:42:51.806154   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:51.806161   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:51.806219   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:51.840301   69333 cri.go:89] found id: ""
	I0927 01:42:51.840330   69333 logs.go:276] 0 containers: []
	W0927 01:42:51.840340   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:51.840348   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:51.840410   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:51.874908   69333 cri.go:89] found id: ""
	I0927 01:42:51.874934   69333 logs.go:276] 0 containers: []
	W0927 01:42:51.874944   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:51.874952   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:51.875018   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:51.910960   69333 cri.go:89] found id: ""
	I0927 01:42:51.910988   69333 logs.go:276] 0 containers: []
	W0927 01:42:51.910999   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:51.911006   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:51.911064   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:51.945206   69333 cri.go:89] found id: ""
	I0927 01:42:51.945228   69333 logs.go:276] 0 containers: []
	W0927 01:42:51.945236   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:51.945241   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:51.945289   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:51.979262   69333 cri.go:89] found id: ""
	I0927 01:42:51.979296   69333 logs.go:276] 0 containers: []
	W0927 01:42:51.979322   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:51.979328   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:51.979384   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:52.013407   69333 cri.go:89] found id: ""
	I0927 01:42:52.013438   69333 logs.go:276] 0 containers: []
	W0927 01:42:52.013449   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:52.013456   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:52.013510   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:52.048928   69333 cri.go:89] found id: ""
	I0927 01:42:52.048951   69333 logs.go:276] 0 containers: []
	W0927 01:42:52.048961   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:52.048970   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:52.048984   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:52.101043   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:52.101083   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:52.115903   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:52.115938   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:52.197147   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:52.197168   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:52.197184   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:52.276352   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:52.276393   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:54.819649   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:54.832262   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:54.832344   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:54.867495   69333 cri.go:89] found id: ""
	I0927 01:42:54.867523   69333 logs.go:276] 0 containers: []
	W0927 01:42:54.867533   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:54.867539   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:54.867585   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:54.899705   69333 cri.go:89] found id: ""
	I0927 01:42:54.899732   69333 logs.go:276] 0 containers: []
	W0927 01:42:54.899742   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:54.899749   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:54.899817   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:54.939216   69333 cri.go:89] found id: ""
	I0927 01:42:54.939235   69333 logs.go:276] 0 containers: []
	W0927 01:42:54.939244   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:54.939249   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:54.939293   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:54.976603   69333 cri.go:89] found id: ""
	I0927 01:42:54.976632   69333 logs.go:276] 0 containers: []
	W0927 01:42:54.976643   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:54.976651   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:54.976718   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:55.011617   69333 cri.go:89] found id: ""
	I0927 01:42:55.011649   69333 logs.go:276] 0 containers: []
	W0927 01:42:55.011660   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:55.011667   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:55.011729   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:55.048836   69333 cri.go:89] found id: ""
	I0927 01:42:55.048861   69333 logs.go:276] 0 containers: []
	W0927 01:42:55.048869   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:55.048885   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:55.048955   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:55.085105   69333 cri.go:89] found id: ""
	I0927 01:42:55.085133   69333 logs.go:276] 0 containers: []
	W0927 01:42:55.085144   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:55.085151   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:55.085205   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:55.122536   69333 cri.go:89] found id: ""
	I0927 01:42:55.122564   69333 logs.go:276] 0 containers: []
	W0927 01:42:55.122575   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:55.122585   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:55.122600   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:55.197191   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:55.197216   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:55.197230   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:55.275914   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:55.275950   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:55.315043   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:55.315071   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:55.365808   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:55.365846   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:57.880934   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:57.894276   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:57.894337   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:57.933299   69333 cri.go:89] found id: ""
	I0927 01:42:57.933326   69333 logs.go:276] 0 containers: []
	W0927 01:42:57.933336   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:57.933343   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:57.933403   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:57.969070   69333 cri.go:89] found id: ""
	I0927 01:42:57.969094   69333 logs.go:276] 0 containers: []
	W0927 01:42:57.969102   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:57.969107   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:57.969151   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:58.009432   69333 cri.go:89] found id: ""
	I0927 01:42:58.009453   69333 logs.go:276] 0 containers: []
	W0927 01:42:58.009462   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:58.009468   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:58.009524   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:58.046507   69333 cri.go:89] found id: ""
	I0927 01:42:58.046526   69333 logs.go:276] 0 containers: []
	W0927 01:42:58.046533   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:58.046539   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:58.046603   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:58.079910   69333 cri.go:89] found id: ""
	I0927 01:42:58.079936   69333 logs.go:276] 0 containers: []
	W0927 01:42:58.079947   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:58.079954   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:58.080015   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:58.115971   69333 cri.go:89] found id: ""
	I0927 01:42:58.115994   69333 logs.go:276] 0 containers: []
	W0927 01:42:58.116001   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:58.116007   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:58.116065   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:58.150512   69333 cri.go:89] found id: ""
	I0927 01:42:58.150536   69333 logs.go:276] 0 containers: []
	W0927 01:42:58.150544   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:58.150549   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:58.150608   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:58.183458   69333 cri.go:89] found id: ""
	I0927 01:42:58.183487   69333 logs.go:276] 0 containers: []
	W0927 01:42:58.183498   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:58.183506   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:58.183520   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:58.234404   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:58.234434   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:58.248387   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:58.248411   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:58.320751   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:58.320772   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:58.320783   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:58.401163   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:58.401212   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:00.943677   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:00.956739   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:00.956815   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:00.991020   69333 cri.go:89] found id: ""
	I0927 01:43:00.991042   69333 logs.go:276] 0 containers: []
	W0927 01:43:00.991051   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:00.991056   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:00.991113   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:01.031686   69333 cri.go:89] found id: ""
	I0927 01:43:01.031711   69333 logs.go:276] 0 containers: []
	W0927 01:43:01.031720   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:01.031726   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:01.031786   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:01.068783   69333 cri.go:89] found id: ""
	I0927 01:43:01.068813   69333 logs.go:276] 0 containers: []
	W0927 01:43:01.068824   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:01.068831   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:01.068890   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:01.108434   69333 cri.go:89] found id: ""
	I0927 01:43:01.108456   69333 logs.go:276] 0 containers: []
	W0927 01:43:01.108464   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:01.108469   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:01.108513   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:01.147574   69333 cri.go:89] found id: ""
	I0927 01:43:01.147596   69333 logs.go:276] 0 containers: []
	W0927 01:43:01.147604   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:01.147610   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:01.147660   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:01.188251   69333 cri.go:89] found id: ""
	I0927 01:43:01.188279   69333 logs.go:276] 0 containers: []
	W0927 01:43:01.188290   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:01.188297   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:01.188359   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:01.224901   69333 cri.go:89] found id: ""
	I0927 01:43:01.224944   69333 logs.go:276] 0 containers: []
	W0927 01:43:01.224964   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:01.224974   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:01.225052   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:01.262701   69333 cri.go:89] found id: ""
	I0927 01:43:01.262728   69333 logs.go:276] 0 containers: []
	W0927 01:43:01.262738   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:01.262749   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:01.262762   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:01.313872   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:01.313900   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:01.327809   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:01.327835   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:01.400864   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:01.400895   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:01.400909   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:01.478012   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:01.478045   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:04.018634   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:04.032732   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:04.032803   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:04.075258   69333 cri.go:89] found id: ""
	I0927 01:43:04.075285   69333 logs.go:276] 0 containers: []
	W0927 01:43:04.075293   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:04.075299   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:04.075381   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:04.108738   69333 cri.go:89] found id: ""
	I0927 01:43:04.108764   69333 logs.go:276] 0 containers: []
	W0927 01:43:04.108773   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:04.108779   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:04.108835   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:04.142115   69333 cri.go:89] found id: ""
	I0927 01:43:04.142145   69333 logs.go:276] 0 containers: []
	W0927 01:43:04.142155   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:04.142174   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:04.142249   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:04.184606   69333 cri.go:89] found id: ""
	I0927 01:43:04.184626   69333 logs.go:276] 0 containers: []
	W0927 01:43:04.184634   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:04.184639   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:04.184684   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:04.218391   69333 cri.go:89] found id: ""
	I0927 01:43:04.218420   69333 logs.go:276] 0 containers: []
	W0927 01:43:04.218428   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:04.218434   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:04.218482   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:04.253796   69333 cri.go:89] found id: ""
	I0927 01:43:04.253816   69333 logs.go:276] 0 containers: []
	W0927 01:43:04.253824   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:04.253829   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:04.253884   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:04.289147   69333 cri.go:89] found id: ""
	I0927 01:43:04.289170   69333 logs.go:276] 0 containers: []
	W0927 01:43:04.289179   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:04.289184   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:04.289245   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:04.329000   69333 cri.go:89] found id: ""
	I0927 01:43:04.329026   69333 logs.go:276] 0 containers: []
	W0927 01:43:04.329034   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:04.329042   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:04.329053   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:04.424255   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:04.424290   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:04.470746   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:04.470775   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:04.524208   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:04.524237   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:04.538338   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:04.538365   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:04.608713   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:07.109492   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:07.124253   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:07.124332   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:07.160443   69333 cri.go:89] found id: ""
	I0927 01:43:07.160470   69333 logs.go:276] 0 containers: []
	W0927 01:43:07.160481   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:07.160488   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:07.160554   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:07.195492   69333 cri.go:89] found id: ""
	I0927 01:43:07.195515   69333 logs.go:276] 0 containers: []
	W0927 01:43:07.195522   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:07.195527   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:07.195572   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:07.237678   69333 cri.go:89] found id: ""
	I0927 01:43:07.237708   69333 logs.go:276] 0 containers: []
	W0927 01:43:07.237718   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:07.237725   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:07.237792   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:07.274239   69333 cri.go:89] found id: ""
	I0927 01:43:07.274268   69333 logs.go:276] 0 containers: []
	W0927 01:43:07.274279   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:07.274286   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:07.274352   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:07.315099   69333 cri.go:89] found id: ""
	I0927 01:43:07.315124   69333 logs.go:276] 0 containers: []
	W0927 01:43:07.315131   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:07.315137   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:07.315190   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:07.356301   69333 cri.go:89] found id: ""
	I0927 01:43:07.356328   69333 logs.go:276] 0 containers: []
	W0927 01:43:07.356339   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:07.356347   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:07.356416   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:07.392204   69333 cri.go:89] found id: ""
	I0927 01:43:07.392232   69333 logs.go:276] 0 containers: []
	W0927 01:43:07.392242   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:07.392255   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:07.392312   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:07.428924   69333 cri.go:89] found id: ""
	I0927 01:43:07.428952   69333 logs.go:276] 0 containers: []
	W0927 01:43:07.428961   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:07.428969   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:07.428981   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:07.502507   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:07.502531   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:07.502545   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:07.584169   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:07.584201   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:07.623413   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:07.623446   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:07.675444   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:07.675480   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:10.190164   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:10.205315   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:10.205395   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:10.244030   69333 cri.go:89] found id: ""
	I0927 01:43:10.244053   69333 logs.go:276] 0 containers: []
	W0927 01:43:10.244063   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:10.244071   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:10.244134   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:10.280081   69333 cri.go:89] found id: ""
	I0927 01:43:10.280108   69333 logs.go:276] 0 containers: []
	W0927 01:43:10.280118   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:10.280125   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:10.280184   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:10.315428   69333 cri.go:89] found id: ""
	I0927 01:43:10.315454   69333 logs.go:276] 0 containers: []
	W0927 01:43:10.315464   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:10.315471   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:10.315531   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:10.352536   69333 cri.go:89] found id: ""
	I0927 01:43:10.352560   69333 logs.go:276] 0 containers: []
	W0927 01:43:10.352567   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:10.352574   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:10.352634   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:10.388846   69333 cri.go:89] found id: ""
	I0927 01:43:10.388870   69333 logs.go:276] 0 containers: []
	W0927 01:43:10.388880   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:10.388887   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:10.388951   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:10.427746   69333 cri.go:89] found id: ""
	I0927 01:43:10.427771   69333 logs.go:276] 0 containers: []
	W0927 01:43:10.427779   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:10.427784   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:10.427839   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:10.473126   69333 cri.go:89] found id: ""
	I0927 01:43:10.473155   69333 logs.go:276] 0 containers: []
	W0927 01:43:10.473166   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:10.473172   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:10.473234   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:10.511925   69333 cri.go:89] found id: ""
	I0927 01:43:10.511954   69333 logs.go:276] 0 containers: []
	W0927 01:43:10.511962   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:10.511971   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:10.511984   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:10.551428   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:10.551459   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:10.603655   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:10.603691   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:10.617232   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:10.617266   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:10.696559   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:10.696585   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:10.696599   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:13.273888   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:13.288271   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:13.288349   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:13.325796   69333 cri.go:89] found id: ""
	I0927 01:43:13.325823   69333 logs.go:276] 0 containers: []
	W0927 01:43:13.325831   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:13.325837   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:13.325893   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:13.360721   69333 cri.go:89] found id: ""
	I0927 01:43:13.360748   69333 logs.go:276] 0 containers: []
	W0927 01:43:13.360756   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:13.360762   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:13.360821   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:13.399722   69333 cri.go:89] found id: ""
	I0927 01:43:13.399749   69333 logs.go:276] 0 containers: []
	W0927 01:43:13.399756   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:13.399762   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:13.399826   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:13.437161   69333 cri.go:89] found id: ""
	I0927 01:43:13.437187   69333 logs.go:276] 0 containers: []
	W0927 01:43:13.437194   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:13.437200   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:13.437260   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:13.474735   69333 cri.go:89] found id: ""
	I0927 01:43:13.474758   69333 logs.go:276] 0 containers: []
	W0927 01:43:13.474766   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:13.474771   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:13.474822   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:13.528726   69333 cri.go:89] found id: ""
	I0927 01:43:13.528754   69333 logs.go:276] 0 containers: []
	W0927 01:43:13.528764   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:13.528771   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:13.528837   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:13.568617   69333 cri.go:89] found id: ""
	I0927 01:43:13.568642   69333 logs.go:276] 0 containers: []
	W0927 01:43:13.568651   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:13.568658   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:13.568726   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:13.605820   69333 cri.go:89] found id: ""
	I0927 01:43:13.605846   69333 logs.go:276] 0 containers: []
	W0927 01:43:13.605857   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:13.605868   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:13.605883   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:13.682586   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:13.682609   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:13.682624   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:13.764487   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:13.764522   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:13.809248   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:13.809280   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:13.861331   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:13.861371   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:16.376981   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:16.391787   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:16.391842   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:16.432731   69333 cri.go:89] found id: ""
	I0927 01:43:16.432758   69333 logs.go:276] 0 containers: []
	W0927 01:43:16.432767   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:16.432775   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:16.432836   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:16.466769   69333 cri.go:89] found id: ""
	I0927 01:43:16.466798   69333 logs.go:276] 0 containers: []
	W0927 01:43:16.466806   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:16.466812   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:16.466860   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:16.501899   69333 cri.go:89] found id: ""
	I0927 01:43:16.501927   69333 logs.go:276] 0 containers: []
	W0927 01:43:16.501940   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:16.501947   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:16.502000   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:16.537356   69333 cri.go:89] found id: ""
	I0927 01:43:16.537383   69333 logs.go:276] 0 containers: []
	W0927 01:43:16.537393   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:16.537401   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:16.537460   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:16.573910   69333 cri.go:89] found id: ""
	I0927 01:43:16.573937   69333 logs.go:276] 0 containers: []
	W0927 01:43:16.573946   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:16.573951   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:16.574003   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:16.617780   69333 cri.go:89] found id: ""
	I0927 01:43:16.617808   69333 logs.go:276] 0 containers: []
	W0927 01:43:16.617818   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:16.617826   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:16.617884   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:16.653262   69333 cri.go:89] found id: ""
	I0927 01:43:16.653311   69333 logs.go:276] 0 containers: []
	W0927 01:43:16.653323   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:16.653331   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:16.653394   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:16.689861   69333 cri.go:89] found id: ""
	I0927 01:43:16.689889   69333 logs.go:276] 0 containers: []
	W0927 01:43:16.689898   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:16.689909   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:16.689922   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:16.765961   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:16.765986   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:16.766001   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:16.845195   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:16.845227   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:16.889159   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:16.889188   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:16.945523   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:16.945558   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:19.461132   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:19.475148   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:19.475234   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:19.511487   69333 cri.go:89] found id: ""
	I0927 01:43:19.511509   69333 logs.go:276] 0 containers: []
	W0927 01:43:19.511517   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:19.511522   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:19.511580   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:19.545726   69333 cri.go:89] found id: ""
	I0927 01:43:19.545750   69333 logs.go:276] 0 containers: []
	W0927 01:43:19.545756   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:19.545763   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:19.545830   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:19.581287   69333 cri.go:89] found id: ""
	I0927 01:43:19.581310   69333 logs.go:276] 0 containers: []
	W0927 01:43:19.581318   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:19.581323   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:19.581376   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:19.614179   69333 cri.go:89] found id: ""
	I0927 01:43:19.614205   69333 logs.go:276] 0 containers: []
	W0927 01:43:19.614215   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:19.614223   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:19.614286   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:19.648276   69333 cri.go:89] found id: ""
	I0927 01:43:19.648307   69333 logs.go:276] 0 containers: []
	W0927 01:43:19.648318   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:19.648330   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:19.648390   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:19.683051   69333 cri.go:89] found id: ""
	I0927 01:43:19.683083   69333 logs.go:276] 0 containers: []
	W0927 01:43:19.683094   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:19.683114   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:19.683166   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:19.716664   69333 cri.go:89] found id: ""
	I0927 01:43:19.716686   69333 logs.go:276] 0 containers: []
	W0927 01:43:19.716694   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:19.716700   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:19.716745   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:19.758948   69333 cri.go:89] found id: ""
	I0927 01:43:19.758969   69333 logs.go:276] 0 containers: []
	W0927 01:43:19.758976   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:19.758984   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:19.758996   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:19.797751   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:19.797777   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:19.853605   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:19.853635   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:19.867785   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:19.867815   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:19.950323   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:19.950350   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:19.950363   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:22.555421   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:22.570013   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:22.570077   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:22.605007   69333 cri.go:89] found id: ""
	I0927 01:43:22.605034   69333 logs.go:276] 0 containers: []
	W0927 01:43:22.605055   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:22.605062   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:22.605122   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:22.640350   69333 cri.go:89] found id: ""
	I0927 01:43:22.640381   69333 logs.go:276] 0 containers: []
	W0927 01:43:22.640391   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:22.640406   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:22.640482   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:22.677464   69333 cri.go:89] found id: ""
	I0927 01:43:22.677489   69333 logs.go:276] 0 containers: []
	W0927 01:43:22.677499   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:22.677506   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:22.677567   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:22.721978   69333 cri.go:89] found id: ""
	I0927 01:43:22.722017   69333 logs.go:276] 0 containers: []
	W0927 01:43:22.722025   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:22.722032   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:22.722093   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:22.757694   69333 cri.go:89] found id: ""
	I0927 01:43:22.757720   69333 logs.go:276] 0 containers: []
	W0927 01:43:22.757729   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:22.757733   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:22.757781   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:22.793872   69333 cri.go:89] found id: ""
	I0927 01:43:22.793903   69333 logs.go:276] 0 containers: []
	W0927 01:43:22.793912   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:22.793920   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:22.793971   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:22.830620   69333 cri.go:89] found id: ""
	I0927 01:43:22.830652   69333 logs.go:276] 0 containers: []
	W0927 01:43:22.830662   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:22.830669   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:22.830732   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:22.867341   69333 cri.go:89] found id: ""
	I0927 01:43:22.867370   69333 logs.go:276] 0 containers: []
	W0927 01:43:22.867381   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:22.867392   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:22.867405   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:22.939592   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:22.939630   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:22.939654   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:23.016407   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:23.016447   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:23.058490   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:23.058522   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:23.109527   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:23.109560   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:25.626109   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:25.645254   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:25.645343   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:25.707951   69333 cri.go:89] found id: ""
	I0927 01:43:25.707979   69333 logs.go:276] 0 containers: []
	W0927 01:43:25.707989   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:25.707997   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:25.708057   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:25.771210   69333 cri.go:89] found id: ""
	I0927 01:43:25.771234   69333 logs.go:276] 0 containers: []
	W0927 01:43:25.771242   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:25.771248   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:25.771295   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:25.808206   69333 cri.go:89] found id: ""
	I0927 01:43:25.808235   69333 logs.go:276] 0 containers: []
	W0927 01:43:25.808245   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:25.808252   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:25.808311   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:25.842236   69333 cri.go:89] found id: ""
	I0927 01:43:25.842265   69333 logs.go:276] 0 containers: []
	W0927 01:43:25.842275   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:25.842283   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:25.842328   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:25.879220   69333 cri.go:89] found id: ""
	I0927 01:43:25.879248   69333 logs.go:276] 0 containers: []
	W0927 01:43:25.879256   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:25.879262   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:25.879333   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:25.913491   69333 cri.go:89] found id: ""
	I0927 01:43:25.913522   69333 logs.go:276] 0 containers: []
	W0927 01:43:25.913532   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:25.913537   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:25.913595   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:25.946867   69333 cri.go:89] found id: ""
	I0927 01:43:25.946887   69333 logs.go:276] 0 containers: []
	W0927 01:43:25.946894   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:25.946899   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:25.946943   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:25.983792   69333 cri.go:89] found id: ""
	I0927 01:43:25.983813   69333 logs.go:276] 0 containers: []
	W0927 01:43:25.983820   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:25.983828   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:25.983838   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:26.030169   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:26.030195   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:26.083242   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:26.083276   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:26.097109   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:26.097136   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:26.168675   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:26.168703   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:26.168715   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:28.750349   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:28.765211   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:28.765269   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:28.804760   69333 cri.go:89] found id: ""
	I0927 01:43:28.804784   69333 logs.go:276] 0 containers: []
	W0927 01:43:28.804792   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:28.804798   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:28.804865   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:28.842576   69333 cri.go:89] found id: ""
	I0927 01:43:28.842597   69333 logs.go:276] 0 containers: []
	W0927 01:43:28.842604   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:28.842612   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:28.842674   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:28.877498   69333 cri.go:89] found id: ""
	I0927 01:43:28.877529   69333 logs.go:276] 0 containers: []
	W0927 01:43:28.877541   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:28.877553   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:28.877615   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:28.912583   69333 cri.go:89] found id: ""
	I0927 01:43:28.912609   69333 logs.go:276] 0 containers: []
	W0927 01:43:28.912620   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:28.912627   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:28.912689   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:28.947995   69333 cri.go:89] found id: ""
	I0927 01:43:28.948019   69333 logs.go:276] 0 containers: []
	W0927 01:43:28.948030   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:28.948037   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:28.948135   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:28.984445   69333 cri.go:89] found id: ""
	I0927 01:43:28.984470   69333 logs.go:276] 0 containers: []
	W0927 01:43:28.984480   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:28.984488   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:28.984551   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:29.020345   69333 cri.go:89] found id: ""
	I0927 01:43:29.020374   69333 logs.go:276] 0 containers: []
	W0927 01:43:29.020385   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:29.020392   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:29.020451   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:29.056204   69333 cri.go:89] found id: ""
	I0927 01:43:29.056234   69333 logs.go:276] 0 containers: []
	W0927 01:43:29.056245   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:29.056257   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:29.056270   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:29.127936   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:29.127963   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:29.127980   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:29.205933   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:29.205981   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:29.248745   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:29.248777   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:29.302316   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:29.302348   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:31.817566   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:31.831179   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:31.831253   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:31.868480   69333 cri.go:89] found id: ""
	I0927 01:43:31.868507   69333 logs.go:276] 0 containers: []
	W0927 01:43:31.868517   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:31.868528   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:31.868588   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:31.901656   69333 cri.go:89] found id: ""
	I0927 01:43:31.901684   69333 logs.go:276] 0 containers: []
	W0927 01:43:31.901694   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:31.901701   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:31.901761   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:31.937101   69333 cri.go:89] found id: ""
	I0927 01:43:31.937133   69333 logs.go:276] 0 containers: []
	W0927 01:43:31.937145   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:31.937153   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:31.937210   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:31.970724   69333 cri.go:89] found id: ""
	I0927 01:43:31.970750   69333 logs.go:276] 0 containers: []
	W0927 01:43:31.970761   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:31.970768   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:31.970835   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:32.003704   69333 cri.go:89] found id: ""
	I0927 01:43:32.003736   69333 logs.go:276] 0 containers: []
	W0927 01:43:32.003747   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:32.003754   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:32.003813   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:32.038840   69333 cri.go:89] found id: ""
	I0927 01:43:32.038869   69333 logs.go:276] 0 containers: []
	W0927 01:43:32.038879   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:32.038886   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:32.038946   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:32.075506   69333 cri.go:89] found id: ""
	I0927 01:43:32.075534   69333 logs.go:276] 0 containers: []
	W0927 01:43:32.075545   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:32.075552   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:32.075603   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:32.112983   69333 cri.go:89] found id: ""
	I0927 01:43:32.113009   69333 logs.go:276] 0 containers: []
	W0927 01:43:32.113020   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:32.113031   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:32.113046   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:32.168192   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:32.168227   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:32.182702   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:32.182727   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:32.255797   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:32.255824   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:32.255835   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:32.336083   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:32.336115   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:34.880981   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:34.894904   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:34.894976   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:34.933459   69333 cri.go:89] found id: ""
	I0927 01:43:34.933482   69333 logs.go:276] 0 containers: []
	W0927 01:43:34.933490   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:34.933498   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:34.933555   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:34.966893   69333 cri.go:89] found id: ""
	I0927 01:43:34.966917   69333 logs.go:276] 0 containers: []
	W0927 01:43:34.966926   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:34.966933   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:34.966992   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:35.002878   69333 cri.go:89] found id: ""
	I0927 01:43:35.002899   69333 logs.go:276] 0 containers: []
	W0927 01:43:35.002907   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:35.002912   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:35.002970   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:35.039871   69333 cri.go:89] found id: ""
	I0927 01:43:35.039898   69333 logs.go:276] 0 containers: []
	W0927 01:43:35.039908   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:35.039915   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:35.039977   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:35.078229   69333 cri.go:89] found id: ""
	I0927 01:43:35.078255   69333 logs.go:276] 0 containers: []
	W0927 01:43:35.078267   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:35.078274   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:35.078342   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:35.114369   69333 cri.go:89] found id: ""
	I0927 01:43:35.114397   69333 logs.go:276] 0 containers: []
	W0927 01:43:35.114408   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:35.114415   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:35.114475   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:35.148072   69333 cri.go:89] found id: ""
	I0927 01:43:35.148100   69333 logs.go:276] 0 containers: []
	W0927 01:43:35.148110   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:35.148117   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:35.148188   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:35.184020   69333 cri.go:89] found id: ""
	I0927 01:43:35.184051   69333 logs.go:276] 0 containers: []
	W0927 01:43:35.184062   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:35.184073   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:35.184086   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:35.197332   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:35.197355   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:35.273860   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:35.273889   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:35.273904   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:35.354647   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:35.354680   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:35.392622   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:35.392651   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:37.943024   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:37.957265   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:37.957329   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:37.991294   69333 cri.go:89] found id: ""
	I0927 01:43:37.991348   69333 logs.go:276] 0 containers: []
	W0927 01:43:37.991362   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:37.991368   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:37.991421   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:38.026960   69333 cri.go:89] found id: ""
	I0927 01:43:38.026981   69333 logs.go:276] 0 containers: []
	W0927 01:43:38.026990   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:38.026998   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:38.027057   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:38.063540   69333 cri.go:89] found id: ""
	I0927 01:43:38.063563   69333 logs.go:276] 0 containers: []
	W0927 01:43:38.063571   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:38.063576   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:38.063627   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:38.099554   69333 cri.go:89] found id: ""
	I0927 01:43:38.099602   69333 logs.go:276] 0 containers: []
	W0927 01:43:38.099613   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:38.099621   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:38.099689   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:38.136576   69333 cri.go:89] found id: ""
	I0927 01:43:38.136604   69333 logs.go:276] 0 containers: []
	W0927 01:43:38.136615   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:38.136623   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:38.136676   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:38.170411   69333 cri.go:89] found id: ""
	I0927 01:43:38.170441   69333 logs.go:276] 0 containers: []
	W0927 01:43:38.170452   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:38.170458   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:38.170512   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:38.211902   69333 cri.go:89] found id: ""
	I0927 01:43:38.211934   69333 logs.go:276] 0 containers: []
	W0927 01:43:38.211945   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:38.211951   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:38.212007   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:38.247850   69333 cri.go:89] found id: ""
	I0927 01:43:38.247875   69333 logs.go:276] 0 containers: []
	W0927 01:43:38.247885   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:38.247895   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:38.247913   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:38.329353   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:38.329384   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:38.369114   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:38.369148   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:38.420578   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:38.420613   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:38.434019   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:38.434050   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:38.517921   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:41.018609   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:41.032308   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:41.032370   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:41.068491   69333 cri.go:89] found id: ""
	I0927 01:43:41.068518   69333 logs.go:276] 0 containers: []
	W0927 01:43:41.068529   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:41.068536   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:41.068597   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:41.106527   69333 cri.go:89] found id: ""
	I0927 01:43:41.106555   69333 logs.go:276] 0 containers: []
	W0927 01:43:41.106565   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:41.106571   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:41.106634   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:41.142846   69333 cri.go:89] found id: ""
	I0927 01:43:41.142870   69333 logs.go:276] 0 containers: []
	W0927 01:43:41.142880   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:41.142887   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:41.142949   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:41.187499   69333 cri.go:89] found id: ""
	I0927 01:43:41.187525   69333 logs.go:276] 0 containers: []
	W0927 01:43:41.187536   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:41.187544   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:41.187606   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:41.226040   69333 cri.go:89] found id: ""
	I0927 01:43:41.226063   69333 logs.go:276] 0 containers: []
	W0927 01:43:41.226070   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:41.226076   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:41.226153   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:41.261399   69333 cri.go:89] found id: ""
	I0927 01:43:41.261429   69333 logs.go:276] 0 containers: []
	W0927 01:43:41.261440   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:41.261446   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:41.261493   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:41.300709   69333 cri.go:89] found id: ""
	I0927 01:43:41.300730   69333 logs.go:276] 0 containers: []
	W0927 01:43:41.300737   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:41.300743   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:41.300799   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:41.335725   69333 cri.go:89] found id: ""
	I0927 01:43:41.335751   69333 logs.go:276] 0 containers: []
	W0927 01:43:41.335759   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:41.335767   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:41.335776   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:41.387756   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:41.387788   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:41.401717   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:41.401743   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:41.479524   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:41.479548   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:41.479562   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:41.559926   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:41.559959   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:44.107615   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:44.122628   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:44.122690   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:44.163496   69333 cri.go:89] found id: ""
	I0927 01:43:44.163521   69333 logs.go:276] 0 containers: []
	W0927 01:43:44.163529   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:44.163541   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:44.163588   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:44.203488   69333 cri.go:89] found id: ""
	I0927 01:43:44.203519   69333 logs.go:276] 0 containers: []
	W0927 01:43:44.203529   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:44.203535   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:44.203600   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:44.238111   69333 cri.go:89] found id: ""
	I0927 01:43:44.238141   69333 logs.go:276] 0 containers: []
	W0927 01:43:44.238148   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:44.238154   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:44.238221   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:44.272954   69333 cri.go:89] found id: ""
	I0927 01:43:44.272981   69333 logs.go:276] 0 containers: []
	W0927 01:43:44.272991   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:44.272998   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:44.273057   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:44.309700   69333 cri.go:89] found id: ""
	I0927 01:43:44.309719   69333 logs.go:276] 0 containers: []
	W0927 01:43:44.309726   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:44.309731   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:44.309776   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:44.344532   69333 cri.go:89] found id: ""
	I0927 01:43:44.344563   69333 logs.go:276] 0 containers: []
	W0927 01:43:44.344573   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:44.344580   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:44.344641   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:44.379354   69333 cri.go:89] found id: ""
	I0927 01:43:44.379380   69333 logs.go:276] 0 containers: []
	W0927 01:43:44.379391   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:44.379399   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:44.379461   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:44.415297   69333 cri.go:89] found id: ""
	I0927 01:43:44.415344   69333 logs.go:276] 0 containers: []
	W0927 01:43:44.415356   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:44.415366   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:44.415381   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:44.468570   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:44.468602   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:44.483419   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:44.483453   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:44.560718   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:44.560737   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:44.560753   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:44.641130   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:44.641173   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:47.188520   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:47.202189   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:47.202262   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:47.243051   69333 cri.go:89] found id: ""
	I0927 01:43:47.243075   69333 logs.go:276] 0 containers: []
	W0927 01:43:47.243083   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:47.243089   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:47.243155   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:47.280071   69333 cri.go:89] found id: ""
	I0927 01:43:47.280094   69333 logs.go:276] 0 containers: []
	W0927 01:43:47.280104   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:47.280111   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:47.280170   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:47.318458   69333 cri.go:89] found id: ""
	I0927 01:43:47.318482   69333 logs.go:276] 0 containers: []
	W0927 01:43:47.318492   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:47.318499   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:47.318551   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:47.352891   69333 cri.go:89] found id: ""
	I0927 01:43:47.352916   69333 logs.go:276] 0 containers: []
	W0927 01:43:47.352926   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:47.352933   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:47.352997   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:47.387534   69333 cri.go:89] found id: ""
	I0927 01:43:47.387560   69333 logs.go:276] 0 containers: []
	W0927 01:43:47.387569   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:47.387578   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:47.387646   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:47.422221   69333 cri.go:89] found id: ""
	I0927 01:43:47.422254   69333 logs.go:276] 0 containers: []
	W0927 01:43:47.422265   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:47.422273   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:47.422330   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:47.459624   69333 cri.go:89] found id: ""
	I0927 01:43:47.459645   69333 logs.go:276] 0 containers: []
	W0927 01:43:47.459653   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:47.459659   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:47.459706   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:47.494322   69333 cri.go:89] found id: ""
	I0927 01:43:47.494347   69333 logs.go:276] 0 containers: []
	W0927 01:43:47.494355   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:47.494363   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:47.494375   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:47.508031   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:47.508056   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:47.583920   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:47.583952   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:47.583968   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:47.665533   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:47.665568   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:47.708423   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:47.708455   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:50.261602   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:50.275548   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:50.275607   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:50.311583   69333 cri.go:89] found id: ""
	I0927 01:43:50.311610   69333 logs.go:276] 0 containers: []
	W0927 01:43:50.311620   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:50.311627   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:50.311687   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:50.347686   69333 cri.go:89] found id: ""
	I0927 01:43:50.347709   69333 logs.go:276] 0 containers: []
	W0927 01:43:50.347721   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:50.347729   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:50.347778   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:50.386627   69333 cri.go:89] found id: ""
	I0927 01:43:50.386654   69333 logs.go:276] 0 containers: []
	W0927 01:43:50.386663   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:50.386669   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:50.386719   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:50.421512   69333 cri.go:89] found id: ""
	I0927 01:43:50.421538   69333 logs.go:276] 0 containers: []
	W0927 01:43:50.421547   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:50.421552   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:50.421603   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:50.461849   69333 cri.go:89] found id: ""
	I0927 01:43:50.461872   69333 logs.go:276] 0 containers: []
	W0927 01:43:50.461880   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:50.461885   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:50.461941   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:50.496517   69333 cri.go:89] found id: ""
	I0927 01:43:50.496540   69333 logs.go:276] 0 containers: []
	W0927 01:43:50.496548   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:50.496554   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:50.496600   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:50.532595   69333 cri.go:89] found id: ""
	I0927 01:43:50.532619   69333 logs.go:276] 0 containers: []
	W0927 01:43:50.532630   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:50.532638   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:50.532687   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:50.573213   69333 cri.go:89] found id: ""
	I0927 01:43:50.573241   69333 logs.go:276] 0 containers: []
	W0927 01:43:50.573252   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:50.573262   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:50.573275   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:50.625600   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:50.625633   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:50.639512   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:50.639535   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:50.708393   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:50.708415   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:50.708436   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:50.789812   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:50.789845   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:53.335858   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:53.349369   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:53.349441   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:53.386922   69333 cri.go:89] found id: ""
	I0927 01:43:53.386947   69333 logs.go:276] 0 containers: []
	W0927 01:43:53.386955   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:53.386961   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:53.387007   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:53.423614   69333 cri.go:89] found id: ""
	I0927 01:43:53.423640   69333 logs.go:276] 0 containers: []
	W0927 01:43:53.423651   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:53.423658   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:53.423721   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:53.463245   69333 cri.go:89] found id: ""
	I0927 01:43:53.463265   69333 logs.go:276] 0 containers: []
	W0927 01:43:53.463273   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:53.463280   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:53.463344   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:53.502093   69333 cri.go:89] found id: ""
	I0927 01:43:53.502123   69333 logs.go:276] 0 containers: []
	W0927 01:43:53.502133   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:53.502140   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:53.502196   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:53.538616   69333 cri.go:89] found id: ""
	I0927 01:43:53.538641   69333 logs.go:276] 0 containers: []
	W0927 01:43:53.538652   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:53.538659   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:53.538716   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:53.578580   69333 cri.go:89] found id: ""
	I0927 01:43:53.578609   69333 logs.go:276] 0 containers: []
	W0927 01:43:53.578617   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:53.578623   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:53.578685   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:53.615240   69333 cri.go:89] found id: ""
	I0927 01:43:53.615266   69333 logs.go:276] 0 containers: []
	W0927 01:43:53.615275   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:53.615282   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:53.615356   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:53.650987   69333 cri.go:89] found id: ""
	I0927 01:43:53.651011   69333 logs.go:276] 0 containers: []
	W0927 01:43:53.651019   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:53.651028   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:53.651038   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:53.664817   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:53.664841   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:53.737875   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:53.737894   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:53.737909   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:53.827293   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:53.827345   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:53.867157   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:53.867188   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:56.423435   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:56.437837   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:56.437912   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:56.480328   69333 cri.go:89] found id: ""
	I0927 01:43:56.480349   69333 logs.go:276] 0 containers: []
	W0927 01:43:56.480357   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:56.480364   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:56.480427   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:56.520627   69333 cri.go:89] found id: ""
	I0927 01:43:56.520651   69333 logs.go:276] 0 containers: []
	W0927 01:43:56.520660   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:56.520667   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:56.520726   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:56.561527   69333 cri.go:89] found id: ""
	I0927 01:43:56.561555   69333 logs.go:276] 0 containers: []
	W0927 01:43:56.561567   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:56.561574   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:56.561634   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:56.598751   69333 cri.go:89] found id: ""
	I0927 01:43:56.598783   69333 logs.go:276] 0 containers: []
	W0927 01:43:56.598794   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:56.598801   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:56.598861   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:56.634378   69333 cri.go:89] found id: ""
	I0927 01:43:56.634410   69333 logs.go:276] 0 containers: []
	W0927 01:43:56.634422   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:56.634429   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:56.634489   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:56.669819   69333 cri.go:89] found id: ""
	I0927 01:43:56.669852   69333 logs.go:276] 0 containers: []
	W0927 01:43:56.669863   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:56.669877   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:56.669929   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:56.703715   69333 cri.go:89] found id: ""
	I0927 01:43:56.703740   69333 logs.go:276] 0 containers: []
	W0927 01:43:56.703750   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:56.703757   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:56.703820   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:56.737208   69333 cri.go:89] found id: ""
	I0927 01:43:56.737234   69333 logs.go:276] 0 containers: []
	W0927 01:43:56.737245   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:56.737255   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:56.737269   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:56.749933   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:56.749960   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:56.822331   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:56.822353   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:56.822369   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:56.904415   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:56.904454   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:56.947108   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:56.947136   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
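	Each retry above runs the same sequence of checks: it probes for a running kube-apiserver process, asks CRI-O (via crictl) whether any control-plane container exists in any state, and then gathers kubelet, dmesg, CRI-O and container-status logs. A minimal sketch of the equivalent manual checks, assuming shell access to the minikube node (the commands simply mirror the ones recorded in the log above):

	  # Is a kube-apiserver process alive at all?
	  sudo pgrep -xnf 'kube-apiserver.*minikube.*'

	  # Ask the container runtime for control-plane containers (all states, IDs only).
	  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
	    sudo crictl ps -a --quiet --name="$name"
	  done

	  # Gather the same logs minikube collects on each failed attempt.
	  sudo journalctl -u kubelet -n 400
	  sudo journalctl -u crio -n 400
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	  sudo crictl ps -a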
	I0927 01:43:59.500580   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:59.523807   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:59.523888   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:59.562931   69333 cri.go:89] found id: ""
	I0927 01:43:59.562955   69333 logs.go:276] 0 containers: []
	W0927 01:43:59.562963   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:59.562968   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:59.563013   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:59.599321   69333 cri.go:89] found id: ""
	I0927 01:43:59.599348   69333 logs.go:276] 0 containers: []
	W0927 01:43:59.599358   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:59.599363   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:59.599418   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:59.634404   69333 cri.go:89] found id: ""
	I0927 01:43:59.634431   69333 logs.go:276] 0 containers: []
	W0927 01:43:59.634441   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:59.634448   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:59.634498   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:59.672022   69333 cri.go:89] found id: ""
	I0927 01:43:59.672052   69333 logs.go:276] 0 containers: []
	W0927 01:43:59.672066   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:59.672074   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:59.672134   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:59.704617   69333 cri.go:89] found id: ""
	I0927 01:43:59.704647   69333 logs.go:276] 0 containers: []
	W0927 01:43:59.704657   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:59.704664   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:59.704712   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:59.740479   69333 cri.go:89] found id: ""
	I0927 01:43:59.740504   69333 logs.go:276] 0 containers: []
	W0927 01:43:59.740512   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:59.740517   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:59.740579   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:59.777123   69333 cri.go:89] found id: ""
	I0927 01:43:59.777155   69333 logs.go:276] 0 containers: []
	W0927 01:43:59.777166   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:59.777174   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:59.777234   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:59.817780   69333 cri.go:89] found id: ""
	I0927 01:43:59.817803   69333 logs.go:276] 0 containers: []
	W0927 01:43:59.817825   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:59.817841   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:59.817856   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:59.831252   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:59.831278   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:59.901912   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:59.901936   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:59.901949   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:59.983001   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:59.983034   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:00.030989   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:00.031020   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:02.583949   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:02.596723   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:02.596798   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:02.630927   69333 cri.go:89] found id: ""
	I0927 01:44:02.630953   69333 logs.go:276] 0 containers: []
	W0927 01:44:02.630962   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:02.630967   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:02.631012   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:02.664156   69333 cri.go:89] found id: ""
	I0927 01:44:02.664186   69333 logs.go:276] 0 containers: []
	W0927 01:44:02.664198   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:02.664205   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:02.664259   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:02.698823   69333 cri.go:89] found id: ""
	I0927 01:44:02.698847   69333 logs.go:276] 0 containers: []
	W0927 01:44:02.698860   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:02.698865   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:02.698913   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:02.736114   69333 cri.go:89] found id: ""
	I0927 01:44:02.736142   69333 logs.go:276] 0 containers: []
	W0927 01:44:02.736154   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:02.736161   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:02.736221   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:02.769739   69333 cri.go:89] found id: ""
	I0927 01:44:02.769763   69333 logs.go:276] 0 containers: []
	W0927 01:44:02.769771   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:02.769785   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:02.769844   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:02.804798   69333 cri.go:89] found id: ""
	I0927 01:44:02.804871   69333 logs.go:276] 0 containers: []
	W0927 01:44:02.804887   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:02.804898   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:02.804958   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:02.841197   69333 cri.go:89] found id: ""
	I0927 01:44:02.841224   69333 logs.go:276] 0 containers: []
	W0927 01:44:02.841236   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:02.841243   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:02.841301   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:02.881278   69333 cri.go:89] found id: ""
	I0927 01:44:02.881310   69333 logs.go:276] 0 containers: []
	W0927 01:44:02.881321   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:02.881331   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:02.881345   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:02.935149   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:02.935183   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:02.950245   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:02.950273   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:03.020241   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:03.020263   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:03.020277   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:03.104467   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:03.104503   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
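	The repeated "connection to the server localhost:8443 was refused" error from kubectl describe nodes restates the same condition each time: nothing is listening on the apiserver port because no kube-apiserver container has come up. A hedged way to confirm that from the node (hypothetical commands, not part of the minikube output above, assuming ss and curl are present):

	  # Is anything listening on the apiserver port?
	  sudo ss -ltnp | grep ':8443' || echo 'nothing listening on 8443'

	  # Unauthenticated health probe; expect connection refused while the apiserver is down.
	  curl -k https://localhost:8443/healthz || true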
	I0927 01:44:05.643070   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:05.656656   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:05.656716   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:05.694022   69333 cri.go:89] found id: ""
	I0927 01:44:05.694045   69333 logs.go:276] 0 containers: []
	W0927 01:44:05.694053   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:05.694059   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:05.694123   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:05.728575   69333 cri.go:89] found id: ""
	I0927 01:44:05.728600   69333 logs.go:276] 0 containers: []
	W0927 01:44:05.728607   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:05.728613   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:05.728667   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:05.768546   69333 cri.go:89] found id: ""
	I0927 01:44:05.768572   69333 logs.go:276] 0 containers: []
	W0927 01:44:05.768583   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:05.768590   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:05.768652   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:05.809504   69333 cri.go:89] found id: ""
	I0927 01:44:05.809527   69333 logs.go:276] 0 containers: []
	W0927 01:44:05.809536   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:05.809543   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:05.809600   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:05.846387   69333 cri.go:89] found id: ""
	I0927 01:44:05.846415   69333 logs.go:276] 0 containers: []
	W0927 01:44:05.846422   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:05.846428   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:05.846479   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:05.879579   69333 cri.go:89] found id: ""
	I0927 01:44:05.879608   69333 logs.go:276] 0 containers: []
	W0927 01:44:05.879619   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:05.879626   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:05.879684   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:05.928932   69333 cri.go:89] found id: ""
	I0927 01:44:05.928961   69333 logs.go:276] 0 containers: []
	W0927 01:44:05.928970   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:05.928978   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:05.929037   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:05.986463   69333 cri.go:89] found id: ""
	I0927 01:44:05.986490   69333 logs.go:276] 0 containers: []
	W0927 01:44:05.986499   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:05.986507   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:05.986521   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:06.039984   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:06.040011   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:06.053025   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:06.053051   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:06.127277   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:06.127316   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:06.127330   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:06.201473   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:06.201504   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:08.739339   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:08.753354   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:08.753418   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:08.788513   69333 cri.go:89] found id: ""
	I0927 01:44:08.788544   69333 logs.go:276] 0 containers: []
	W0927 01:44:08.788556   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:08.788563   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:08.788648   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:08.824615   69333 cri.go:89] found id: ""
	I0927 01:44:08.824642   69333 logs.go:276] 0 containers: []
	W0927 01:44:08.824653   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:08.824661   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:08.824724   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:08.858327   69333 cri.go:89] found id: ""
	I0927 01:44:08.858354   69333 logs.go:276] 0 containers: []
	W0927 01:44:08.858365   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:08.858372   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:08.858430   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:08.896140   69333 cri.go:89] found id: ""
	I0927 01:44:08.896168   69333 logs.go:276] 0 containers: []
	W0927 01:44:08.896175   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:08.896181   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:08.896229   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:08.931525   69333 cri.go:89] found id: ""
	I0927 01:44:08.931547   69333 logs.go:276] 0 containers: []
	W0927 01:44:08.931554   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:08.931560   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:08.931618   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:08.970224   69333 cri.go:89] found id: ""
	I0927 01:44:08.970246   69333 logs.go:276] 0 containers: []
	W0927 01:44:08.970256   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:08.970263   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:08.970331   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:09.007213   69333 cri.go:89] found id: ""
	I0927 01:44:09.007240   69333 logs.go:276] 0 containers: []
	W0927 01:44:09.007248   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:09.007255   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:09.007334   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:09.043078   69333 cri.go:89] found id: ""
	I0927 01:44:09.043111   69333 logs.go:276] 0 containers: []
	W0927 01:44:09.043122   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:09.043132   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:09.043147   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:09.096768   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:09.096801   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:09.110721   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:09.110747   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:09.182966   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:09.182990   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:09.183004   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:09.259497   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:09.259541   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:11.797307   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:11.812141   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:11.812196   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:11.846429   69333 cri.go:89] found id: ""
	I0927 01:44:11.846468   69333 logs.go:276] 0 containers: []
	W0927 01:44:11.846482   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:11.846489   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:11.846598   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:11.885294   69333 cri.go:89] found id: ""
	I0927 01:44:11.885322   69333 logs.go:276] 0 containers: []
	W0927 01:44:11.885333   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:11.885339   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:11.885398   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:11.920856   69333 cri.go:89] found id: ""
	I0927 01:44:11.920884   69333 logs.go:276] 0 containers: []
	W0927 01:44:11.920892   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:11.920898   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:11.920946   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:11.964540   69333 cri.go:89] found id: ""
	I0927 01:44:11.964564   69333 logs.go:276] 0 containers: []
	W0927 01:44:11.964574   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:11.964581   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:11.964634   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:12.000596   69333 cri.go:89] found id: ""
	I0927 01:44:12.000619   69333 logs.go:276] 0 containers: []
	W0927 01:44:12.000629   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:12.000636   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:12.000697   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:12.037773   69333 cri.go:89] found id: ""
	I0927 01:44:12.037808   69333 logs.go:276] 0 containers: []
	W0927 01:44:12.037819   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:12.037831   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:12.037893   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:12.074646   69333 cri.go:89] found id: ""
	I0927 01:44:12.074676   69333 logs.go:276] 0 containers: []
	W0927 01:44:12.074687   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:12.074692   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:12.074740   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:12.111771   69333 cri.go:89] found id: ""
	I0927 01:44:12.111802   69333 logs.go:276] 0 containers: []
	W0927 01:44:12.111813   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:12.111824   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:12.111837   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:12.160938   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:12.160971   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:12.175576   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:12.175605   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:12.245227   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:12.245263   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:12.245278   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:12.325161   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:12.325194   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:14.867795   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:14.881053   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:14.881130   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:14.915193   69333 cri.go:89] found id: ""
	I0927 01:44:14.915224   69333 logs.go:276] 0 containers: []
	W0927 01:44:14.915234   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:14.915241   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:14.915318   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:14.951758   69333 cri.go:89] found id: ""
	I0927 01:44:14.951789   69333 logs.go:276] 0 containers: []
	W0927 01:44:14.951801   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:14.951808   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:14.951860   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:14.987875   69333 cri.go:89] found id: ""
	I0927 01:44:14.987906   69333 logs.go:276] 0 containers: []
	W0927 01:44:14.987917   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:14.987924   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:14.987985   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:15.025780   69333 cri.go:89] found id: ""
	I0927 01:44:15.025810   69333 logs.go:276] 0 containers: []
	W0927 01:44:15.025820   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:15.025828   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:15.025884   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:15.062135   69333 cri.go:89] found id: ""
	I0927 01:44:15.062157   69333 logs.go:276] 0 containers: []
	W0927 01:44:15.062165   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:15.062172   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:15.062225   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:15.097090   69333 cri.go:89] found id: ""
	I0927 01:44:15.097112   69333 logs.go:276] 0 containers: []
	W0927 01:44:15.097119   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:15.097126   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:15.097170   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:15.130528   69333 cri.go:89] found id: ""
	I0927 01:44:15.130552   69333 logs.go:276] 0 containers: []
	W0927 01:44:15.130561   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:15.130569   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:15.130615   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:15.165422   69333 cri.go:89] found id: ""
	I0927 01:44:15.165450   69333 logs.go:276] 0 containers: []
	W0927 01:44:15.165457   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:15.165465   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:15.165474   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:15.214612   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:15.214651   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:15.230294   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:15.230318   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:15.303339   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:15.303362   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:15.303375   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:15.382046   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:15.382081   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:17.923331   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:17.937693   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:17.937765   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:17.972677   69333 cri.go:89] found id: ""
	I0927 01:44:17.972699   69333 logs.go:276] 0 containers: []
	W0927 01:44:17.972707   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:17.972714   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:17.972764   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:18.004818   69333 cri.go:89] found id: ""
	I0927 01:44:18.004846   69333 logs.go:276] 0 containers: []
	W0927 01:44:18.004854   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:18.004860   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:18.004907   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:18.044693   69333 cri.go:89] found id: ""
	I0927 01:44:18.044716   69333 logs.go:276] 0 containers: []
	W0927 01:44:18.044723   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:18.044728   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:18.044772   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:18.079205   69333 cri.go:89] found id: ""
	I0927 01:44:18.079235   69333 logs.go:276] 0 containers: []
	W0927 01:44:18.079244   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:18.079249   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:18.079299   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:18.115272   69333 cri.go:89] found id: ""
	I0927 01:44:18.115322   69333 logs.go:276] 0 containers: []
	W0927 01:44:18.115335   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:18.115343   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:18.115412   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:18.150165   69333 cri.go:89] found id: ""
	I0927 01:44:18.150195   69333 logs.go:276] 0 containers: []
	W0927 01:44:18.150206   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:18.150213   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:18.150275   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:18.184971   69333 cri.go:89] found id: ""
	I0927 01:44:18.184999   69333 logs.go:276] 0 containers: []
	W0927 01:44:18.185009   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:18.185016   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:18.185083   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:18.219955   69333 cri.go:89] found id: ""
	I0927 01:44:18.219985   69333 logs.go:276] 0 containers: []
	W0927 01:44:18.219997   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:18.220008   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:18.220020   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:18.269713   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:18.269748   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:18.285224   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:18.285251   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:18.364887   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:18.364912   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:18.364927   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:18.450667   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:18.450706   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:20.991648   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:21.006472   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:21.006529   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:21.043455   69333 cri.go:89] found id: ""
	I0927 01:44:21.043476   69333 logs.go:276] 0 containers: []
	W0927 01:44:21.043486   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:21.043493   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:21.043549   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:21.080365   69333 cri.go:89] found id: ""
	I0927 01:44:21.080391   69333 logs.go:276] 0 containers: []
	W0927 01:44:21.080399   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:21.080405   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:21.080449   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:21.117576   69333 cri.go:89] found id: ""
	I0927 01:44:21.117624   69333 logs.go:276] 0 containers: []
	W0927 01:44:21.117636   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:21.117642   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:21.117703   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:21.154538   69333 cri.go:89] found id: ""
	I0927 01:44:21.154564   69333 logs.go:276] 0 containers: []
	W0927 01:44:21.154576   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:21.154584   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:21.154638   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:21.190046   69333 cri.go:89] found id: ""
	I0927 01:44:21.190070   69333 logs.go:276] 0 containers: []
	W0927 01:44:21.190080   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:21.190086   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:21.190147   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:21.226383   69333 cri.go:89] found id: ""
	I0927 01:44:21.226407   69333 logs.go:276] 0 containers: []
	W0927 01:44:21.226417   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:21.226424   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:21.226485   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:21.262090   69333 cri.go:89] found id: ""
	I0927 01:44:21.262113   69333 logs.go:276] 0 containers: []
	W0927 01:44:21.262124   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:21.262132   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:21.262188   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:21.297675   69333 cri.go:89] found id: ""
	I0927 01:44:21.297697   69333 logs.go:276] 0 containers: []
	W0927 01:44:21.297706   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:21.297716   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:21.297728   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:21.349668   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:21.349705   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:21.364608   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:21.364635   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:21.432570   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:21.432596   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:21.432612   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:21.507616   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:21.507661   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:24.054212   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:24.067954   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:24.068014   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:24.107017   69333 cri.go:89] found id: ""
	I0927 01:44:24.107045   69333 logs.go:276] 0 containers: []
	W0927 01:44:24.107056   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:24.107063   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:24.107124   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:24.144373   69333 cri.go:89] found id: ""
	I0927 01:44:24.144398   69333 logs.go:276] 0 containers: []
	W0927 01:44:24.144406   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:24.144411   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:24.144473   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:24.180010   69333 cri.go:89] found id: ""
	I0927 01:44:24.180038   69333 logs.go:276] 0 containers: []
	W0927 01:44:24.180048   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:24.180056   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:24.180118   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:24.214387   69333 cri.go:89] found id: ""
	I0927 01:44:24.214413   69333 logs.go:276] 0 containers: []
	W0927 01:44:24.214421   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:24.214426   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:24.214472   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:24.252597   69333 cri.go:89] found id: ""
	I0927 01:44:24.252623   69333 logs.go:276] 0 containers: []
	W0927 01:44:24.252631   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:24.252643   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:24.252705   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:24.292044   69333 cri.go:89] found id: ""
	I0927 01:44:24.292072   69333 logs.go:276] 0 containers: []
	W0927 01:44:24.292082   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:24.292089   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:24.292158   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:24.329899   69333 cri.go:89] found id: ""
	I0927 01:44:24.329924   69333 logs.go:276] 0 containers: []
	W0927 01:44:24.329934   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:24.329940   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:24.329998   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:24.367964   69333 cri.go:89] found id: ""
	I0927 01:44:24.367989   69333 logs.go:276] 0 containers: []
	W0927 01:44:24.368000   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:24.368010   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:24.368025   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:24.384151   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:24.384184   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:24.456916   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:24.456940   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:24.456958   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:24.539362   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:24.539399   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:24.578384   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:24.578411   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:27.132700   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:27.146218   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:27.146294   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:27.180958   69333 cri.go:89] found id: ""
	I0927 01:44:27.180984   69333 logs.go:276] 0 containers: []
	W0927 01:44:27.180992   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:27.180997   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:27.181043   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:27.215213   69333 cri.go:89] found id: ""
	I0927 01:44:27.215236   69333 logs.go:276] 0 containers: []
	W0927 01:44:27.215243   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:27.215249   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:27.215293   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:27.258192   69333 cri.go:89] found id: ""
	I0927 01:44:27.258216   69333 logs.go:276] 0 containers: []
	W0927 01:44:27.258226   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:27.258233   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:27.258289   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:27.292717   69333 cri.go:89] found id: ""
	I0927 01:44:27.292742   69333 logs.go:276] 0 containers: []
	W0927 01:44:27.292753   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:27.292760   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:27.292818   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:27.328038   69333 cri.go:89] found id: ""
	I0927 01:44:27.328066   69333 logs.go:276] 0 containers: []
	W0927 01:44:27.328076   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:27.328083   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:27.328152   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:27.363513   69333 cri.go:89] found id: ""
	I0927 01:44:27.363539   69333 logs.go:276] 0 containers: []
	W0927 01:44:27.363548   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:27.363553   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:27.363610   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:27.402201   69333 cri.go:89] found id: ""
	I0927 01:44:27.402223   69333 logs.go:276] 0 containers: []
	W0927 01:44:27.402231   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:27.402237   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:27.402290   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:27.436952   69333 cri.go:89] found id: ""
	I0927 01:44:27.436979   69333 logs.go:276] 0 containers: []
	W0927 01:44:27.436987   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:27.436995   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:27.437009   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:27.487908   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:27.487938   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:27.502170   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:27.502199   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:27.583909   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:27.583931   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:27.583943   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:27.660248   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:27.660286   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:30.201211   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:30.214276   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:30.214350   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:30.252445   69333 cri.go:89] found id: ""
	I0927 01:44:30.252474   69333 logs.go:276] 0 containers: []
	W0927 01:44:30.252484   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:30.252490   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:30.252538   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:30.287574   69333 cri.go:89] found id: ""
	I0927 01:44:30.287603   69333 logs.go:276] 0 containers: []
	W0927 01:44:30.287614   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:30.287621   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:30.287693   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:30.324674   69333 cri.go:89] found id: ""
	I0927 01:44:30.324699   69333 logs.go:276] 0 containers: []
	W0927 01:44:30.324711   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:30.324718   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:30.324779   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:30.360493   69333 cri.go:89] found id: ""
	I0927 01:44:30.360521   69333 logs.go:276] 0 containers: []
	W0927 01:44:30.360531   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:30.360539   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:30.360640   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:30.396219   69333 cri.go:89] found id: ""
	I0927 01:44:30.396252   69333 logs.go:276] 0 containers: []
	W0927 01:44:30.396263   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:30.396270   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:30.396328   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:30.431524   69333 cri.go:89] found id: ""
	I0927 01:44:30.431546   69333 logs.go:276] 0 containers: []
	W0927 01:44:30.431558   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:30.431564   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:30.431607   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:30.465887   69333 cri.go:89] found id: ""
	I0927 01:44:30.465915   69333 logs.go:276] 0 containers: []
	W0927 01:44:30.465926   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:30.465933   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:30.466000   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:30.501364   69333 cri.go:89] found id: ""
	I0927 01:44:30.501391   69333 logs.go:276] 0 containers: []
	W0927 01:44:30.501402   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:30.501411   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:30.501425   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:30.556344   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:30.556377   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:30.572619   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:30.572649   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:30.645996   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:30.646020   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:30.646032   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:30.737458   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:30.737531   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
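	(The block above is one full pass of minikube's diagnostic loop while it waits for the apiserver: it probes for a kube-apiserver process with pgrep, asks the CRI runtime via crictl for each expected control-plane container by name, and, finding none, falls back to gathering kubelet, dmesg, describe-nodes, CRI-O and container-status logs. The commands below are a minimal shell sketch of that same pass, lifted directly from the "Run:" lines in this log; the binary path and kubectl version are taken as-is from this run and may differ elsewhere.)

	# Probe for a running apiserver process (as in the pgrep line above)
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'

	# Ask the CRI runtime for each expected control-plane container by name
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	  sudo crictl ps -a --quiet --name="$name"
	done

	# Fallback log gathering when no containers are found
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	sudo journalctl -u crio -n 400
	sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a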
	I0927 01:44:33.284306   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:33.298164   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:33.298224   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:33.334599   69333 cri.go:89] found id: ""
	I0927 01:44:33.334625   69333 logs.go:276] 0 containers: []
	W0927 01:44:33.334634   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:33.334654   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:33.334718   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:33.369006   69333 cri.go:89] found id: ""
	I0927 01:44:33.369034   69333 logs.go:276] 0 containers: []
	W0927 01:44:33.369044   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:33.369051   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:33.369119   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:33.407875   69333 cri.go:89] found id: ""
	I0927 01:44:33.407904   69333 logs.go:276] 0 containers: []
	W0927 01:44:33.407912   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:33.407918   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:33.407974   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:33.441048   69333 cri.go:89] found id: ""
	I0927 01:44:33.441083   69333 logs.go:276] 0 containers: []
	W0927 01:44:33.441094   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:33.441101   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:33.441156   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:33.478458   69333 cri.go:89] found id: ""
	I0927 01:44:33.478503   69333 logs.go:276] 0 containers: []
	W0927 01:44:33.478515   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:33.478522   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:33.478586   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:33.513756   69333 cri.go:89] found id: ""
	I0927 01:44:33.513784   69333 logs.go:276] 0 containers: []
	W0927 01:44:33.513795   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:33.513802   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:33.513862   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:33.554351   69333 cri.go:89] found id: ""
	I0927 01:44:33.554392   69333 logs.go:276] 0 containers: []
	W0927 01:44:33.554403   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:33.554410   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:33.554472   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:33.588484   69333 cri.go:89] found id: ""
	I0927 01:44:33.588512   69333 logs.go:276] 0 containers: []
	W0927 01:44:33.588533   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:33.588544   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:33.588559   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:33.665735   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:33.665775   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:33.704654   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:33.704687   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:33.755444   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:33.755475   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:33.770069   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:33.770095   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:33.841531   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
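	(Every describe-nodes attempt in this log fails the same way: kubectl cannot reach the apiserver at localhost:8443, which is consistent with the empty crictl listings above showing that no kube-apiserver container ever starts. A quick manual check from the node would look something like the sketch below; the /healthz probe is an illustrative assumption, using the standard apiserver health endpoint, and is not something this test actually runs.)

	# Confirm whether any kube-apiserver container exists at all
	sudo crictl ps -a | grep kube-apiserver || echo "no kube-apiserver container"

	# Probe the endpoint kubectl is failing to reach
	# (illustrative only; /healthz is the standard apiserver health endpoint)
	curl -k https://localhost:8443/healthz || echo "apiserver not reachable on 8443"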
	I0927 01:44:36.341963   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:36.355219   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:36.355294   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:36.395149   69333 cri.go:89] found id: ""
	I0927 01:44:36.395185   69333 logs.go:276] 0 containers: []
	W0927 01:44:36.395196   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:36.395203   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:36.395262   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:36.434620   69333 cri.go:89] found id: ""
	I0927 01:44:36.434649   69333 logs.go:276] 0 containers: []
	W0927 01:44:36.434661   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:36.434667   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:36.434729   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:36.468328   69333 cri.go:89] found id: ""
	I0927 01:44:36.468349   69333 logs.go:276] 0 containers: []
	W0927 01:44:36.468357   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:36.468362   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:36.468427   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:36.506386   69333 cri.go:89] found id: ""
	I0927 01:44:36.506413   69333 logs.go:276] 0 containers: []
	W0927 01:44:36.506421   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:36.506427   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:36.506482   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:36.546583   69333 cri.go:89] found id: ""
	I0927 01:44:36.546607   69333 logs.go:276] 0 containers: []
	W0927 01:44:36.546614   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:36.546620   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:36.546665   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:36.581694   69333 cri.go:89] found id: ""
	I0927 01:44:36.581721   69333 logs.go:276] 0 containers: []
	W0927 01:44:36.581730   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:36.581737   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:36.581782   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:36.617775   69333 cri.go:89] found id: ""
	I0927 01:44:36.617799   69333 logs.go:276] 0 containers: []
	W0927 01:44:36.617807   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:36.617813   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:36.617877   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:36.654443   69333 cri.go:89] found id: ""
	I0927 01:44:36.654470   69333 logs.go:276] 0 containers: []
	W0927 01:44:36.654478   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:36.654486   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:36.654496   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:36.705787   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:36.705817   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:36.720643   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:36.720677   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:36.800037   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:36.800061   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:36.800091   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:36.886845   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:36.886884   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:39.429349   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:39.442899   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:39.442973   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:39.481752   69333 cri.go:89] found id: ""
	I0927 01:44:39.481782   69333 logs.go:276] 0 containers: []
	W0927 01:44:39.481793   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:39.481799   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:39.481858   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:39.516074   69333 cri.go:89] found id: ""
	I0927 01:44:39.516103   69333 logs.go:276] 0 containers: []
	W0927 01:44:39.516114   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:39.516130   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:39.516188   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:39.563351   69333 cri.go:89] found id: ""
	I0927 01:44:39.563375   69333 logs.go:276] 0 containers: []
	W0927 01:44:39.563386   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:39.563392   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:39.563455   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:39.601417   69333 cri.go:89] found id: ""
	I0927 01:44:39.601445   69333 logs.go:276] 0 containers: []
	W0927 01:44:39.601455   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:39.601469   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:39.601529   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:39.634537   69333 cri.go:89] found id: ""
	I0927 01:44:39.634565   69333 logs.go:276] 0 containers: []
	W0927 01:44:39.634576   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:39.634582   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:39.634642   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:39.668910   69333 cri.go:89] found id: ""
	I0927 01:44:39.668937   69333 logs.go:276] 0 containers: []
	W0927 01:44:39.668948   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:39.668955   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:39.669013   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:39.701992   69333 cri.go:89] found id: ""
	I0927 01:44:39.702014   69333 logs.go:276] 0 containers: []
	W0927 01:44:39.702021   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:39.702027   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:39.702074   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:39.741579   69333 cri.go:89] found id: ""
	I0927 01:44:39.741601   69333 logs.go:276] 0 containers: []
	W0927 01:44:39.741610   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:39.741618   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:39.741627   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:39.806476   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:39.806510   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:39.820228   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:39.820255   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:39.893137   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:39.893167   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:39.893181   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:39.974477   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:39.974514   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:42.517449   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:42.532200   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:42.532266   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:42.568872   69333 cri.go:89] found id: ""
	I0927 01:44:42.568901   69333 logs.go:276] 0 containers: []
	W0927 01:44:42.568911   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:42.568919   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:42.568980   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:42.605069   69333 cri.go:89] found id: ""
	I0927 01:44:42.605220   69333 logs.go:276] 0 containers: []
	W0927 01:44:42.605251   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:42.605261   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:42.605335   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:42.641637   69333 cri.go:89] found id: ""
	I0927 01:44:42.641665   69333 logs.go:276] 0 containers: []
	W0927 01:44:42.641673   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:42.641680   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:42.641742   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:42.677333   69333 cri.go:89] found id: ""
	I0927 01:44:42.677361   69333 logs.go:276] 0 containers: []
	W0927 01:44:42.677376   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:42.677382   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:42.677439   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:42.712456   69333 cri.go:89] found id: ""
	I0927 01:44:42.712484   69333 logs.go:276] 0 containers: []
	W0927 01:44:42.712495   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:42.712501   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:42.712565   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:42.745109   69333 cri.go:89] found id: ""
	I0927 01:44:42.745140   69333 logs.go:276] 0 containers: []
	W0927 01:44:42.745150   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:42.745157   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:42.745226   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:42.779427   69333 cri.go:89] found id: ""
	I0927 01:44:42.779449   69333 logs.go:276] 0 containers: []
	W0927 01:44:42.779457   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:42.779462   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:42.779508   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:42.823920   69333 cri.go:89] found id: ""
	I0927 01:44:42.823946   69333 logs.go:276] 0 containers: []
	W0927 01:44:42.823954   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:42.823963   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:42.823972   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:42.881345   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:42.881380   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:42.896076   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:42.896100   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:42.971775   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:42.971796   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:42.971809   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:43.054461   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:43.054494   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:45.596681   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:45.610817   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:45.610882   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:45.647628   69333 cri.go:89] found id: ""
	I0927 01:44:45.647654   69333 logs.go:276] 0 containers: []
	W0927 01:44:45.647662   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:45.647668   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:45.647715   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:45.685480   69333 cri.go:89] found id: ""
	I0927 01:44:45.685507   69333 logs.go:276] 0 containers: []
	W0927 01:44:45.685514   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:45.685520   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:45.685573   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:45.721601   69333 cri.go:89] found id: ""
	I0927 01:44:45.721624   69333 logs.go:276] 0 containers: []
	W0927 01:44:45.721632   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:45.721637   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:45.721700   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:45.756763   69333 cri.go:89] found id: ""
	I0927 01:44:45.756788   69333 logs.go:276] 0 containers: []
	W0927 01:44:45.756796   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:45.756802   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:45.756858   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:45.792891   69333 cri.go:89] found id: ""
	I0927 01:44:45.792917   69333 logs.go:276] 0 containers: []
	W0927 01:44:45.792927   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:45.792934   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:45.792996   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:45.828716   69333 cri.go:89] found id: ""
	I0927 01:44:45.828739   69333 logs.go:276] 0 containers: []
	W0927 01:44:45.828747   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:45.828753   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:45.828807   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:45.868813   69333 cri.go:89] found id: ""
	I0927 01:44:45.868840   69333 logs.go:276] 0 containers: []
	W0927 01:44:45.868848   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:45.868853   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:45.868905   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:45.907281   69333 cri.go:89] found id: ""
	I0927 01:44:45.907327   69333 logs.go:276] 0 containers: []
	W0927 01:44:45.907341   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:45.907352   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:45.907371   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:45.958539   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:45.958574   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:45.972540   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:45.972567   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:46.046083   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:46.046124   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:46.046141   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:46.124313   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:46.124349   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:48.673701   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:48.687673   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:48.687744   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:48.722269   69333 cri.go:89] found id: ""
	I0927 01:44:48.722291   69333 logs.go:276] 0 containers: []
	W0927 01:44:48.722302   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:48.722308   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:48.722370   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:48.758297   69333 cri.go:89] found id: ""
	I0927 01:44:48.758318   69333 logs.go:276] 0 containers: []
	W0927 01:44:48.758326   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:48.758331   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:48.758377   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:48.792706   69333 cri.go:89] found id: ""
	I0927 01:44:48.792730   69333 logs.go:276] 0 containers: []
	W0927 01:44:48.792738   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:48.792744   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:48.792792   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:48.827015   69333 cri.go:89] found id: ""
	I0927 01:44:48.827035   69333 logs.go:276] 0 containers: []
	W0927 01:44:48.827047   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:48.827052   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:48.827095   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:48.862538   69333 cri.go:89] found id: ""
	I0927 01:44:48.862564   69333 logs.go:276] 0 containers: []
	W0927 01:44:48.862572   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:48.862577   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:48.862632   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:48.896118   69333 cri.go:89] found id: ""
	I0927 01:44:48.896144   69333 logs.go:276] 0 containers: []
	W0927 01:44:48.896154   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:48.896166   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:48.896225   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:48.932483   69333 cri.go:89] found id: ""
	I0927 01:44:48.932511   69333 logs.go:276] 0 containers: []
	W0927 01:44:48.932519   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:48.932524   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:48.932576   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:48.971864   69333 cri.go:89] found id: ""
	I0927 01:44:48.971890   69333 logs.go:276] 0 containers: []
	W0927 01:44:48.971898   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:48.971906   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:48.971919   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:49.028163   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:49.028199   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:49.042780   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:49.042805   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:49.116454   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:49.116476   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:49.116491   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:49.196048   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:49.196084   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:51.735108   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:51.749191   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:51.749258   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:51.784776   69333 cri.go:89] found id: ""
	I0927 01:44:51.784804   69333 logs.go:276] 0 containers: []
	W0927 01:44:51.784815   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:51.784823   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:51.784880   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:51.822807   69333 cri.go:89] found id: ""
	I0927 01:44:51.822836   69333 logs.go:276] 0 containers: []
	W0927 01:44:51.822847   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:51.822854   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:51.822912   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:51.858700   69333 cri.go:89] found id: ""
	I0927 01:44:51.858726   69333 logs.go:276] 0 containers: []
	W0927 01:44:51.858737   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:51.858744   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:51.858812   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:51.894945   69333 cri.go:89] found id: ""
	I0927 01:44:51.894968   69333 logs.go:276] 0 containers: []
	W0927 01:44:51.894975   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:51.894980   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:51.895025   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:51.939475   69333 cri.go:89] found id: ""
	I0927 01:44:51.939503   69333 logs.go:276] 0 containers: []
	W0927 01:44:51.939518   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:51.939524   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:51.939569   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:51.982626   69333 cri.go:89] found id: ""
	I0927 01:44:51.982654   69333 logs.go:276] 0 containers: []
	W0927 01:44:51.982665   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:51.982673   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:51.982731   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:52.050446   69333 cri.go:89] found id: ""
	I0927 01:44:52.050473   69333 logs.go:276] 0 containers: []
	W0927 01:44:52.050483   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:52.050490   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:52.050549   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:52.092637   69333 cri.go:89] found id: ""
	I0927 01:44:52.092666   69333 logs.go:276] 0 containers: []
	W0927 01:44:52.092676   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:52.092686   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:52.092700   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:52.132135   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:52.132165   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:52.186537   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:52.186572   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:52.200001   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:52.200027   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:52.282068   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:52.282093   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:52.282108   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:54.866565   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:54.880400   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:54.880460   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:54.918963   69333 cri.go:89] found id: ""
	I0927 01:44:54.919004   69333 logs.go:276] 0 containers: []
	W0927 01:44:54.919027   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:54.919036   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:54.919107   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:54.959918   69333 cri.go:89] found id: ""
	I0927 01:44:54.959947   69333 logs.go:276] 0 containers: []
	W0927 01:44:54.959958   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:54.959965   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:54.960026   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:55.004348   69333 cri.go:89] found id: ""
	I0927 01:44:55.004370   69333 logs.go:276] 0 containers: []
	W0927 01:44:55.004378   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:55.004392   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:55.004446   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:55.045190   69333 cri.go:89] found id: ""
	I0927 01:44:55.045213   69333 logs.go:276] 0 containers: []
	W0927 01:44:55.045220   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:55.045225   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:55.045278   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:55.087638   69333 cri.go:89] found id: ""
	I0927 01:44:55.087663   69333 logs.go:276] 0 containers: []
	W0927 01:44:55.087671   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:55.087677   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:55.087739   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:55.126899   69333 cri.go:89] found id: ""
	I0927 01:44:55.126932   69333 logs.go:276] 0 containers: []
	W0927 01:44:55.126943   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:55.126951   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:55.127012   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:55.167593   69333 cri.go:89] found id: ""
	I0927 01:44:55.167624   69333 logs.go:276] 0 containers: []
	W0927 01:44:55.167635   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:55.167643   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:55.167706   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:55.208362   69333 cri.go:89] found id: ""
	I0927 01:44:55.208388   69333 logs.go:276] 0 containers: []
	W0927 01:44:55.208399   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:55.208409   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:55.208424   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:55.247198   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:55.247221   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:55.299408   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:55.299443   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:55.315745   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:55.315775   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:55.387499   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:55.387523   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:55.387539   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:57.968863   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:57.987921   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:57.987988   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:58.036770   69333 cri.go:89] found id: ""
	I0927 01:44:58.036802   69333 logs.go:276] 0 containers: []
	W0927 01:44:58.036813   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:58.036824   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:58.036878   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:58.072461   69333 cri.go:89] found id: ""
	I0927 01:44:58.072484   69333 logs.go:276] 0 containers: []
	W0927 01:44:58.072492   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:58.072499   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:58.072551   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:58.107247   69333 cri.go:89] found id: ""
	I0927 01:44:58.107273   69333 logs.go:276] 0 containers: []
	W0927 01:44:58.107284   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:58.107290   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:58.107365   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:58.149050   69333 cri.go:89] found id: ""
	I0927 01:44:58.149080   69333 logs.go:276] 0 containers: []
	W0927 01:44:58.149091   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:58.149099   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:58.149162   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:58.188167   69333 cri.go:89] found id: ""
	I0927 01:44:58.188198   69333 logs.go:276] 0 containers: []
	W0927 01:44:58.188209   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:58.188217   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:58.188283   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:58.224291   69333 cri.go:89] found id: ""
	I0927 01:44:58.224319   69333 logs.go:276] 0 containers: []
	W0927 01:44:58.224329   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:58.224337   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:58.224401   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:58.258786   69333 cri.go:89] found id: ""
	I0927 01:44:58.258813   69333 logs.go:276] 0 containers: []
	W0927 01:44:58.258822   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:58.258828   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:58.258885   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:58.298310   69333 cri.go:89] found id: ""
	I0927 01:44:58.298338   69333 logs.go:276] 0 containers: []
	W0927 01:44:58.298349   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:58.298359   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:58.298373   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:58.340299   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:58.340330   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:58.395097   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:58.395130   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:58.410653   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:58.410677   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:58.479437   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:58.479459   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:58.479470   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:45:01.057473   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:45:01.071746   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:45:01.071818   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:45:01.112652   69333 cri.go:89] found id: ""
	I0927 01:45:01.112676   69333 logs.go:276] 0 containers: []
	W0927 01:45:01.112684   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:45:01.112690   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:45:01.112735   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:45:01.146071   69333 cri.go:89] found id: ""
	I0927 01:45:01.146100   69333 logs.go:276] 0 containers: []
	W0927 01:45:01.146111   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:45:01.146119   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:45:01.146188   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:45:01.188640   69333 cri.go:89] found id: ""
	I0927 01:45:01.188663   69333 logs.go:276] 0 containers: []
	W0927 01:45:01.188673   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:45:01.188679   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:45:01.188743   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:45:01.225024   69333 cri.go:89] found id: ""
	I0927 01:45:01.225050   69333 logs.go:276] 0 containers: []
	W0927 01:45:01.225060   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:45:01.225067   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:45:01.225128   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:45:01.262459   69333 cri.go:89] found id: ""
	I0927 01:45:01.262487   69333 logs.go:276] 0 containers: []
	W0927 01:45:01.262498   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:45:01.262505   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:45:01.262560   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:45:01.298567   69333 cri.go:89] found id: ""
	I0927 01:45:01.298588   69333 logs.go:276] 0 containers: []
	W0927 01:45:01.298597   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:45:01.298603   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:45:01.298647   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:45:01.335051   69333 cri.go:89] found id: ""
	I0927 01:45:01.335084   69333 logs.go:276] 0 containers: []
	W0927 01:45:01.335094   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:45:01.335100   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:45:01.335149   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:45:01.371187   69333 cri.go:89] found id: ""
	I0927 01:45:01.371217   69333 logs.go:276] 0 containers: []
	W0927 01:45:01.371227   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:45:01.371237   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:45:01.371252   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:45:01.385163   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:45:01.385189   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:45:01.457256   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:45:01.457298   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:45:01.457313   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:45:01.537788   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:45:01.537819   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:45:01.580645   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:45:01.580672   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:45:04.131877   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:45:04.145175   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:45:04.145248   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:45:04.179508   69333 cri.go:89] found id: ""
	I0927 01:45:04.179535   69333 logs.go:276] 0 containers: []
	W0927 01:45:04.179545   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:45:04.179552   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:45:04.179612   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:45:04.213497   69333 cri.go:89] found id: ""
	I0927 01:45:04.213533   69333 logs.go:276] 0 containers: []
	W0927 01:45:04.213544   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:45:04.213551   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:45:04.213606   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:45:04.249708   69333 cri.go:89] found id: ""
	I0927 01:45:04.249737   69333 logs.go:276] 0 containers: []
	W0927 01:45:04.249747   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:45:04.249754   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:45:04.249824   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:45:04.288283   69333 cri.go:89] found id: ""
	I0927 01:45:04.288306   69333 logs.go:276] 0 containers: []
	W0927 01:45:04.288314   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:45:04.288319   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:45:04.288368   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:45:04.325515   69333 cri.go:89] found id: ""
	I0927 01:45:04.325539   69333 logs.go:276] 0 containers: []
	W0927 01:45:04.325549   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:45:04.325560   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:45:04.325618   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:45:04.363485   69333 cri.go:89] found id: ""
	I0927 01:45:04.363511   69333 logs.go:276] 0 containers: []
	W0927 01:45:04.363521   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:45:04.363528   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:45:04.363586   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:45:04.398834   69333 cri.go:89] found id: ""
	I0927 01:45:04.398863   69333 logs.go:276] 0 containers: []
	W0927 01:45:04.398875   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:45:04.398882   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:45:04.398948   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:45:04.433408   69333 cri.go:89] found id: ""
	I0927 01:45:04.433435   69333 logs.go:276] 0 containers: []
	W0927 01:45:04.433443   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:45:04.433451   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:45:04.433461   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:45:04.485354   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:45:04.485392   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:45:04.499007   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:45:04.499031   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:45:04.569376   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:45:04.569405   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:45:04.569420   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:45:04.646614   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:45:04.646651   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:45:07.186491   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:45:07.200510   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:45:07.200575   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:45:07.239519   69333 cri.go:89] found id: ""
	I0927 01:45:07.239542   69333 logs.go:276] 0 containers: []
	W0927 01:45:07.239553   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:45:07.239562   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:45:07.239751   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:45:07.276820   69333 cri.go:89] found id: ""
	I0927 01:45:07.276854   69333 logs.go:276] 0 containers: []
	W0927 01:45:07.276863   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:45:07.276870   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:45:07.276932   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:45:07.312580   69333 cri.go:89] found id: ""
	I0927 01:45:07.312604   69333 logs.go:276] 0 containers: []
	W0927 01:45:07.312613   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:45:07.312619   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:45:07.312676   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:45:07.350763   69333 cri.go:89] found id: ""
	I0927 01:45:07.350788   69333 logs.go:276] 0 containers: []
	W0927 01:45:07.350799   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:45:07.350806   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:45:07.350861   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:45:07.385347   69333 cri.go:89] found id: ""
	I0927 01:45:07.385376   69333 logs.go:276] 0 containers: []
	W0927 01:45:07.385383   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:45:07.385389   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:45:07.385439   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:45:07.420665   69333 cri.go:89] found id: ""
	I0927 01:45:07.420696   69333 logs.go:276] 0 containers: []
	W0927 01:45:07.420708   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:45:07.420718   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:45:07.420768   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:45:07.453707   69333 cri.go:89] found id: ""
	I0927 01:45:07.453737   69333 logs.go:276] 0 containers: []
	W0927 01:45:07.453746   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:45:07.453752   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:45:07.453806   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:45:07.489467   69333 cri.go:89] found id: ""
	I0927 01:45:07.489497   69333 logs.go:276] 0 containers: []
	W0927 01:45:07.489508   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:45:07.489520   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:45:07.489531   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:45:07.569464   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:45:07.569496   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:45:07.609123   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:45:07.609160   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:45:07.659556   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:45:07.659590   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:45:07.673163   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:45:07.673191   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:45:07.751340   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:45:10.252511   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:45:10.266651   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:45:10.266706   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:45:10.304131   69333 cri.go:89] found id: ""
	I0927 01:45:10.304160   69333 logs.go:276] 0 containers: []
	W0927 01:45:10.304171   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:45:10.304178   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:45:10.304243   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:45:10.339267   69333 cri.go:89] found id: ""
	I0927 01:45:10.339295   69333 logs.go:276] 0 containers: []
	W0927 01:45:10.339321   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:45:10.339329   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:45:10.339397   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:45:10.376268   69333 cri.go:89] found id: ""
	I0927 01:45:10.376298   69333 logs.go:276] 0 containers: []
	W0927 01:45:10.376308   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:45:10.376319   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:45:10.376380   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:45:10.413944   69333 cri.go:89] found id: ""
	I0927 01:45:10.413970   69333 logs.go:276] 0 containers: []
	W0927 01:45:10.413978   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:45:10.413984   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:45:10.414033   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:45:10.449205   69333 cri.go:89] found id: ""
	I0927 01:45:10.449226   69333 logs.go:276] 0 containers: []
	W0927 01:45:10.449234   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:45:10.449240   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:45:10.449289   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:45:10.487927   69333 cri.go:89] found id: ""
	I0927 01:45:10.487947   69333 logs.go:276] 0 containers: []
	W0927 01:45:10.487955   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:45:10.487961   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:45:10.488018   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:45:10.525062   69333 cri.go:89] found id: ""
	I0927 01:45:10.525085   69333 logs.go:276] 0 containers: []
	W0927 01:45:10.525095   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:45:10.525102   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:45:10.525163   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:45:10.560718   69333 cri.go:89] found id: ""
	I0927 01:45:10.560768   69333 logs.go:276] 0 containers: []
	W0927 01:45:10.560779   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:45:10.560790   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:45:10.560803   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:45:10.641755   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:45:10.641781   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:45:10.641796   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:45:10.719775   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:45:10.719807   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:45:10.761952   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:45:10.761978   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:45:10.815296   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:45:10.815330   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:45:13.330300   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:45:13.343840   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:45:13.343893   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:45:13.378904   69333 cri.go:89] found id: ""
	I0927 01:45:13.378933   69333 logs.go:276] 0 containers: []
	W0927 01:45:13.378944   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:45:13.378952   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:45:13.379010   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:45:13.417375   69333 cri.go:89] found id: ""
	I0927 01:45:13.417403   69333 logs.go:276] 0 containers: []
	W0927 01:45:13.417415   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:45:13.417422   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:45:13.417482   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:45:13.456265   69333 cri.go:89] found id: ""
	I0927 01:45:13.456291   69333 logs.go:276] 0 containers: []
	W0927 01:45:13.456302   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:45:13.456310   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:45:13.456358   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:45:13.502205   69333 cri.go:89] found id: ""
	I0927 01:45:13.502229   69333 logs.go:276] 0 containers: []
	W0927 01:45:13.502240   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:45:13.502247   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:45:13.502310   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:45:13.543617   69333 cri.go:89] found id: ""
	I0927 01:45:13.543642   69333 logs.go:276] 0 containers: []
	W0927 01:45:13.543652   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:45:13.543660   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:45:13.543723   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:45:13.580268   69333 cri.go:89] found id: ""
	I0927 01:45:13.580295   69333 logs.go:276] 0 containers: []
	W0927 01:45:13.580305   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:45:13.580313   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:45:13.580374   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:45:13.616681   69333 cri.go:89] found id: ""
	I0927 01:45:13.616705   69333 logs.go:276] 0 containers: []
	W0927 01:45:13.616713   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:45:13.616718   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:45:13.616765   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:45:13.653389   69333 cri.go:89] found id: ""
	I0927 01:45:13.653412   69333 logs.go:276] 0 containers: []
	W0927 01:45:13.653420   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:45:13.653430   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:45:13.653442   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:45:13.666511   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:45:13.666534   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:45:13.742282   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:45:13.742300   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:45:13.742311   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:45:13.825800   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:45:13.825836   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:45:13.876345   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:45:13.876376   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:45:16.429245   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:45:16.443286   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:45:16.443366   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:45:16.481601   69333 cri.go:89] found id: ""
	I0927 01:45:16.481626   69333 logs.go:276] 0 containers: []
	W0927 01:45:16.481637   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:45:16.481645   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:45:16.481703   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:45:16.513626   69333 cri.go:89] found id: ""
	I0927 01:45:16.513652   69333 logs.go:276] 0 containers: []
	W0927 01:45:16.513659   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:45:16.513665   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:45:16.513710   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:45:16.552531   69333 cri.go:89] found id: ""
	I0927 01:45:16.552565   69333 logs.go:276] 0 containers: []
	W0927 01:45:16.552574   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:45:16.552580   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:45:16.552636   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:45:16.587252   69333 cri.go:89] found id: ""
	I0927 01:45:16.587282   69333 logs.go:276] 0 containers: []
	W0927 01:45:16.587294   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:45:16.587316   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:45:16.587377   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:45:16.628376   69333 cri.go:89] found id: ""
	I0927 01:45:16.628401   69333 logs.go:276] 0 containers: []
	W0927 01:45:16.628410   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:45:16.628417   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:45:16.628482   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:45:16.669603   69333 cri.go:89] found id: ""
	I0927 01:45:16.669639   69333 logs.go:276] 0 containers: []
	W0927 01:45:16.669651   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:45:16.669658   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:45:16.669731   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:45:16.705581   69333 cri.go:89] found id: ""
	I0927 01:45:16.705607   69333 logs.go:276] 0 containers: []
	W0927 01:45:16.705618   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:45:16.705626   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:45:16.705682   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:45:16.740710   69333 cri.go:89] found id: ""
	I0927 01:45:16.740735   69333 logs.go:276] 0 containers: []
	W0927 01:45:16.740743   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:45:16.740759   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:45:16.740771   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:45:16.791025   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:45:16.791060   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:45:16.805990   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:45:16.806023   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:45:16.878313   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:45:16.878331   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:45:16.878346   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:45:16.966228   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:45:16.966269   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:45:19.512044   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:45:19.526801   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:45:19.526862   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:45:19.562063   69333 cri.go:89] found id: ""
	I0927 01:45:19.562089   69333 logs.go:276] 0 containers: []
	W0927 01:45:19.562098   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:45:19.562104   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:45:19.562159   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:45:19.598600   69333 cri.go:89] found id: ""
	I0927 01:45:19.598626   69333 logs.go:276] 0 containers: []
	W0927 01:45:19.598634   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:45:19.598642   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:45:19.598712   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:45:19.632544   69333 cri.go:89] found id: ""
	I0927 01:45:19.632564   69333 logs.go:276] 0 containers: []
	W0927 01:45:19.632572   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:45:19.632577   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:45:19.632635   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:45:19.671676   69333 cri.go:89] found id: ""
	I0927 01:45:19.671703   69333 logs.go:276] 0 containers: []
	W0927 01:45:19.671713   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:45:19.671721   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:45:19.671779   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:45:19.710321   69333 cri.go:89] found id: ""
	I0927 01:45:19.710351   69333 logs.go:276] 0 containers: []
	W0927 01:45:19.710362   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:45:19.710370   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:45:19.710438   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:45:19.746252   69333 cri.go:89] found id: ""
	I0927 01:45:19.746277   69333 logs.go:276] 0 containers: []
	W0927 01:45:19.746288   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:45:19.746295   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:45:19.746354   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:45:19.783089   69333 cri.go:89] found id: ""
	I0927 01:45:19.783112   69333 logs.go:276] 0 containers: []
	W0927 01:45:19.783121   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:45:19.783126   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:45:19.783189   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:45:19.821090   69333 cri.go:89] found id: ""
	I0927 01:45:19.821117   69333 logs.go:276] 0 containers: []
	W0927 01:45:19.821126   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:45:19.821134   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:45:19.821145   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:45:19.873539   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:45:19.873575   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:45:19.888446   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:45:19.888471   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:45:19.958009   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:45:19.958034   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:45:19.958050   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:45:20.037552   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:45:20.037587   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:45:22.579288   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:45:22.592789   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:45:22.592846   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:45:22.628148   69333 cri.go:89] found id: ""
	I0927 01:45:22.628178   69333 logs.go:276] 0 containers: []
	W0927 01:45:22.628186   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:45:22.628193   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:45:22.628240   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:45:22.664162   69333 cri.go:89] found id: ""
	I0927 01:45:22.664186   69333 logs.go:276] 0 containers: []
	W0927 01:45:22.664194   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:45:22.664200   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:45:22.664253   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:45:22.702077   69333 cri.go:89] found id: ""
	I0927 01:45:22.702104   69333 logs.go:276] 0 containers: []
	W0927 01:45:22.702115   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:45:22.702123   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:45:22.702183   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:45:22.739657   69333 cri.go:89] found id: ""
	I0927 01:45:22.739690   69333 logs.go:276] 0 containers: []
	W0927 01:45:22.739700   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:45:22.739708   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:45:22.739773   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:45:22.774109   69333 cri.go:89] found id: ""
	I0927 01:45:22.774137   69333 logs.go:276] 0 containers: []
	W0927 01:45:22.774148   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:45:22.774174   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:45:22.774229   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:45:22.809648   69333 cri.go:89] found id: ""
	I0927 01:45:22.809671   69333 logs.go:276] 0 containers: []
	W0927 01:45:22.809678   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:45:22.809684   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:45:22.809729   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:45:22.842598   69333 cri.go:89] found id: ""
	I0927 01:45:22.842620   69333 logs.go:276] 0 containers: []
	W0927 01:45:22.842627   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:45:22.842632   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:45:22.842677   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:45:22.877336   69333 cri.go:89] found id: ""
	I0927 01:45:22.877364   69333 logs.go:276] 0 containers: []
	W0927 01:45:22.877374   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:45:22.877382   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:45:22.877393   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:45:22.930364   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:45:22.930395   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:45:22.944174   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:45:22.944200   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:45:23.025495   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:45:23.025520   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:45:23.025534   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:45:23.101813   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:45:23.101850   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:45:25.644577   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:45:25.657820   69333 kubeadm.go:597] duration metric: took 4m3.277962916s to restartPrimaryControlPlane
	W0927 01:45:25.657898   69333 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0927 01:45:25.657929   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0927 01:45:26.111439   69333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 01:45:26.128279   69333 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 01:45:26.138354   69333 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 01:45:26.148116   69333 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 01:45:26.148132   69333 kubeadm.go:157] found existing configuration files:
	
	I0927 01:45:26.148170   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 01:45:26.157965   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 01:45:26.158012   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 01:45:26.168349   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 01:45:26.177624   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 01:45:26.177692   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 01:45:26.187584   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 01:45:26.196800   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 01:45:26.196856   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 01:45:26.205894   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 01:45:26.215316   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 01:45:26.215365   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 01:45:26.224989   69333 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0927 01:45:26.299149   69333 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0927 01:45:26.299261   69333 kubeadm.go:310] [preflight] Running pre-flight checks
	I0927 01:45:26.451113   69333 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0927 01:45:26.451282   69333 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0927 01:45:26.451457   69333 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0927 01:45:26.637960   69333 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0927 01:45:26.640682   69333 out.go:235]   - Generating certificates and keys ...
	I0927 01:45:26.640782   69333 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0927 01:45:26.640865   69333 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0927 01:45:26.640972   69333 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0927 01:45:26.641099   69333 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0927 01:45:26.641233   69333 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0927 01:45:26.641317   69333 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0927 01:45:26.641425   69333 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0927 01:45:26.641525   69333 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0927 01:45:26.641633   69333 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0927 01:45:26.641901   69333 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0927 01:45:26.642000   69333 kubeadm.go:310] [certs] Using the existing "sa" key
	I0927 01:45:26.642080   69333 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0927 01:45:26.782585   69333 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0927 01:45:27.008743   69333 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0927 01:45:27.103701   69333 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0927 01:45:27.217999   69333 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0927 01:45:27.238810   69333 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0927 01:45:27.240191   69333 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0927 01:45:27.240240   69333 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0927 01:45:27.375215   69333 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0927 01:45:27.376992   69333 out.go:235]   - Booting up control plane ...
	I0927 01:45:27.377123   69333 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0927 01:45:27.386897   69333 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0927 01:45:27.387959   69333 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0927 01:45:27.388954   69333 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0927 01:45:27.392182   69333 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0927 01:46:07.393548   69333 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0927 01:46:07.394304   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:46:07.394505   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:46:12.395176   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:46:12.395434   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:46:22.395858   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:46:22.396073   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:46:42.397038   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:46:42.397331   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:47:22.398756   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:47:22.399035   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:47:22.399051   69333 kubeadm.go:310] 
	I0927 01:47:22.399125   69333 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0927 01:47:22.399167   69333 kubeadm.go:310] 		timed out waiting for the condition
	I0927 01:47:22.399176   69333 kubeadm.go:310] 
	I0927 01:47:22.399242   69333 kubeadm.go:310] 	This error is likely caused by:
	I0927 01:47:22.399326   69333 kubeadm.go:310] 		- The kubelet is not running
	I0927 01:47:22.399452   69333 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0927 01:47:22.399464   69333 kubeadm.go:310] 
	I0927 01:47:22.399627   69333 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0927 01:47:22.399702   69333 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0927 01:47:22.399750   69333 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0927 01:47:22.399763   69333 kubeadm.go:310] 
	I0927 01:47:22.399908   69333 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0927 01:47:22.400001   69333 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0927 01:47:22.400014   69333 kubeadm.go:310] 
	I0927 01:47:22.400109   69333 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0927 01:47:22.400218   69333 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0927 01:47:22.400331   69333 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0927 01:47:22.400406   69333 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0927 01:47:22.400414   69333 kubeadm.go:310] 
	I0927 01:47:22.401157   69333 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0927 01:47:22.401273   69333 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0927 01:47:22.401342   69333 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0927 01:47:22.401458   69333 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0927 01:47:22.401498   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0927 01:47:22.863316   69333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 01:47:22.878664   69333 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 01:47:22.889118   69333 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 01:47:22.889135   69333 kubeadm.go:157] found existing configuration files:
	
	I0927 01:47:22.889173   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 01:47:22.898966   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 01:47:22.899035   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 01:47:22.911280   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 01:47:22.920628   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 01:47:22.920677   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 01:47:22.929860   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 01:47:22.938794   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 01:47:22.938839   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 01:47:22.947982   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 01:47:22.956785   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 01:47:22.956837   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 01:47:22.966186   69333 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0927 01:47:23.039915   69333 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0927 01:47:23.040017   69333 kubeadm.go:310] [preflight] Running pre-flight checks
	I0927 01:47:23.189097   69333 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0927 01:47:23.189274   69333 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0927 01:47:23.189395   69333 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0927 01:47:23.400731   69333 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0927 01:47:23.402659   69333 out.go:235]   - Generating certificates and keys ...
	I0927 01:47:23.402776   69333 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0927 01:47:23.402855   69333 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0927 01:47:23.402959   69333 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0927 01:47:23.403040   69333 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0927 01:47:23.403162   69333 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0927 01:47:23.403349   69333 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0927 01:47:23.403935   69333 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0927 01:47:23.404260   69333 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0927 01:47:23.404563   69333 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0927 01:47:23.404896   69333 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0927 01:47:23.405050   69333 kubeadm.go:310] [certs] Using the existing "sa" key
	I0927 01:47:23.405121   69333 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0927 01:47:23.466908   69333 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0927 01:47:23.717009   69333 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0927 01:47:23.766225   69333 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0927 01:47:23.961488   69333 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0927 01:47:23.987846   69333 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0927 01:47:23.988724   69333 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0927 01:47:23.988790   69333 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0927 01:47:24.130550   69333 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0927 01:47:24.132276   69333 out.go:235]   - Booting up control plane ...
	I0927 01:47:24.132386   69333 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0927 01:47:24.146415   69333 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0927 01:47:24.147664   69333 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0927 01:47:24.148443   69333 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0927 01:47:24.151623   69333 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0927 01:48:04.153587   69333 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0927 01:48:04.153934   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:48:04.154129   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:48:09.154634   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:48:09.154883   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:48:19.155638   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:48:19.155844   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:48:39.156224   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:48:39.156429   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:49:19.155507   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:49:19.155754   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:49:19.155779   69333 kubeadm.go:310] 
	I0927 01:49:19.155872   69333 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0927 01:49:19.155947   69333 kubeadm.go:310] 		timed out waiting for the condition
	I0927 01:49:19.155958   69333 kubeadm.go:310] 
	I0927 01:49:19.156026   69333 kubeadm.go:310] 	This error is likely caused by:
	I0927 01:49:19.156077   69333 kubeadm.go:310] 		- The kubelet is not running
	I0927 01:49:19.156230   69333 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0927 01:49:19.156242   69333 kubeadm.go:310] 
	I0927 01:49:19.156379   69333 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0927 01:49:19.156434   69333 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0927 01:49:19.156486   69333 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0927 01:49:19.156506   69333 kubeadm.go:310] 
	I0927 01:49:19.156628   69333 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0927 01:49:19.156756   69333 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0927 01:49:19.156775   69333 kubeadm.go:310] 
	I0927 01:49:19.156925   69333 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0927 01:49:19.157022   69333 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0927 01:49:19.157112   69333 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0927 01:49:19.157191   69333 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0927 01:49:19.157202   69333 kubeadm.go:310] 
	I0927 01:49:19.158023   69333 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0927 01:49:19.158149   69333 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0927 01:49:19.158277   69333 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0927 01:49:19.158357   69333 kubeadm.go:394] duration metric: took 7m56.829434682s to StartCluster
	I0927 01:49:19.158404   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:49:19.158477   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:49:19.200705   69333 cri.go:89] found id: ""
	I0927 01:49:19.200729   69333 logs.go:276] 0 containers: []
	W0927 01:49:19.200736   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:49:19.200742   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:49:19.200791   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:49:19.240252   69333 cri.go:89] found id: ""
	I0927 01:49:19.240274   69333 logs.go:276] 0 containers: []
	W0927 01:49:19.240285   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:49:19.240292   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:49:19.240352   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:49:19.275802   69333 cri.go:89] found id: ""
	I0927 01:49:19.275826   69333 logs.go:276] 0 containers: []
	W0927 01:49:19.275834   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:49:19.275840   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:49:19.275894   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:49:19.309317   69333 cri.go:89] found id: ""
	I0927 01:49:19.309342   69333 logs.go:276] 0 containers: []
	W0927 01:49:19.309350   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:49:19.309357   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:49:19.309414   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:49:19.344778   69333 cri.go:89] found id: ""
	I0927 01:49:19.344806   69333 logs.go:276] 0 containers: []
	W0927 01:49:19.344817   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:49:19.344823   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:49:19.344882   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:49:19.379394   69333 cri.go:89] found id: ""
	I0927 01:49:19.379426   69333 logs.go:276] 0 containers: []
	W0927 01:49:19.379438   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:49:19.379445   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:49:19.379502   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:49:19.415349   69333 cri.go:89] found id: ""
	I0927 01:49:19.415376   69333 logs.go:276] 0 containers: []
	W0927 01:49:19.415384   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:49:19.415390   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:49:19.415438   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:49:19.453357   69333 cri.go:89] found id: ""
	I0927 01:49:19.453381   69333 logs.go:276] 0 containers: []
	W0927 01:49:19.453389   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:49:19.453397   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:49:19.453409   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:49:19.530384   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:49:19.530405   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:49:19.530423   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:49:19.643418   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:49:19.643453   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:49:19.688825   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:49:19.688861   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:49:19.745945   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:49:19.745983   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0927 01:49:19.762685   69333 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0927 01:49:19.762739   69333 out.go:270] * 
	* 
	W0927 01:49:19.762791   69333 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0927 01:49:19.762804   69333 out.go:270] * 
	* 
	W0927 01:49:19.763605   69333 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0927 01:49:19.767393   69333 out.go:201] 
	W0927 01:49:19.768622   69333 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0927 01:49:19.768671   69333 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0927 01:49:19.768690   69333 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0927 01:49:19.771036   69333 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-612261 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
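Editor's note (not part of the captured output): the failure above is the kubelet never answering on :10248, and the report's own suggestion lines point at a cgroup-driver mismatch. The sketch below only restates, in runnable form, the triage commands already quoted in the log (systemctl/journalctl/crictl) and the retry flag minikube itself suggests; the profile name, socket path, and version flags are taken verbatim from the output above, and `minikube ssh -p` is used here only as one assumed way to reach the node.

```bash
# Hedged triage sketch, using only commands the report itself suggests.
# Reach the node first, e.g.: minikube ssh -p old-k8s-version-612261

systemctl status kubelet        # is the kubelet service running at all?
journalctl -xeu kubelet         # look for cgroup-driver / config errors

# List any control-plane containers CRI-O managed to start (from the kubeadm hint):
crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

# If the kubelet logs show a cgroup-driver mismatch, retry with the flag the
# report suggests (same args as the failed run, plus --extra-config):
minikube start -p old-k8s-version-612261 --memory=2200 --driver=kvm2 \
  --container-runtime=crio --kubernetes-version=v1.20.0 \
  --extra-config=kubelet.cgroup-driver=systemd
```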
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-612261 -n old-k8s-version-612261
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-612261 -n old-k8s-version-612261: exit status 2 (225.285709ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-612261 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-612261 logs -n 25: (1.670626264s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p NoKubernetes-719096 sudo                            | NoKubernetes-719096          | jenkins | v1.34.0 | 27 Sep 24 01:32 UTC |                     |
	|         | systemctl is-active --quiet                            |                              |         |         |                     |                     |
	|         | service kubelet                                        |                              |         |         |                     |                     |
	| stop    | -p NoKubernetes-719096                                 | NoKubernetes-719096          | jenkins | v1.34.0 | 27 Sep 24 01:32 UTC | 27 Sep 24 01:32 UTC |
	| start   | -p NoKubernetes-719096                                 | NoKubernetes-719096          | jenkins | v1.34.0 | 27 Sep 24 01:32 UTC | 27 Sep 24 01:33 UTC |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| ssh     | -p NoKubernetes-719096 sudo                            | NoKubernetes-719096          | jenkins | v1.34.0 | 27 Sep 24 01:33 UTC |                     |
	|         | systemctl is-active --quiet                            |                              |         |         |                     |                     |
	|         | service kubelet                                        |                              |         |         |                     |                     |
	| delete  | -p NoKubernetes-719096                                 | NoKubernetes-719096          | jenkins | v1.34.0 | 27 Sep 24 01:33 UTC | 27 Sep 24 01:33 UTC |
	| start   | -p embed-certs-245911                                  | embed-certs-245911           | jenkins | v1.34.0 | 27 Sep 24 01:33 UTC | 27 Sep 24 01:34 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-521072             | no-preload-521072            | jenkins | v1.34.0 | 27 Sep 24 01:33 UTC | 27 Sep 24 01:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-521072                                   | no-preload-521072            | jenkins | v1.34.0 | 27 Sep 24 01:33 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-595331                              | cert-expiration-595331       | jenkins | v1.34.0 | 27 Sep 24 01:33 UTC | 27 Sep 24 01:33 UTC |
	| delete  | -p                                                     | disable-driver-mounts-630210 | jenkins | v1.34.0 | 27 Sep 24 01:33 UTC | 27 Sep 24 01:33 UTC |
	|         | disable-driver-mounts-630210                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-368295 | jenkins | v1.34.0 | 27 Sep 24 01:33 UTC | 27 Sep 24 01:35 UTC |
	|         | default-k8s-diff-port-368295                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-245911            | embed-certs-245911           | jenkins | v1.34.0 | 27 Sep 24 01:34 UTC | 27 Sep 24 01:34 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-245911                                  | embed-certs-245911           | jenkins | v1.34.0 | 27 Sep 24 01:34 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-368295  | default-k8s-diff-port-368295 | jenkins | v1.34.0 | 27 Sep 24 01:35 UTC | 27 Sep 24 01:35 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-368295 | jenkins | v1.34.0 | 27 Sep 24 01:35 UTC |                     |
	|         | default-k8s-diff-port-368295                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-521072                  | no-preload-521072            | jenkins | v1.34.0 | 27 Sep 24 01:35 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-612261        | old-k8s-version-612261       | jenkins | v1.34.0 | 27 Sep 24 01:35 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p no-preload-521072                                   | no-preload-521072            | jenkins | v1.34.0 | 27 Sep 24 01:35 UTC | 27 Sep 24 01:46 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-245911                 | embed-certs-245911           | jenkins | v1.34.0 | 27 Sep 24 01:37 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-612261                              | old-k8s-version-612261       | jenkins | v1.34.0 | 27 Sep 24 01:37 UTC | 27 Sep 24 01:37 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-245911                                  | embed-certs-245911           | jenkins | v1.34.0 | 27 Sep 24 01:37 UTC | 27 Sep 24 01:46 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-612261             | old-k8s-version-612261       | jenkins | v1.34.0 | 27 Sep 24 01:37 UTC | 27 Sep 24 01:37 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-612261                              | old-k8s-version-612261       | jenkins | v1.34.0 | 27 Sep 24 01:37 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-368295       | default-k8s-diff-port-368295 | jenkins | v1.34.0 | 27 Sep 24 01:37 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-368295 | jenkins | v1.34.0 | 27 Sep 24 01:37 UTC | 27 Sep 24 01:46 UTC |
	|         | default-k8s-diff-port-368295                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
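	For reference, the last row above wraps one command across several table rows; reassembled into a single invocation (illustrative only; the binary path comes from the MINIKUBE_BIN value shown in the log below), it reads:
	
	  out/minikube-linux-amd64 start -p default-k8s-diff-port-368295 \
	    --memory=2200 --alsologtostderr --wait=true \
	    --apiserver-port=8444 --driver=kvm2 \
	    --container-runtime=crio --kubernetes-version=v1.31.1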
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 01:37:48
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
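	The format line above is the standard klog header. A minimal sketch in Go for splitting such entries when grepping these reports (the regexp and field names are illustrative, not part of the minikube tooling):
	
	package main
	
	import (
		"fmt"
		"regexp"
	)
	
	// Matches the header documented above: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) (\S+:\d+)\] (.*)$`)
	
	func main() {
		sample := "I0927 01:37:48.335921   69534 out.go:345] Setting OutFile to fd 1 ..."
		if m := klogLine.FindStringSubmatch(sample); m != nil {
			fmt.Printf("level=%s date=%s time=%s pid=%s src=%s msg=%q\n", m[1], m[2], m[3], m[4], m[5], m[6])
		}
	}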
	I0927 01:37:48.335921   69534 out.go:345] Setting OutFile to fd 1 ...
	I0927 01:37:48.336188   69534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 01:37:48.336196   69534 out.go:358] Setting ErrFile to fd 2...
	I0927 01:37:48.336201   69534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 01:37:48.336368   69534 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-14935/.minikube/bin
	I0927 01:37:48.336901   69534 out.go:352] Setting JSON to false
	I0927 01:37:48.337754   69534 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8413,"bootTime":1727392655,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 01:37:48.337841   69534 start.go:139] virtualization: kvm guest
	I0927 01:37:48.340035   69534 out.go:177] * [default-k8s-diff-port-368295] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0927 01:37:48.341151   69534 out.go:177]   - MINIKUBE_LOCATION=19711
	I0927 01:37:48.341211   69534 notify.go:220] Checking for updates...
	I0927 01:37:48.343607   69534 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 01:37:48.344933   69534 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 01:37:48.346113   69534 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-14935/.minikube
	I0927 01:37:48.347142   69534 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0927 01:37:48.348261   69534 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 01:37:48.349842   69534 config.go:182] Loaded profile config "default-k8s-diff-port-368295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 01:37:48.350212   69534 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:37:48.350278   69534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:37:48.365272   69534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44347
	I0927 01:37:48.365662   69534 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:37:48.366137   69534 main.go:141] libmachine: Using API Version  1
	I0927 01:37:48.366162   69534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:37:48.366548   69534 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:37:48.366713   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:37:48.366938   69534 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 01:37:48.367236   69534 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:37:48.367265   69534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:37:48.381678   69534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39857
	I0927 01:37:48.382169   69534 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:37:48.382627   69534 main.go:141] libmachine: Using API Version  1
	I0927 01:37:48.382650   69534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:37:48.382911   69534 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:37:48.383023   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:37:48.415092   69534 out.go:177] * Using the kvm2 driver based on existing profile
	I0927 01:37:48.416340   69534 start.go:297] selected driver: kvm2
	I0927 01:37:48.416354   69534 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-368295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-368295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.83 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 01:37:48.416459   69534 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 01:37:48.417093   69534 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 01:37:48.417164   69534 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19711-14935/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0927 01:37:48.432138   69534 install.go:137] /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I0927 01:37:48.432534   69534 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 01:37:48.432563   69534 cni.go:84] Creating CNI manager for ""
	I0927 01:37:48.432604   69534 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:37:48.432635   69534 start.go:340] cluster config:
	{Name:default-k8s-diff-port-368295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-368295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.83 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 01:37:48.432737   69534 iso.go:125] acquiring lock: {Name:mkc202a14fbe20838e31e7efc444c4f65351f9ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 01:37:48.435057   69534 out.go:177] * Starting "default-k8s-diff-port-368295" primary control-plane node in "default-k8s-diff-port-368295" cluster
	I0927 01:37:48.436502   69534 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 01:37:48.436543   69534 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0927 01:37:48.436557   69534 cache.go:56] Caching tarball of preloaded images
	I0927 01:37:48.436624   69534 preload.go:172] Found /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0927 01:37:48.436634   69534 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0927 01:37:48.436718   69534 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/config.json ...
	I0927 01:37:48.436885   69534 start.go:360] acquireMachinesLock for default-k8s-diff-port-368295: {Name:mkef866a3f2eb57063758ef06140cca3a2e40b18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 01:37:50.823565   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:37:53.895575   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:37:59.975554   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:03.047567   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:09.127558   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:12.199592   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:18.279516   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:21.351643   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:27.435515   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:30.503604   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:36.583590   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:39.655593   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:45.735581   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:48.807587   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:54.887542   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:57.959601   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:04.039570   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:07.111555   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:13.191559   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:16.263625   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:22.343607   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:25.415561   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:31.495531   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:34.567598   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:40.647577   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:43.719602   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:49.799620   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:52.871596   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:58.951600   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:40:02.023635   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:40:08.103596   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:40:11.175614   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:40:17.255583   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:40:20.327522   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:40:26.407598   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:40:29.479580   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:40:32.484148   69234 start.go:364] duration metric: took 3m6.827897292s to acquireMachinesLock for "embed-certs-245911"
	I0927 01:40:32.484202   69234 start.go:96] Skipping create...Using existing machine configuration
	I0927 01:40:32.484210   69234 fix.go:54] fixHost starting: 
	I0927 01:40:32.484708   69234 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:40:32.484758   69234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:40:32.500356   69234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41925
	I0927 01:40:32.500869   69234 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:40:32.501356   69234 main.go:141] libmachine: Using API Version  1
	I0927 01:40:32.501376   69234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:40:32.501678   69234 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:40:32.501872   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	I0927 01:40:32.502014   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetState
	I0927 01:40:32.503863   69234 fix.go:112] recreateIfNeeded on embed-certs-245911: state=Stopped err=<nil>
	I0927 01:40:32.503884   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	W0927 01:40:32.504047   69234 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 01:40:32.506829   69234 out.go:177] * Restarting existing kvm2 VM for "embed-certs-245911" ...
	I0927 01:40:32.481407   68676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 01:40:32.481445   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetMachineName
	I0927 01:40:32.481786   68676 buildroot.go:166] provisioning hostname "no-preload-521072"
	I0927 01:40:32.481815   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetMachineName
	I0927 01:40:32.482031   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:40:32.483999   68676 machine.go:96] duration metric: took 4m37.428764548s to provisionDockerMachine
	I0927 01:40:32.484048   68676 fix.go:56] duration metric: took 4m37.449461246s for fixHost
	I0927 01:40:32.484055   68676 start.go:83] releasing machines lock for "no-preload-521072", held for 4m37.449534693s
	W0927 01:40:32.484075   68676 start.go:714] error starting host: provision: host is not running
	W0927 01:40:32.484176   68676 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0927 01:40:32.484183   68676 start.go:729] Will try again in 5 seconds ...
	I0927 01:40:32.508417   69234 main.go:141] libmachine: (embed-certs-245911) Calling .Start
	I0927 01:40:32.508598   69234 main.go:141] libmachine: (embed-certs-245911) Ensuring networks are active...
	I0927 01:40:32.509477   69234 main.go:141] libmachine: (embed-certs-245911) Ensuring network default is active
	I0927 01:40:32.509830   69234 main.go:141] libmachine: (embed-certs-245911) Ensuring network mk-embed-certs-245911 is active
	I0927 01:40:32.510208   69234 main.go:141] libmachine: (embed-certs-245911) Getting domain xml...
	I0927 01:40:32.510838   69234 main.go:141] libmachine: (embed-certs-245911) Creating domain...
	I0927 01:40:33.718381   69234 main.go:141] libmachine: (embed-certs-245911) Waiting to get IP...
	I0927 01:40:33.719223   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:33.719554   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:33.719611   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:33.719550   70125 retry.go:31] will retry after 265.21442ms: waiting for machine to come up
	I0927 01:40:33.986199   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:33.986700   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:33.986728   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:33.986658   70125 retry.go:31] will retry after 308.926274ms: waiting for machine to come up
	I0927 01:40:34.297317   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:34.297734   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:34.297755   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:34.297697   70125 retry.go:31] will retry after 466.52815ms: waiting for machine to come up
	I0927 01:40:34.765171   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:34.765616   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:34.765643   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:34.765570   70125 retry.go:31] will retry after 510.417499ms: waiting for machine to come up
	I0927 01:40:35.277175   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:35.277547   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:35.277576   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:35.277488   70125 retry.go:31] will retry after 522.865286ms: waiting for machine to come up
	I0927 01:40:37.485696   68676 start.go:360] acquireMachinesLock for no-preload-521072: {Name:mkef866a3f2eb57063758ef06140cca3a2e40b18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 01:40:35.802177   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:35.802620   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:35.802646   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:35.802584   70125 retry.go:31] will retry after 611.490499ms: waiting for machine to come up
	I0927 01:40:36.415249   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:36.415733   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:36.415793   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:36.415709   70125 retry.go:31] will retry after 744.420766ms: waiting for machine to come up
	I0927 01:40:37.161647   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:37.162076   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:37.162112   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:37.162022   70125 retry.go:31] will retry after 1.464523837s: waiting for machine to come up
	I0927 01:40:38.627935   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:38.628275   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:38.628302   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:38.628237   70125 retry.go:31] will retry after 1.840524237s: waiting for machine to come up
	I0927 01:40:40.471433   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:40.471823   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:40.471851   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:40.471781   70125 retry.go:31] will retry after 1.9424331s: waiting for machine to come up
	I0927 01:40:42.416527   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:42.416978   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:42.417007   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:42.416935   70125 retry.go:31] will retry after 2.553410529s: waiting for machine to come up
	I0927 01:40:44.973083   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:44.973446   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:44.973465   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:44.973402   70125 retry.go:31] will retry after 3.286267983s: waiting for machine to come up
	I0927 01:40:48.260792   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:48.261216   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:48.261241   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:48.261179   70125 retry.go:31] will retry after 3.302667041s: waiting for machine to come up
	I0927 01:40:52.800240   69333 start.go:364] duration metric: took 3m25.347970249s to acquireMachinesLock for "old-k8s-version-612261"
	I0927 01:40:52.800310   69333 start.go:96] Skipping create...Using existing machine configuration
	I0927 01:40:52.800317   69333 fix.go:54] fixHost starting: 
	I0927 01:40:52.800742   69333 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:40:52.800800   69333 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:40:52.818217   69333 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45095
	I0927 01:40:52.818644   69333 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:40:52.819065   69333 main.go:141] libmachine: Using API Version  1
	I0927 01:40:52.819086   69333 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:40:52.819408   69333 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:40:52.819544   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	I0927 01:40:52.819646   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetState
	I0927 01:40:52.820921   69333 fix.go:112] recreateIfNeeded on old-k8s-version-612261: state=Stopped err=<nil>
	I0927 01:40:52.820956   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	W0927 01:40:52.821110   69333 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 01:40:52.823209   69333 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-612261" ...
	I0927 01:40:51.567691   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.568205   69234 main.go:141] libmachine: (embed-certs-245911) Found IP for machine: 192.168.39.158
	I0927 01:40:51.568241   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has current primary IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.568250   69234 main.go:141] libmachine: (embed-certs-245911) Reserving static IP address...
	I0927 01:40:51.568731   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "embed-certs-245911", mac: "52:54:00:bd:42:a3", ip: "192.168.39.158"} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:51.568764   69234 main.go:141] libmachine: (embed-certs-245911) DBG | skip adding static IP to network mk-embed-certs-245911 - found existing host DHCP lease matching {name: "embed-certs-245911", mac: "52:54:00:bd:42:a3", ip: "192.168.39.158"}
	I0927 01:40:51.568781   69234 main.go:141] libmachine: (embed-certs-245911) Reserved static IP address: 192.168.39.158
	I0927 01:40:51.568798   69234 main.go:141] libmachine: (embed-certs-245911) Waiting for SSH to be available...
	I0927 01:40:51.568806   69234 main.go:141] libmachine: (embed-certs-245911) DBG | Getting to WaitForSSH function...
	I0927 01:40:51.570819   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.571139   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:51.571167   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.571321   69234 main.go:141] libmachine: (embed-certs-245911) DBG | Using SSH client type: external
	I0927 01:40:51.571370   69234 main.go:141] libmachine: (embed-certs-245911) DBG | Using SSH private key: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/embed-certs-245911/id_rsa (-rw-------)
	I0927 01:40:51.571401   69234 main.go:141] libmachine: (embed-certs-245911) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.158 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19711-14935/.minikube/machines/embed-certs-245911/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 01:40:51.571414   69234 main.go:141] libmachine: (embed-certs-245911) DBG | About to run SSH command:
	I0927 01:40:51.571422   69234 main.go:141] libmachine: (embed-certs-245911) DBG | exit 0
	I0927 01:40:51.691525   69234 main.go:141] libmachine: (embed-certs-245911) DBG | SSH cmd err, output: <nil>: 
	I0927 01:40:51.691953   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetConfigRaw
	I0927 01:40:51.692573   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetIP
	I0927 01:40:51.695121   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.695541   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:51.695572   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.695871   69234 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/embed-certs-245911/config.json ...
	I0927 01:40:51.696087   69234 machine.go:93] provisionDockerMachine start ...
	I0927 01:40:51.696109   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	I0927 01:40:51.696312   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:40:51.698740   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.699086   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:51.699112   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.699229   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:40:51.699415   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:51.699552   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:51.699679   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:40:51.699810   69234 main.go:141] libmachine: Using SSH client type: native
	I0927 01:40:51.699998   69234 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0927 01:40:51.700011   69234 main.go:141] libmachine: About to run SSH command:
	hostname
	I0927 01:40:51.799534   69234 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0927 01:40:51.799559   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetMachineName
	I0927 01:40:51.799764   69234 buildroot.go:166] provisioning hostname "embed-certs-245911"
	I0927 01:40:51.799792   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetMachineName
	I0927 01:40:51.799987   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:40:51.802464   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.802819   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:51.802844   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.802960   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:40:51.803131   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:51.803290   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:51.803502   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:40:51.803672   69234 main.go:141] libmachine: Using SSH client type: native
	I0927 01:40:51.803868   69234 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0927 01:40:51.803888   69234 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-245911 && echo "embed-certs-245911" | sudo tee /etc/hostname
	I0927 01:40:51.917988   69234 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-245911
	
	I0927 01:40:51.918019   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:40:51.920484   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.920800   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:51.920831   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.921041   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:40:51.921224   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:51.921383   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:51.921511   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:40:51.921693   69234 main.go:141] libmachine: Using SSH client type: native
	I0927 01:40:51.921883   69234 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0927 01:40:51.921901   69234 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-245911' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-245911/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-245911' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 01:40:52.028582   69234 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 01:40:52.028609   69234 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19711-14935/.minikube CaCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19711-14935/.minikube}
	I0927 01:40:52.028672   69234 buildroot.go:174] setting up certificates
	I0927 01:40:52.028686   69234 provision.go:84] configureAuth start
	I0927 01:40:52.028704   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetMachineName
	I0927 01:40:52.029001   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetIP
	I0927 01:40:52.031742   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.032088   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:52.032117   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.032273   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:40:52.034392   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.034733   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:52.034754   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.034905   69234 provision.go:143] copyHostCerts
	I0927 01:40:52.034956   69234 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem, removing ...
	I0927 01:40:52.034969   69234 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem
	I0927 01:40:52.035042   69234 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem (1078 bytes)
	I0927 01:40:52.035172   69234 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem, removing ...
	I0927 01:40:52.035185   69234 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem
	I0927 01:40:52.035224   69234 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem (1123 bytes)
	I0927 01:40:52.035319   69234 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem, removing ...
	I0927 01:40:52.035329   69234 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem
	I0927 01:40:52.035363   69234 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem (1675 bytes)
	I0927 01:40:52.035433   69234 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem org=jenkins.embed-certs-245911 san=[127.0.0.1 192.168.39.158 embed-certs-245911 localhost minikube]
	I0927 01:40:52.206591   69234 provision.go:177] copyRemoteCerts
	I0927 01:40:52.206657   69234 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 01:40:52.206724   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:40:52.209445   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.209770   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:52.209792   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.209995   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:40:52.210234   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:52.210416   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:40:52.210578   69234 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/embed-certs-245911/id_rsa Username:docker}
	I0927 01:40:52.290176   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0927 01:40:52.313645   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0927 01:40:52.336446   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0927 01:40:52.359182   69234 provision.go:87] duration metric: took 330.481958ms to configureAuth
	I0927 01:40:52.359214   69234 buildroot.go:189] setting minikube options for container-runtime
	I0927 01:40:52.359464   69234 config.go:182] Loaded profile config "embed-certs-245911": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 01:40:52.359551   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:40:52.362163   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.362488   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:52.362513   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.362670   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:40:52.362826   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:52.362976   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:52.363133   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:40:52.363334   69234 main.go:141] libmachine: Using SSH client type: native
	I0927 01:40:52.363532   69234 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0927 01:40:52.363553   69234 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 01:40:52.574326   69234 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 01:40:52.574354   69234 machine.go:96] duration metric: took 878.253718ms to provisionDockerMachine
	I0927 01:40:52.574368   69234 start.go:293] postStartSetup for "embed-certs-245911" (driver="kvm2")
	I0927 01:40:52.574381   69234 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 01:40:52.574398   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	I0927 01:40:52.574688   69234 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 01:40:52.574714   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:40:52.577727   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.578035   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:52.578060   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.578227   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:40:52.578411   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:52.578555   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:40:52.578735   69234 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/embed-certs-245911/id_rsa Username:docker}
	I0927 01:40:52.658636   69234 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 01:40:52.663048   69234 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 01:40:52.663077   69234 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/addons for local assets ...
	I0927 01:40:52.663147   69234 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/files for local assets ...
	I0927 01:40:52.663223   69234 filesync.go:149] local asset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> 221382.pem in /etc/ssl/certs
	I0927 01:40:52.663322   69234 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 01:40:52.673347   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /etc/ssl/certs/221382.pem (1708 bytes)
	I0927 01:40:52.697092   69234 start.go:296] duration metric: took 122.71069ms for postStartSetup
	I0927 01:40:52.697126   69234 fix.go:56] duration metric: took 20.212915975s for fixHost
	I0927 01:40:52.697145   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:40:52.699817   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.700173   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:52.700202   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.700364   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:40:52.700558   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:52.700735   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:52.700921   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:40:52.701097   69234 main.go:141] libmachine: Using SSH client type: native
	I0927 01:40:52.701269   69234 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0927 01:40:52.701285   69234 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 01:40:52.800080   69234 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727401252.775762391
	
	I0927 01:40:52.800102   69234 fix.go:216] guest clock: 1727401252.775762391
	I0927 01:40:52.800111   69234 fix.go:229] Guest: 2024-09-27 01:40:52.775762391 +0000 UTC Remote: 2024-09-27 01:40:52.697129165 +0000 UTC m=+207.179045808 (delta=78.633226ms)
	I0927 01:40:52.800145   69234 fix.go:200] guest clock delta is within tolerance: 78.633226ms
	I0927 01:40:52.800152   69234 start.go:83] releasing machines lock for "embed-certs-245911", held for 20.315972034s
	I0927 01:40:52.800183   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	I0927 01:40:52.800495   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetIP
	I0927 01:40:52.803196   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.803657   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:52.803700   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.803874   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	I0927 01:40:52.804419   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	I0927 01:40:52.804610   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	I0927 01:40:52.804731   69234 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 01:40:52.804771   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:40:52.804813   69234 ssh_runner.go:195] Run: cat /version.json
	I0927 01:40:52.804837   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:40:52.807320   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.807346   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.807680   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:52.807731   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:52.807759   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.807807   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.807916   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:40:52.808070   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:40:52.808150   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:52.808262   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:52.808331   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:40:52.808384   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:40:52.808468   69234 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/embed-certs-245911/id_rsa Username:docker}
	I0927 01:40:52.808522   69234 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/embed-certs-245911/id_rsa Username:docker}
	I0927 01:40:52.908963   69234 ssh_runner.go:195] Run: systemctl --version
	I0927 01:40:52.915158   69234 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 01:40:53.067605   69234 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 01:40:53.074171   69234 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 01:40:53.074241   69234 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 01:40:53.091718   69234 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0927 01:40:53.091742   69234 start.go:495] detecting cgroup driver to use...
	I0927 01:40:53.091813   69234 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 01:40:53.108730   69234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 01:40:53.122920   69234 docker.go:217] disabling cri-docker service (if available) ...
	I0927 01:40:53.122984   69234 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 01:40:53.137487   69234 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 01:40:53.152420   69234 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 01:40:53.269491   69234 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 01:40:53.417893   69234 docker.go:233] disabling docker service ...
	I0927 01:40:53.417951   69234 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 01:40:53.442201   69234 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 01:40:53.459920   69234 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 01:40:53.589768   69234 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 01:40:53.719203   69234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 01:40:53.733145   69234 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 01:40:53.751853   69234 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0927 01:40:53.751919   69234 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:40:53.763230   69234 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 01:40:53.763294   69234 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:40:53.774864   69234 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:40:53.786149   69234 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:40:53.797167   69234 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 01:40:53.808495   69234 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:40:53.819285   69234 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:40:53.838497   69234 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:40:53.850490   69234 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 01:40:53.860309   69234 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0927 01:40:53.860377   69234 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0927 01:40:53.875533   69234 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 01:40:53.885752   69234 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:40:54.014352   69234 ssh_runner.go:195] Run: sudo systemctl restart crio
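The block above is the container-runtime setup for embed-certs-245911: minikube stops and masks cri-docker and docker over SSH, points crictl at the CRI-O socket, rewrites /etc/crio/crio.conf.d/02-crio.conf, loads br_netfilter, and restarts CRI-O. Condensed into the underlying shell it amounts to the following (commands taken from the log lines above; a sketch of what this run did, not minikube's implementation):

    # point crictl at the CRI-O socket
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml

    # use the expected pause image and the cgroupfs cgroup manager
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf

    # the sysctl probe above failed only because br_netfilter was not loaded yet
    sudo modprobe br_netfilter
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'

    # pick up the new configuration
    sudo systemctl daemon-reload
    sudo systemctl restart crio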
	I0927 01:40:54.107866   69234 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 01:40:54.107926   69234 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 01:40:54.113206   69234 start.go:563] Will wait 60s for crictl version
	I0927 01:40:54.113256   69234 ssh_runner.go:195] Run: which crictl
	I0927 01:40:54.117229   69234 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 01:40:54.156365   69234 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0927 01:40:54.156459   69234 ssh_runner.go:195] Run: crio --version
	I0927 01:40:54.183974   69234 ssh_runner.go:195] Run: crio --version
	I0927 01:40:54.214440   69234 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0927 01:40:54.215714   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetIP
	I0927 01:40:54.218624   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:54.218975   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:54.219013   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:54.219180   69234 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0927 01:40:54.223450   69234 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
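The /etc/hosts edit at 01:40:54.223 works by filtering out any existing host.minikube.internal line with grep -v, appending the fresh mapping to a temp file, and copying it back with sudo cp (a plain redirect would not run with root's permissions); the same pattern is reused later for control-plane.minikube.internal. As a standalone sketch of that one-liner:

    # refresh the host.minikube.internal entry without duplicating it
    { grep -v $'\thost.minikube.internal$' /etc/hosts; \
      echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts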
	I0927 01:40:54.236761   69234 kubeadm.go:883] updating cluster {Name:embed-certs-245911 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.1 ClusterName:embed-certs-245911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.158 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 01:40:54.236923   69234 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 01:40:54.236989   69234 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 01:40:54.276635   69234 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0927 01:40:54.276708   69234 ssh_runner.go:195] Run: which lz4
	I0927 01:40:54.281055   69234 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0927 01:40:54.285439   69234 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0927 01:40:54.285472   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0927 01:40:52.824650   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .Start
	I0927 01:40:52.824802   69333 main.go:141] libmachine: (old-k8s-version-612261) Ensuring networks are active...
	I0927 01:40:52.825590   69333 main.go:141] libmachine: (old-k8s-version-612261) Ensuring network default is active
	I0927 01:40:52.825908   69333 main.go:141] libmachine: (old-k8s-version-612261) Ensuring network mk-old-k8s-version-612261 is active
	I0927 01:40:52.826326   69333 main.go:141] libmachine: (old-k8s-version-612261) Getting domain xml...
	I0927 01:40:52.827108   69333 main.go:141] libmachine: (old-k8s-version-612261) Creating domain...
	I0927 01:40:54.071322   69333 main.go:141] libmachine: (old-k8s-version-612261) Waiting to get IP...
	I0927 01:40:54.072357   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:40:54.072756   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:40:54.072821   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:40:54.072738   70279 retry.go:31] will retry after 264.648837ms: waiting for machine to come up
	I0927 01:40:54.339366   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:40:54.339799   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:40:54.339827   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:40:54.339731   70279 retry.go:31] will retry after 343.432635ms: waiting for machine to come up
	I0927 01:40:54.685260   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:40:54.685746   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:40:54.685780   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:40:54.685714   70279 retry.go:31] will retry after 455.276623ms: waiting for machine to come up
	I0927 01:40:55.142206   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:40:55.142679   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:40:55.142708   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:40:55.142637   70279 retry.go:31] will retry after 419.074502ms: waiting for machine to come up
	I0927 01:40:55.563324   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:40:55.565342   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:40:55.565368   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:40:55.565287   70279 retry.go:31] will retry after 587.161471ms: waiting for machine to come up
	I0927 01:40:56.154584   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:40:56.155182   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:40:56.155220   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:40:56.155109   70279 retry.go:31] will retry after 782.426926ms: waiting for machine to come up
	I0927 01:40:56.938784   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:40:56.939201   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:40:56.939228   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:40:56.939132   70279 retry.go:31] will retry after 781.231902ms: waiting for machine to come up
	I0927 01:40:55.723619   69234 crio.go:462] duration metric: took 1.442589436s to copy over tarball
	I0927 01:40:55.723705   69234 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0927 01:40:57.775673   69234 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.051936146s)
	I0927 01:40:57.775697   69234 crio.go:469] duration metric: took 2.052045538s to extract the tarball
	I0927 01:40:57.775704   69234 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0927 01:40:57.812769   69234 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 01:40:57.853219   69234 crio.go:514] all images are preloaded for cri-o runtime.
	I0927 01:40:57.853240   69234 cache_images.go:84] Images are preloaded, skipping loading
	I0927 01:40:57.853248   69234 kubeadm.go:934] updating node { 192.168.39.158 8443 v1.31.1 crio true true} ...
	I0927 01:40:57.853354   69234 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-245911 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.158
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-245911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 01:40:57.853495   69234 ssh_runner.go:195] Run: crio config
	I0927 01:40:57.908273   69234 cni.go:84] Creating CNI manager for ""
	I0927 01:40:57.908301   69234 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:40:57.908322   69234 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 01:40:57.908356   69234 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.158 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-245911 NodeName:embed-certs-245911 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.158"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.158 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0927 01:40:57.908542   69234 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.158
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-245911"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.158
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.158"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0927 01:40:57.908613   69234 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 01:40:57.918923   69234 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 01:40:57.919021   69234 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0927 01:40:57.928576   69234 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0927 01:40:57.945515   69234 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 01:40:57.962239   69234 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0927 01:40:57.979722   69234 ssh_runner.go:195] Run: grep 192.168.39.158	control-plane.minikube.internal$ /etc/hosts
	I0927 01:40:57.983709   69234 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.158	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 01:40:57.996181   69234 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:40:58.119502   69234 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 01:40:58.137022   69234 certs.go:68] Setting up /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/embed-certs-245911 for IP: 192.168.39.158
	I0927 01:40:58.137048   69234 certs.go:194] generating shared ca certs ...
	I0927 01:40:58.137068   69234 certs.go:226] acquiring lock for ca certs: {Name:mkdfc5b7e93f77f5ae72cc653545624244421aa5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:40:58.137250   69234 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key
	I0927 01:40:58.137312   69234 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key
	I0927 01:40:58.137324   69234 certs.go:256] generating profile certs ...
	I0927 01:40:58.137444   69234 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/embed-certs-245911/client.key
	I0927 01:40:58.137522   69234 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/embed-certs-245911/apiserver.key.e289c840
	I0927 01:40:58.137574   69234 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/embed-certs-245911/proxy-client.key
	I0927 01:40:58.137731   69234 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem (1338 bytes)
	W0927 01:40:58.137774   69234 certs.go:480] ignoring /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138_empty.pem, impossibly tiny 0 bytes
	I0927 01:40:58.137787   69234 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem (1671 bytes)
	I0927 01:40:58.137819   69234 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem (1078 bytes)
	I0927 01:40:58.137850   69234 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem (1123 bytes)
	I0927 01:40:58.137883   69234 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem (1675 bytes)
	I0927 01:40:58.137928   69234 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem (1708 bytes)
	I0927 01:40:58.138551   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 01:40:58.179399   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0927 01:40:58.211297   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 01:40:58.245549   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0927 01:40:58.276837   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/embed-certs-245911/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0927 01:40:58.313750   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/embed-certs-245911/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0927 01:40:58.338145   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/embed-certs-245911/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 01:40:58.361373   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/embed-certs-245911/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0927 01:40:58.384790   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 01:40:58.407617   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem --> /usr/share/ca-certificates/22138.pem (1338 bytes)
	I0927 01:40:58.430621   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /usr/share/ca-certificates/221382.pem (1708 bytes)
	I0927 01:40:58.453382   69234 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 01:40:58.470177   69234 ssh_runner.go:195] Run: openssl version
	I0927 01:40:58.476280   69234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221382.pem && ln -fs /usr/share/ca-certificates/221382.pem /etc/ssl/certs/221382.pem"
	I0927 01:40:58.489039   69234 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221382.pem
	I0927 01:40:58.493726   69234 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 00:32 /usr/share/ca-certificates/221382.pem
	I0927 01:40:58.493780   69234 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221382.pem
	I0927 01:40:58.499856   69234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221382.pem /etc/ssl/certs/3ec20f2e.0"
	I0927 01:40:58.511032   69234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 01:40:58.521694   69234 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:40:58.525991   69234 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:16 /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:40:58.526031   69234 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:40:58.531619   69234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 01:40:58.542017   69234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22138.pem && ln -fs /usr/share/ca-certificates/22138.pem /etc/ssl/certs/22138.pem"
	I0927 01:40:58.552591   69234 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22138.pem
	I0927 01:40:58.557047   69234 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 00:32 /usr/share/ca-certificates/22138.pem
	I0927 01:40:58.557086   69234 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22138.pem
	I0927 01:40:58.562874   69234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22138.pem /etc/ssl/certs/51391683.0"
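The certificate block above copies each CA into /usr/share/ca-certificates and then links it into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0 for minikubeCA.pem, 3ec20f2e.0 for 221382.pem, 51391683.0 for 22138.pem), which is how the guest's TLS stack locates trusted CAs. A sketch of the equivalent manual step for one certificate, using the same two commands the log runs:

    # the hash is what OpenSSL expects as the symlink name in /etc/ssl/certs
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"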
	I0927 01:40:58.574052   69234 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 01:40:58.578537   69234 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0927 01:40:58.584323   69234 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0927 01:40:58.590033   69234 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0927 01:40:58.596013   69234 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0927 01:40:58.601572   69234 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0927 01:40:58.606980   69234 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0927 01:40:58.612554   69234 kubeadm.go:392] StartCluster: {Name:embed-certs-245911 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:embed-certs-245911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.158 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 01:40:58.612648   69234 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0927 01:40:58.612704   69234 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 01:40:58.649228   69234 cri.go:89] found id: ""
	I0927 01:40:58.649306   69234 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0927 01:40:58.661599   69234 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0927 01:40:58.661628   69234 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0927 01:40:58.661688   69234 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0927 01:40:58.671907   69234 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0927 01:40:58.672851   69234 kubeconfig.go:125] found "embed-certs-245911" server: "https://192.168.39.158:8443"
	I0927 01:40:58.674753   69234 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0927 01:40:58.684614   69234 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.158
	I0927 01:40:58.684643   69234 kubeadm.go:1160] stopping kube-system containers ...
	I0927 01:40:58.684652   69234 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0927 01:40:58.684715   69234 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 01:40:58.726714   69234 cri.go:89] found id: ""
	I0927 01:40:58.726816   69234 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0927 01:40:58.743675   69234 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 01:40:58.753456   69234 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 01:40:58.753485   69234 kubeadm.go:157] found existing configuration files:
	
	I0927 01:40:58.753535   69234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 01:40:58.762724   69234 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 01:40:58.762821   69234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 01:40:58.772558   69234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 01:40:58.781732   69234 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 01:40:58.781790   69234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 01:40:58.791109   69234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 01:40:58.800066   69234 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 01:40:58.800127   69234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 01:40:58.809338   69234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 01:40:58.818214   69234 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 01:40:58.818260   69234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 01:40:58.828049   69234 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 01:40:58.837606   69234 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:40:58.942395   69234 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:40:59.758951   69234 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:40:59.966377   69234 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:00.036702   69234 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
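Because crictl found no kube-system containers and the /etc/kubernetes/*.conf files are missing, restartPrimaryControlPlane rebuilds the control plane from the rendered /var/tmp/minikube/kubeadm.yaml by running individual kubeadm init phases instead of a full init. Stripped of the sudo env wrapper, the five commands above are:

    # run against the kubeadm/kubelet binaries matching the cluster version
    export PATH=/var/lib/minikube/binaries/v1.31.1:$PATH
    kubeadm init phase certs all         --config /var/tmp/minikube/kubeadm.yaml
    kubeadm init phase kubeconfig all    --config /var/tmp/minikube/kubeadm.yaml
    kubeadm init phase kubelet-start     --config /var/tmp/minikube/kubeadm.yaml
    kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
    kubeadm init phase etcd local        --config /var/tmp/minikube/kubeadm.yaml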
	I0927 01:41:00.126663   69234 api_server.go:52] waiting for apiserver process to appear ...
	I0927 01:41:00.126743   69234 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:40:57.722147   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:40:57.722637   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:40:57.722657   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:40:57.722593   70279 retry.go:31] will retry after 1.223133601s: waiting for machine to come up
	I0927 01:40:58.947836   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:40:58.948362   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:40:58.948388   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:40:58.948326   70279 retry.go:31] will retry after 1.155368003s: waiting for machine to come up
	I0927 01:41:00.105812   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:00.106288   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:41:00.106356   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:41:00.106280   70279 retry.go:31] will retry after 2.324904017s: waiting for machine to come up
	I0927 01:41:00.627542   69234 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:01.126971   69234 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:01.626940   69234 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:02.127478   69234 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:02.176746   69234 api_server.go:72] duration metric: took 2.050081672s to wait for apiserver process to appear ...
	I0927 01:41:02.176775   69234 api_server.go:88] waiting for apiserver healthz status ...
	I0927 01:41:02.176798   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:41:02.177442   69234 api_server.go:269] stopped: https://192.168.39.158:8443/healthz: Get "https://192.168.39.158:8443/healthz": dial tcp 192.168.39.158:8443: connect: connection refused
	I0927 01:41:02.677488   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:41:04.824718   69234 api_server.go:279] https://192.168.39.158:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0927 01:41:04.824748   69234 api_server.go:103] status: https://192.168.39.158:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0927 01:41:04.824763   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:41:04.850790   69234 api_server.go:279] https://192.168.39.158:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0927 01:41:04.850820   69234 api_server.go:103] status: https://192.168.39.158:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0927 01:41:05.177167   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:41:05.201660   69234 api_server.go:279] https://192.168.39.158:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:41:05.201696   69234 api_server.go:103] status: https://192.168.39.158:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:41:02.432597   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:02.433066   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:41:02.433096   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:41:02.433026   70279 retry.go:31] will retry after 2.598889471s: waiting for machine to come up
	I0927 01:41:05.034614   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:05.035001   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:41:05.035023   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:41:05.034973   70279 retry.go:31] will retry after 3.064943329s: waiting for machine to come up
	I0927 01:41:05.677514   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:41:05.683506   69234 api_server.go:279] https://192.168.39.158:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:41:05.683543   69234 api_server.go:103] status: https://192.168.39.158:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:41:06.177064   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:41:06.181304   69234 api_server.go:279] https://192.168.39.158:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:41:06.181339   69234 api_server.go:103] status: https://192.168.39.158:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:41:06.676872   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:41:06.681269   69234 api_server.go:279] https://192.168.39.158:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:41:06.681297   69234 api_server.go:103] status: https://192.168.39.158:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:41:07.176902   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:41:07.181397   69234 api_server.go:279] https://192.168.39.158:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:41:07.181425   69234 api_server.go:103] status: https://192.168.39.158:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:41:07.677457   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:41:07.682057   69234 api_server.go:279] https://192.168.39.158:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:41:07.682087   69234 api_server.go:103] status: https://192.168.39.158:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:41:08.177696   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:41:08.181752   69234 api_server.go:279] https://192.168.39.158:8443/healthz returned 200:
	ok
	I0927 01:41:08.188257   69234 api_server.go:141] control plane version: v1.31.1
	I0927 01:41:08.188278   69234 api_server.go:131] duration metric: took 6.011495616s to wait for apiserver health ...
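The long healthz transcript above is just a retry loop: the client re-queries the apiserver's /healthz endpoint roughly every half second and keeps going while it returns 500 (poststarthooks such as rbac/bootstrap-roles and apiservice-discovery-controller not yet finished), stopping once it gets 200. A minimal, self-contained sketch of that pattern is below; it is not minikube's api_server.go, and the URL, interval, timeout and TLS handling are illustrative assumptions.

	// healthzwait.go - poll an apiserver /healthz endpoint until it reports 200 OK.
	// Illustrative sketch only; endpoint, interval and timeout are assumptions.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// The apiserver in this setup serves a self-signed cert, so skip verification.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz check passed
				}
				// A 500 here usually means some poststarthooks have not completed yet.
			}
			time.Sleep(500 * time.Millisecond) // retry cadence, mirrors the ~0.5s spacing in the log
		}
		return fmt.Errorf("apiserver at %s did not become healthy within %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.158:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}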
	I0927 01:41:08.188285   69234 cni.go:84] Creating CNI manager for ""
	I0927 01:41:08.188291   69234 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:41:08.190206   69234 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0927 01:41:08.191584   69234 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0927 01:41:08.202370   69234 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
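The 496-byte /etc/cni/net.d/1-k8s.conflist written here is not reproduced in the log, so its exact contents are unknown. As a rough illustration of the kind of bridge CNI chain that typically ends up in that directory, the sketch below writes a bridge+portmap conflist; every field value (cniVersion, subnet, bridge name, file mode) is an assumption for illustration, not taken from this report.

	// writeconflist.go - write a minimal bridge+portmap CNI conflist.
	// Field values are assumptions; the real 1-k8s.conflist content is not shown in the log.
	package main

	import (
		"log"
		"os"
	)

	const conflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    {
	      "type": "portmap",
	      "capabilities": { "portMappings": true }
	    }
	  ]
	}
	`

	func main() {
		// Writing to the working directory here; on a real node this would be /etc/cni/net.d/1-k8s.conflist.
		if err := os.WriteFile("1-k8s.conflist", []byte(conflist), 0o644); err != nil {
			log.Fatal(err)
		}
	}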
	I0927 01:41:08.224843   69234 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 01:41:08.234247   69234 system_pods.go:59] 8 kube-system pods found
	I0927 01:41:08.234275   69234 system_pods.go:61] "coredns-7c65d6cfc9-f2vxv" [3eed941e-e943-490b-a0a8-d543cec18a89] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0927 01:41:08.234284   69234 system_pods.go:61] "etcd-embed-certs-245911" [f88581ff-3747-4fe5-a4a2-6259c3b4554e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0927 01:41:08.234291   69234 system_pods.go:61] "kube-apiserver-embed-certs-245911" [3f1efb25-6e30-4d5f-baba-3e98b6fe531e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0927 01:41:08.234298   69234 system_pods.go:61] "kube-controller-manager-embed-certs-245911" [a624fc8d-fbe3-4b63-8a88-5f8069b21095] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0927 01:41:08.234302   69234 system_pods.go:61] "kube-proxy-pjf8v" [a1b76e67-803a-43fe-bff6-a4b0ddc246a1] Running
	I0927 01:41:08.234309   69234 system_pods.go:61] "kube-scheduler-embed-certs-245911" [0f7c146b-e2b7-4110-b010-f4599d0da410] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0927 01:41:08.234313   69234 system_pods.go:61] "metrics-server-6867b74b74-k8mdf" [6d1e68fb-5187-4bc6-abdb-44f598e351c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 01:41:08.234317   69234 system_pods.go:61] "storage-provisioner" [dc0a7806-bee8-4127-8218-b2e48fa8500b] Running
	I0927 01:41:08.234323   69234 system_pods.go:74] duration metric: took 9.462578ms to wait for pod list to return data ...
	I0927 01:41:08.234333   69234 node_conditions.go:102] verifying NodePressure condition ...
	I0927 01:41:08.238433   69234 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 01:41:08.238455   69234 node_conditions.go:123] node cpu capacity is 2
	I0927 01:41:08.238468   69234 node_conditions.go:105] duration metric: took 4.128775ms to run NodePressure ...
	I0927 01:41:08.238483   69234 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:08.502161   69234 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0927 01:41:08.506267   69234 kubeadm.go:739] kubelet initialised
	I0927 01:41:08.506290   69234 kubeadm.go:740] duration metric: took 4.099692ms waiting for restarted kubelet to initialise ...
	I0927 01:41:08.506299   69234 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:41:08.510964   69234 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-f2vxv" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:08.515262   69234 pod_ready.go:98] node "embed-certs-245911" hosting pod "coredns-7c65d6cfc9-f2vxv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-245911" has status "Ready":"False"
	I0927 01:41:08.515279   69234 pod_ready.go:82] duration metric: took 4.294632ms for pod "coredns-7c65d6cfc9-f2vxv" in "kube-system" namespace to be "Ready" ...
	E0927 01:41:08.515286   69234 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-245911" hosting pod "coredns-7c65d6cfc9-f2vxv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-245911" has status "Ready":"False"
	I0927 01:41:08.515298   69234 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:08.519627   69234 pod_ready.go:98] node "embed-certs-245911" hosting pod "etcd-embed-certs-245911" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-245911" has status "Ready":"False"
	I0927 01:41:08.519641   69234 pod_ready.go:82] duration metric: took 4.313975ms for pod "etcd-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	E0927 01:41:08.519648   69234 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-245911" hosting pod "etcd-embed-certs-245911" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-245911" has status "Ready":"False"
	I0927 01:41:08.519653   69234 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:08.523152   69234 pod_ready.go:98] node "embed-certs-245911" hosting pod "kube-apiserver-embed-certs-245911" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-245911" has status "Ready":"False"
	I0927 01:41:08.523165   69234 pod_ready.go:82] duration metric: took 3.50412ms for pod "kube-apiserver-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	E0927 01:41:08.523177   69234 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-245911" hosting pod "kube-apiserver-embed-certs-245911" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-245911" has status "Ready":"False"
	I0927 01:41:08.523186   69234 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:08.628811   69234 pod_ready.go:98] node "embed-certs-245911" hosting pod "kube-controller-manager-embed-certs-245911" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-245911" has status "Ready":"False"
	I0927 01:41:08.628847   69234 pod_ready.go:82] duration metric: took 105.648464ms for pod "kube-controller-manager-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	E0927 01:41:08.628859   69234 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-245911" hosting pod "kube-controller-manager-embed-certs-245911" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-245911" has status "Ready":"False"
	I0927 01:41:08.628868   69234 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-pjf8v" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:09.027358   69234 pod_ready.go:93] pod "kube-proxy-pjf8v" in "kube-system" namespace has status "Ready":"True"
	I0927 01:41:09.027383   69234 pod_ready.go:82] duration metric: took 398.507928ms for pod "kube-proxy-pjf8v" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:09.027393   69234 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
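The pod_ready waits above reduce to inspecting each pod's Ready condition (with an early skip, as the "(skipping!)" messages show, when the hosting node itself reports Ready=False). A minimal sketch of the underlying condition check using the upstream k8s.io/api types follows; it is the idea behind those checks, not minikube's pod_ready.go.

	// podready.go - report whether a pod's Ready condition is True.
	// Sketch of the idea behind the pod_ready waits above, not minikube's implementation.
	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
	)

	// isPodReady returns true when the pod carries a Ready condition with status True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Toy pod object standing in for one fetched from the API server.
		pod := &corev1.Pod{
			Status: corev1.PodStatus{
				Conditions: []corev1.PodCondition{
					{Type: corev1.PodReady, Status: corev1.ConditionFalse},
				},
			},
		}
		fmt.Println(isPodReady(pod)) // false until the kubelet marks the pod Ready
	}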
	I0927 01:41:08.101834   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:08.102324   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:41:08.102358   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:41:08.102283   70279 retry.go:31] will retry after 4.242138543s: waiting for machine to come up
	I0927 01:41:13.708458   69534 start.go:364] duration metric: took 3m25.271525685s to acquireMachinesLock for "default-k8s-diff-port-368295"
	I0927 01:41:13.708525   69534 start.go:96] Skipping create...Using existing machine configuration
	I0927 01:41:13.708533   69534 fix.go:54] fixHost starting: 
	I0927 01:41:13.708923   69534 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:41:13.708979   69534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:41:13.726306   69534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46399
	I0927 01:41:13.726732   69534 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:41:13.727228   69534 main.go:141] libmachine: Using API Version  1
	I0927 01:41:13.727252   69534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:41:13.727579   69534 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:41:13.727781   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:41:13.727975   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetState
	I0927 01:41:13.729621   69534 fix.go:112] recreateIfNeeded on default-k8s-diff-port-368295: state=Stopped err=<nil>
	I0927 01:41:13.729657   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	W0927 01:41:13.729826   69534 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 01:41:13.731730   69534 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-368295" ...
	I0927 01:41:12.347378   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.347831   69333 main.go:141] libmachine: (old-k8s-version-612261) Found IP for machine: 192.168.72.129
	I0927 01:41:12.347855   69333 main.go:141] libmachine: (old-k8s-version-612261) Reserving static IP address...
	I0927 01:41:12.347872   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has current primary IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.348468   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "old-k8s-version-612261", mac: "52:54:00:f1:a6:2e", ip: "192.168.72.129"} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:12.348494   69333 main.go:141] libmachine: (old-k8s-version-612261) Reserved static IP address: 192.168.72.129
	I0927 01:41:12.348507   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | skip adding static IP to network mk-old-k8s-version-612261 - found existing host DHCP lease matching {name: "old-k8s-version-612261", mac: "52:54:00:f1:a6:2e", ip: "192.168.72.129"}
	I0927 01:41:12.348518   69333 main.go:141] libmachine: (old-k8s-version-612261) Waiting for SSH to be available...
	I0927 01:41:12.348537   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | Getting to WaitForSSH function...
	I0927 01:41:12.350917   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.351287   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:12.351335   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.351464   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | Using SSH client type: external
	I0927 01:41:12.351485   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | Using SSH private key: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/old-k8s-version-612261/id_rsa (-rw-------)
	I0927 01:41:12.351516   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.129 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19711-14935/.minikube/machines/old-k8s-version-612261/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 01:41:12.351525   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | About to run SSH command:
	I0927 01:41:12.351533   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | exit 0
	I0927 01:41:12.471347   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | SSH cmd err, output: <nil>: 
	I0927 01:41:12.471724   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetConfigRaw
	I0927 01:41:12.472352   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetIP
	I0927 01:41:12.474886   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.475299   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:12.475340   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.475628   69333 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/config.json ...
	I0927 01:41:12.475857   69333 machine.go:93] provisionDockerMachine start ...
	I0927 01:41:12.475879   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	I0927 01:41:12.476115   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:12.478594   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.478918   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:12.478945   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.479126   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:41:12.479340   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:12.479536   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:12.479695   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:41:12.479859   69333 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:12.480093   69333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I0927 01:41:12.480116   69333 main.go:141] libmachine: About to run SSH command:
	hostname
	I0927 01:41:12.579536   69333 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0927 01:41:12.579562   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetMachineName
	I0927 01:41:12.579785   69333 buildroot.go:166] provisioning hostname "old-k8s-version-612261"
	I0927 01:41:12.579798   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetMachineName
	I0927 01:41:12.579965   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:12.582679   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.583001   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:12.583027   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.583166   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:41:12.583372   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:12.583562   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:12.583727   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:41:12.583924   69333 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:12.584169   69333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I0927 01:41:12.584187   69333 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-612261 && echo "old-k8s-version-612261" | sudo tee /etc/hostname
	I0927 01:41:12.702223   69333 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-612261
	
	I0927 01:41:12.702252   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:12.705201   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.705564   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:12.705601   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.705817   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:41:12.706012   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:12.706154   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:12.706344   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:41:12.706538   69333 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:12.706720   69333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I0927 01:41:12.706738   69333 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-612261' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-612261/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-612261' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 01:41:12.816316   69333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 01:41:12.816343   69333 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19711-14935/.minikube CaCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19711-14935/.minikube}
	I0927 01:41:12.816376   69333 buildroot.go:174] setting up certificates
	I0927 01:41:12.816386   69333 provision.go:84] configureAuth start
	I0927 01:41:12.816394   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetMachineName
	I0927 01:41:12.816678   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetIP
	I0927 01:41:12.819190   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.819487   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:12.819511   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.819696   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:12.821843   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.822166   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:12.822203   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.822382   69333 provision.go:143] copyHostCerts
	I0927 01:41:12.822453   69333 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem, removing ...
	I0927 01:41:12.822466   69333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem
	I0927 01:41:12.822533   69333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem (1675 bytes)
	I0927 01:41:12.822641   69333 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem, removing ...
	I0927 01:41:12.822650   69333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem
	I0927 01:41:12.822682   69333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem (1078 bytes)
	I0927 01:41:12.822756   69333 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem, removing ...
	I0927 01:41:12.822766   69333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem
	I0927 01:41:12.822792   69333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem (1123 bytes)
	I0927 01:41:12.822859   69333 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-612261 san=[127.0.0.1 192.168.72.129 localhost minikube old-k8s-version-612261]
	I0927 01:41:13.054632   69333 provision.go:177] copyRemoteCerts
	I0927 01:41:13.054706   69333 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 01:41:13.054740   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:13.057895   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.058296   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:13.058329   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.058478   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:41:13.058696   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:13.058907   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:41:13.059062   69333 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/old-k8s-version-612261/id_rsa Username:docker}
	I0927 01:41:13.146378   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0927 01:41:13.176435   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0927 01:41:13.208974   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0927 01:41:13.240179   69333 provision.go:87] duration metric: took 423.77487ms to configureAuth
	I0927 01:41:13.240211   69333 buildroot.go:189] setting minikube options for container-runtime
	I0927 01:41:13.240412   69333 config.go:182] Loaded profile config "old-k8s-version-612261": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0927 01:41:13.240498   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:13.243514   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.243963   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:13.243991   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.244174   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:41:13.244419   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:13.244641   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:13.244838   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:41:13.245039   69333 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:13.245263   69333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I0927 01:41:13.245284   69333 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 01:41:13.476519   69333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 01:41:13.476545   69333 machine.go:96] duration metric: took 1.000674334s to provisionDockerMachine
	I0927 01:41:13.476558   69333 start.go:293] postStartSetup for "old-k8s-version-612261" (driver="kvm2")
	I0927 01:41:13.476574   69333 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 01:41:13.476593   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	I0927 01:41:13.476914   69333 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 01:41:13.476942   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:13.479326   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.479662   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:13.479686   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.479835   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:41:13.480027   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:13.480182   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:41:13.480337   69333 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/old-k8s-version-612261/id_rsa Username:docker}
	I0927 01:41:13.563321   69333 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 01:41:13.567844   69333 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 01:41:13.567867   69333 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/addons for local assets ...
	I0927 01:41:13.567929   69333 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/files for local assets ...
	I0927 01:41:13.568012   69333 filesync.go:149] local asset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> 221382.pem in /etc/ssl/certs
	I0927 01:41:13.568109   69333 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 01:41:13.578453   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /etc/ssl/certs/221382.pem (1708 bytes)
	I0927 01:41:13.603888   69333 start.go:296] duration metric: took 127.316429ms for postStartSetup
	I0927 01:41:13.603924   69333 fix.go:56] duration metric: took 20.803606957s for fixHost
	I0927 01:41:13.603948   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:13.606500   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.606921   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:13.606949   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.607189   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:41:13.607419   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:13.607600   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:13.607726   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:41:13.608048   69333 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:13.608234   69333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I0927 01:41:13.608245   69333 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 01:41:13.708261   69333 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727401273.683707076
	
	I0927 01:41:13.708284   69333 fix.go:216] guest clock: 1727401273.683707076
	I0927 01:41:13.708293   69333 fix.go:229] Guest: 2024-09-27 01:41:13.683707076 +0000 UTC Remote: 2024-09-27 01:41:13.603929237 +0000 UTC m=+226.291347697 (delta=79.777839ms)
	I0927 01:41:13.708348   69333 fix.go:200] guest clock delta is within tolerance: 79.777839ms
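The clock check above runs `date +%s.%N` in the guest, parses the epoch timestamp, and compares it against the host-side timestamp to get the delta (79.777839ms here), which is then checked against a tolerance. A small sketch of that parsing and comparison is below; the 2-second tolerance and the use of the current host clock are assumptions for illustration.

	// clockdelta.go - compare a guest's `date +%s.%N` output against the host clock.
	// Sketch of the delta/tolerance check seen in the log; the 2s tolerance is an assumed value.
	package main

	import (
		"fmt"
		"math"
		"strconv"
		"strings"
		"time"
	)

	// parseEpochNanos turns a string like "1727401273.683707076" into a time.Time.
	func parseEpochNanos(s string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			// Pad/truncate the fractional part to 9 digits so it reads as nanoseconds.
			frac := (parts[1] + "000000000")[:9]
			if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseEpochNanos("1727401273.683707076") // value from the log above
		if err != nil {
			panic(err)
		}
		delta := time.Since(guest)
		fmt.Printf("guest clock delta: %v (within 2s tolerance: %v)\n",
			delta, math.Abs(delta.Seconds()) < 2.0)
	}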
	I0927 01:41:13.708357   69333 start.go:83] releasing machines lock for "old-k8s-version-612261", held for 20.90807118s
	I0927 01:41:13.708392   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	I0927 01:41:13.708665   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetIP
	I0927 01:41:13.711474   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.711873   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:13.711905   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.712035   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	I0927 01:41:13.712569   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	I0927 01:41:13.712748   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	I0927 01:41:13.712832   69333 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 01:41:13.712878   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:13.712949   69333 ssh_runner.go:195] Run: cat /version.json
	I0927 01:41:13.712971   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:13.715681   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.715820   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.716024   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:13.716043   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.716200   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:13.716225   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.716235   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:41:13.716370   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:41:13.716487   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:13.716548   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:13.716622   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:41:13.716728   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:41:13.716779   69333 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/old-k8s-version-612261/id_rsa Username:docker}
	I0927 01:41:13.716859   69333 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/old-k8s-version-612261/id_rsa Username:docker}
	I0927 01:41:13.826638   69333 ssh_runner.go:195] Run: systemctl --version
	I0927 01:41:13.832901   69333 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 01:41:13.986132   69333 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 01:41:13.992644   69333 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 01:41:13.992728   69333 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 01:41:14.008962   69333 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0927 01:41:14.008991   69333 start.go:495] detecting cgroup driver to use...
	I0927 01:41:14.009051   69333 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 01:41:14.025047   69333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 01:41:14.040807   69333 docker.go:217] disabling cri-docker service (if available) ...
	I0927 01:41:14.040857   69333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 01:41:14.055972   69333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 01:41:14.072654   69333 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 01:41:14.210869   69333 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 01:41:14.403536   69333 docker.go:233] disabling docker service ...
	I0927 01:41:14.403596   69333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 01:41:14.421549   69333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 01:41:14.436288   69333 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 01:41:14.569634   69333 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 01:41:14.701517   69333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 01:41:14.716794   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 01:41:14.740622   69333 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0927 01:41:14.740685   69333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:14.756563   69333 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 01:41:14.756626   69333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:14.768952   69333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:14.781314   69333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:14.793578   69333 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 01:41:14.806302   69333 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 01:41:14.822967   69333 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0927 01:41:14.823036   69333 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0927 01:41:14.837673   69333 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 01:41:14.848486   69333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:41:14.988181   69333 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0927 01:41:15.100581   69333 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 01:41:15.100664   69333 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 01:41:15.105816   69333 start.go:563] Will wait 60s for crictl version
	I0927 01:41:15.105883   69333 ssh_runner.go:195] Run: which crictl
	I0927 01:41:15.110375   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 01:41:15.154944   69333 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0927 01:41:15.155039   69333 ssh_runner.go:195] Run: crio --version
	I0927 01:41:15.188172   69333 ssh_runner.go:195] Run: crio --version
	I0927 01:41:15.220410   69333 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0927 01:41:11.033747   69234 pod_ready.go:103] pod "kube-scheduler-embed-certs-245911" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:13.038930   69234 pod_ready.go:103] pod "kube-scheduler-embed-certs-245911" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:15.035610   69234 pod_ready.go:93] pod "kube-scheduler-embed-certs-245911" in "kube-system" namespace has status "Ready":"True"
	I0927 01:41:15.035636   69234 pod_ready.go:82] duration metric: took 6.008237321s for pod "kube-scheduler-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:15.035645   69234 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:15.221508   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetIP
	I0927 01:41:15.224474   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:15.224855   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:15.224884   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:15.225126   69333 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0927 01:41:15.229555   69333 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
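	The bash one-liner above strips any stale host.minikube.internal entry from /etc/hosts and appends a fresh one, so afterwards the file should end with a line like this (illustrative, taken from the echo in the command):

	    192.168.72.1	host.minikube.internal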
	I0927 01:41:15.244862   69333 kubeadm.go:883] updating cluster {Name:old-k8s-version-612261 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-612261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.129 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 01:41:15.245007   69333 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0927 01:41:15.245070   69333 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 01:41:15.298422   69333 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0927 01:41:15.298501   69333 ssh_runner.go:195] Run: which lz4
	I0927 01:41:15.302771   69333 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0927 01:41:15.307360   69333 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0927 01:41:15.307398   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0927 01:41:17.053272   69333 crio.go:462] duration metric: took 1.750548806s to copy over tarball
	I0927 01:41:17.053354   69333 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0927 01:41:13.732810   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .Start
	I0927 01:41:13.732979   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Ensuring networks are active...
	I0927 01:41:13.733749   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Ensuring network default is active
	I0927 01:41:13.734076   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Ensuring network mk-default-k8s-diff-port-368295 is active
	I0927 01:41:13.734425   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Getting domain xml...
	I0927 01:41:13.734997   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Creating domain...
	I0927 01:41:15.073415   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting to get IP...
	I0927 01:41:15.074278   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:15.074774   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:15.074850   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:15.074757   70444 retry.go:31] will retry after 231.356774ms: waiting for machine to come up
	I0927 01:41:15.308474   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:15.309030   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:15.309058   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:15.308989   70444 retry.go:31] will retry after 252.762152ms: waiting for machine to come up
	I0927 01:41:15.563638   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:15.564173   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:15.564212   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:15.564130   70444 retry.go:31] will retry after 341.067908ms: waiting for machine to come up
	I0927 01:41:15.906735   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:15.907138   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:15.907168   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:15.907091   70444 retry.go:31] will retry after 385.816363ms: waiting for machine to come up
	I0927 01:41:16.294523   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:16.295246   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:16.295268   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:16.295192   70444 retry.go:31] will retry after 575.812339ms: waiting for machine to come up
	I0927 01:41:16.873050   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:16.873574   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:16.873601   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:16.873520   70444 retry.go:31] will retry after 661.914855ms: waiting for machine to come up
	I0927 01:41:17.537039   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:17.537516   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:17.537544   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:17.537467   70444 retry.go:31] will retry after 959.195147ms: waiting for machine to come up
	I0927 01:41:17.043983   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:19.543159   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:20.066231   69333 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.012846531s)
	I0927 01:41:20.066257   69333 crio.go:469] duration metric: took 3.012954388s to extract the tarball
	I0927 01:41:20.066265   69333 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0927 01:41:20.112486   69333 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 01:41:20.152620   69333 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0927 01:41:20.152647   69333 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0927 01:41:20.152723   69333 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:41:20.152754   69333 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0927 01:41:20.152789   69333 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 01:41:20.152813   69333 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0927 01:41:20.152816   69333 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0927 01:41:20.152763   69333 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0927 01:41:20.152938   69333 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0927 01:41:20.152940   69333 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0927 01:41:20.154747   69333 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0927 01:41:20.154752   69333 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 01:41:20.154886   69333 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:41:20.154914   69333 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0927 01:41:20.154914   69333 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0927 01:41:20.154925   69333 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0927 01:41:20.154930   69333 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0927 01:41:20.154934   69333 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0927 01:41:20.316172   69333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0927 01:41:20.316352   69333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0927 01:41:20.319986   69333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0927 01:41:20.331224   69333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 01:41:20.342010   69333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0927 01:41:20.355732   69333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0927 01:41:20.355739   69333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0927 01:41:20.446420   69333 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0927 01:41:20.446477   69333 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0927 01:41:20.446529   69333 ssh_runner.go:195] Run: which crictl
	I0927 01:41:20.469134   69333 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0927 01:41:20.469183   69333 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0927 01:41:20.469231   69333 ssh_runner.go:195] Run: which crictl
	I0927 01:41:20.470229   69333 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0927 01:41:20.470264   69333 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0927 01:41:20.470310   69333 ssh_runner.go:195] Run: which crictl
	I0927 01:41:20.477952   69333 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0927 01:41:20.477991   69333 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 01:41:20.478034   69333 ssh_runner.go:195] Run: which crictl
	I0927 01:41:20.519340   69333 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0927 01:41:20.519391   69333 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0927 01:41:20.519454   69333 ssh_runner.go:195] Run: which crictl
	I0927 01:41:20.538237   69333 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0927 01:41:20.538256   69333 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0927 01:41:20.538293   69333 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0927 01:41:20.538298   69333 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0927 01:41:20.538338   69333 ssh_runner.go:195] Run: which crictl
	I0927 01:41:20.538343   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0927 01:41:20.538338   69333 ssh_runner.go:195] Run: which crictl
	I0927 01:41:20.538343   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0927 01:41:20.538389   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0927 01:41:20.538438   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 01:41:20.538489   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0927 01:41:20.656448   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0927 01:41:20.656508   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0927 01:41:20.656542   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0927 01:41:20.656573   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0927 01:41:20.656635   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0927 01:41:20.656704   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0927 01:41:20.656740   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 01:41:20.818479   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0927 01:41:20.818494   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0927 01:41:20.818581   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 01:41:20.878325   69333 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0927 01:41:20.878480   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0927 01:41:20.878494   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0927 01:41:20.878585   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0927 01:41:20.885061   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0927 01:41:20.885168   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0927 01:41:20.898628   69333 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0927 01:41:20.994147   69333 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0927 01:41:20.994175   69333 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0927 01:41:20.994211   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0927 01:41:21.016210   69333 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0927 01:41:21.016289   69333 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0927 01:41:21.035051   69333 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0927 01:41:21.374949   69333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:41:21.520726   69333 cache_images.go:92] duration metric: took 1.368058485s to LoadCachedImages
	W0927 01:41:21.520817   69333 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0927 01:41:21.520833   69333 kubeadm.go:934] updating node { 192.168.72.129 8443 v1.20.0 crio true true} ...
	I0927 01:41:21.520951   69333 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-612261 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.129
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-612261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 01:41:21.521035   69333 ssh_runner.go:195] Run: crio config
	I0927 01:41:21.571651   69333 cni.go:84] Creating CNI manager for ""
	I0927 01:41:21.571677   69333 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:41:21.571688   69333 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 01:41:21.571712   69333 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.129 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-612261 NodeName:old-k8s-version-612261 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.129"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.129 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0927 01:41:21.571882   69333 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.129
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-612261"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.129
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.129"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0927 01:41:21.571958   69333 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0927 01:41:21.582735   69333 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 01:41:21.582802   69333 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0927 01:41:21.593329   69333 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0927 01:41:21.615040   69333 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 01:41:21.636564   69333 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0927 01:41:21.657275   69333 ssh_runner.go:195] Run: grep 192.168.72.129	control-plane.minikube.internal$ /etc/hosts
	I0927 01:41:21.661675   69333 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.129	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 01:41:21.674587   69333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:41:21.814300   69333 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 01:41:21.834133   69333 certs.go:68] Setting up /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261 for IP: 192.168.72.129
	I0927 01:41:21.834163   69333 certs.go:194] generating shared ca certs ...
	I0927 01:41:21.834182   69333 certs.go:226] acquiring lock for ca certs: {Name:mkdfc5b7e93f77f5ae72cc653545624244421aa5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:41:21.834380   69333 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key
	I0927 01:41:21.834437   69333 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key
	I0927 01:41:21.834450   69333 certs.go:256] generating profile certs ...
	I0927 01:41:21.834558   69333 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/client.key
	I0927 01:41:21.834630   69333 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/apiserver.key.a362196e
	I0927 01:41:21.834676   69333 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/proxy-client.key
	I0927 01:41:21.834819   69333 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem (1338 bytes)
	W0927 01:41:21.834859   69333 certs.go:480] ignoring /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138_empty.pem, impossibly tiny 0 bytes
	I0927 01:41:21.834873   69333 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem (1671 bytes)
	I0927 01:41:21.834904   69333 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem (1078 bytes)
	I0927 01:41:21.834937   69333 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem (1123 bytes)
	I0927 01:41:21.834973   69333 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem (1675 bytes)
	I0927 01:41:21.835023   69333 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem (1708 bytes)
	I0927 01:41:21.835864   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 01:41:21.866955   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0927 01:41:21.902991   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 01:41:21.928957   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0927 01:41:21.957505   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0927 01:41:21.984055   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0927 01:41:22.013191   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 01:41:22.041745   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0927 01:41:22.069680   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 01:41:22.104139   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem --> /usr/share/ca-certificates/22138.pem (1338 bytes)
	I0927 01:41:22.130348   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /usr/share/ca-certificates/221382.pem (1708 bytes)
	I0927 01:41:22.157976   69333 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 01:41:22.177818   69333 ssh_runner.go:195] Run: openssl version
	I0927 01:41:22.184389   69333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 01:41:22.196133   69333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:41:22.201047   69333 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:16 /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:41:22.201120   69333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:41:22.207245   69333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 01:41:22.219033   69333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22138.pem && ln -fs /usr/share/ca-certificates/22138.pem /etc/ssl/certs/22138.pem"
	I0927 01:41:22.230331   69333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22138.pem
	I0927 01:41:22.235000   69333 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 00:32 /usr/share/ca-certificates/22138.pem
	I0927 01:41:22.235054   69333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22138.pem
	I0927 01:41:22.240963   69333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22138.pem /etc/ssl/certs/51391683.0"
	I0927 01:41:22.252022   69333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221382.pem && ln -fs /usr/share/ca-certificates/221382.pem /etc/ssl/certs/221382.pem"
	I0927 01:41:22.263197   69333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221382.pem
	I0927 01:41:22.268023   69333 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 00:32 /usr/share/ca-certificates/221382.pem
	I0927 01:41:22.268100   69333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221382.pem
	I0927 01:41:22.274086   69333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221382.pem /etc/ssl/certs/3ec20f2e.0"
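	The openssl x509 -hash / ln -fs pairs above implement OpenSSL's hashed-directory CA lookup: each certificate copied to /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash (b5213941.0, 51391683.0 and 3ec20f2e.0 in this run). A minimal shell equivalent of one such step, mirroring the logged commands:

	    # derive the subject hash, then create the hash-named symlink OpenSSL looks for
	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # e.g. b5213941
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"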
	I0927 01:41:22.285387   69333 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 01:41:22.290487   69333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0927 01:41:22.296953   69333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0927 01:41:22.303095   69333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0927 01:41:22.310001   69333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0927 01:41:22.316346   69333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0927 01:41:22.322559   69333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0927 01:41:22.328931   69333 kubeadm.go:392] StartCluster: {Name:old-k8s-version-612261 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-612261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.129 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 01:41:22.329015   69333 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0927 01:41:22.329081   69333 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 01:41:18.498695   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:18.499234   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:18.499261   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:18.499187   70444 retry.go:31] will retry after 932.004828ms: waiting for machine to come up
	I0927 01:41:19.432487   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:19.432885   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:19.432912   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:19.432844   70444 retry.go:31] will retry after 1.595543978s: waiting for machine to come up
	I0927 01:41:21.030048   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:21.030572   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:21.030598   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:21.030526   70444 retry.go:31] will retry after 1.93010855s: waiting for machine to come up
	I0927 01:41:22.963833   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:22.964303   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:22.964334   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:22.964254   70444 retry.go:31] will retry after 2.81720725s: waiting for machine to come up
	I0927 01:41:21.757497   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:24.043965   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:22.368989   69333 cri.go:89] found id: ""
	I0927 01:41:22.369059   69333 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0927 01:41:22.379818   69333 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0927 01:41:22.379841   69333 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0927 01:41:22.379897   69333 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0927 01:41:22.392278   69333 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0927 01:41:22.393236   69333 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-612261" does not appear in /home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 01:41:22.393856   69333 kubeconfig.go:62] /home/jenkins/minikube-integration/19711-14935/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-612261" cluster setting kubeconfig missing "old-k8s-version-612261" context setting]
	I0927 01:41:22.394733   69333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/kubeconfig: {Name:mke01ed683bdb96463571316956510763878395f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:41:22.404625   69333 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0927 01:41:22.415376   69333 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.129
	I0927 01:41:22.415414   69333 kubeadm.go:1160] stopping kube-system containers ...
	I0927 01:41:22.415427   69333 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0927 01:41:22.415487   69333 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 01:41:22.452749   69333 cri.go:89] found id: ""
	I0927 01:41:22.452829   69333 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0927 01:41:22.469164   69333 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 01:41:22.480018   69333 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 01:41:22.480038   69333 kubeadm.go:157] found existing configuration files:
	
	I0927 01:41:22.480092   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 01:41:22.490501   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 01:41:22.490562   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 01:41:22.500330   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 01:41:22.509612   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 01:41:22.509681   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 01:41:22.520064   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 01:41:22.529864   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 01:41:22.529921   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 01:41:22.540563   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 01:41:22.556739   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 01:41:22.556797   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 01:41:22.572858   69333 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 01:41:22.583366   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:22.709007   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:23.468461   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:23.714890   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:23.865174   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:23.959048   69333 api_server.go:52] waiting for apiserver process to appear ...
	I0927 01:41:23.959140   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:24.460104   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:24.959462   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:25.460143   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:25.959473   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:26.460051   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:26.960121   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:25.784030   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:25.784429   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:25.784456   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:25.784393   70444 retry.go:31] will retry after 2.844872797s: waiting for machine to come up
	I0927 01:41:26.544176   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:29.042297   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:27.459491   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:27.959944   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:28.459636   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:28.959766   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:29.459410   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:29.959439   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:30.460176   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:30.959810   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:31.459492   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:31.959966   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
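	The repeated pgrep runs above are the roughly 500ms poll used while waiting for the kube-apiserver process to appear; a rough shell equivalent of that wait loop (a sketch, not minikube's actual Go implementation) is:

	    # poll for a minikube-started kube-apiserver process, roughly every 500ms, up to ~60s
	    for _ in $(seq 1 120); do
	      sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null && break
	      sleep 0.5
	    done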
	I0927 01:41:28.632445   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:28.632905   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:28.632930   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:28.632866   70444 retry.go:31] will retry after 3.566248996s: waiting for machine to come up
	I0927 01:41:32.200424   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.200804   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Found IP for machine: 192.168.61.83
	I0927 01:41:32.200832   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has current primary IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.200841   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Reserving static IP address...
	I0927 01:41:32.201137   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-368295", mac: "52:54:00:a3:b6:7a", ip: "192.168.61.83"} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:32.201151   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Reserved static IP address: 192.168.61.83
	I0927 01:41:32.201164   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | skip adding static IP to network mk-default-k8s-diff-port-368295 - found existing host DHCP lease matching {name: "default-k8s-diff-port-368295", mac: "52:54:00:a3:b6:7a", ip: "192.168.61.83"}
	I0927 01:41:32.201177   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | Getting to WaitForSSH function...
	I0927 01:41:32.201185   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for SSH to be available...
	I0927 01:41:32.203258   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.203542   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:32.203571   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.203674   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | Using SSH client type: external
	I0927 01:41:32.203704   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | Using SSH private key: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/default-k8s-diff-port-368295/id_rsa (-rw-------)
	I0927 01:41:32.203743   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.83 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19711-14935/.minikube/machines/default-k8s-diff-port-368295/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 01:41:32.203763   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | About to run SSH command:
	I0927 01:41:32.203783   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | exit 0
	I0927 01:41:32.327131   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | SSH cmd err, output: <nil>: 
	I0927 01:41:32.327499   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetConfigRaw
	I0927 01:41:32.328140   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetIP
	I0927 01:41:32.330387   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.330769   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:32.330801   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.331054   69534 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/config.json ...
	I0927 01:41:32.331257   69534 machine.go:93] provisionDockerMachine start ...
	I0927 01:41:32.331279   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:41:32.331505   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:41:32.333514   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.333799   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:32.333825   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.333940   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:41:32.334101   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:32.334267   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:32.334359   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:41:32.334509   69534 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:32.334700   69534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.83 22 <nil> <nil>}
	I0927 01:41:32.334709   69534 main.go:141] libmachine: About to run SSH command:
	hostname
	I0927 01:41:32.439884   69534 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0927 01:41:32.439921   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetMachineName
	I0927 01:41:32.440126   69534 buildroot.go:166] provisioning hostname "default-k8s-diff-port-368295"
	I0927 01:41:32.440149   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetMachineName
	I0927 01:41:32.440346   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:41:32.443385   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.443707   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:32.443742   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.443917   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:41:32.444093   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:32.444266   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:32.444427   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:41:32.444606   69534 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:32.444793   69534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.83 22 <nil> <nil>}
	I0927 01:41:32.444809   69534 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-368295 && echo "default-k8s-diff-port-368295" | sudo tee /etc/hostname
	I0927 01:41:32.570447   69534 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-368295
	
	I0927 01:41:32.570479   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:41:32.573194   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.573472   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:32.573512   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.573699   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:41:32.573942   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:32.574097   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:32.574261   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:41:32.574430   69534 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:32.574623   69534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.83 22 <nil> <nil>}
	I0927 01:41:32.574647   69534 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-368295' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-368295/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-368295' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 01:41:32.693082   69534 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 01:41:32.693107   69534 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19711-14935/.minikube CaCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19711-14935/.minikube}
	I0927 01:41:32.693140   69534 buildroot.go:174] setting up certificates
	I0927 01:41:32.693149   69534 provision.go:84] configureAuth start
	I0927 01:41:32.693160   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetMachineName
	I0927 01:41:32.693407   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetIP
	I0927 01:41:32.696156   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.696498   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:32.696522   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.696693   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:41:32.698894   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.699229   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:32.699257   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.699399   69534 provision.go:143] copyHostCerts
	I0927 01:41:32.699451   69534 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem, removing ...
	I0927 01:41:32.699464   69534 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem
	I0927 01:41:32.699530   69534 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem (1078 bytes)
	I0927 01:41:32.699639   69534 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem, removing ...
	I0927 01:41:32.699653   69534 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem
	I0927 01:41:32.699681   69534 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem (1123 bytes)
	I0927 01:41:32.699751   69534 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem, removing ...
	I0927 01:41:32.699761   69534 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem
	I0927 01:41:32.699785   69534 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem (1675 bytes)
	I0927 01:41:32.699848   69534 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-368295 san=[127.0.0.1 192.168.61.83 default-k8s-diff-port-368295 localhost minikube]
	I0927 01:41:32.887727   69534 provision.go:177] copyRemoteCerts
	I0927 01:41:32.887792   69534 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 01:41:32.887825   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:41:32.890435   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.890768   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:32.890797   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.890956   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:41:32.891128   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:32.891252   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:41:32.891373   69534 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/default-k8s-diff-port-368295/id_rsa Username:docker}
	I0927 01:41:32.973705   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0927 01:41:32.998434   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0927 01:41:33.023552   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0927 01:41:33.048884   69534 provision.go:87] duration metric: took 355.724209ms to configureAuth
	I0927 01:41:33.048910   69534 buildroot.go:189] setting minikube options for container-runtime
	I0927 01:41:33.049080   69534 config.go:182] Loaded profile config "default-k8s-diff-port-368295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 01:41:33.049149   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:41:33.051738   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.052080   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:33.052133   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.052364   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:41:33.052578   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:33.052726   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:33.052844   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:41:33.053031   69534 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:33.053265   69534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.83 22 <nil> <nil>}
	I0927 01:41:33.053283   69534 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 01:41:33.292126   69534 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 01:41:33.292148   69534 machine.go:96] duration metric: took 960.878234ms to provisionDockerMachine
	I0927 01:41:33.292159   69534 start.go:293] postStartSetup for "default-k8s-diff-port-368295" (driver="kvm2")
	I0927 01:41:33.292171   69534 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 01:41:33.292188   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:41:33.292511   69534 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 01:41:33.292539   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:41:33.295356   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.295724   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:33.295759   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.295936   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:41:33.296100   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:33.296314   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:41:33.296498   69534 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/default-k8s-diff-port-368295/id_rsa Username:docker}
	I0927 01:41:33.528391   68676 start.go:364] duration metric: took 56.042651871s to acquireMachinesLock for "no-preload-521072"
	I0927 01:41:33.528435   68676 start.go:96] Skipping create...Using existing machine configuration
	I0927 01:41:33.528445   68676 fix.go:54] fixHost starting: 
	I0927 01:41:33.528858   68676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:41:33.528890   68676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:41:33.547391   68676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38947
	I0927 01:41:33.547852   68676 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:41:33.548343   68676 main.go:141] libmachine: Using API Version  1
	I0927 01:41:33.548371   68676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:41:33.548713   68676 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:41:33.548907   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	I0927 01:41:33.549064   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetState
	I0927 01:41:33.550898   68676 fix.go:112] recreateIfNeeded on no-preload-521072: state=Stopped err=<nil>
	I0927 01:41:33.550923   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	W0927 01:41:33.551084   68676 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 01:41:33.553090   68676 out.go:177] * Restarting existing kvm2 VM for "no-preload-521072" ...
	I0927 01:41:33.554429   68676 main.go:141] libmachine: (no-preload-521072) Calling .Start
	I0927 01:41:33.554613   68676 main.go:141] libmachine: (no-preload-521072) Ensuring networks are active...
	I0927 01:41:33.555401   68676 main.go:141] libmachine: (no-preload-521072) Ensuring network default is active
	I0927 01:41:33.555858   68676 main.go:141] libmachine: (no-preload-521072) Ensuring network mk-no-preload-521072 is active
	I0927 01:41:33.556350   68676 main.go:141] libmachine: (no-preload-521072) Getting domain xml...
	I0927 01:41:33.557057   68676 main.go:141] libmachine: (no-preload-521072) Creating domain...
	I0927 01:41:34.830052   68676 main.go:141] libmachine: (no-preload-521072) Waiting to get IP...
	I0927 01:41:34.830807   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:34.831255   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:34.831340   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:34.831244   70637 retry.go:31] will retry after 267.615794ms: waiting for machine to come up
	I0927 01:41:33.378613   69534 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 01:41:33.383491   69534 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 01:41:33.383517   69534 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/addons for local assets ...
	I0927 01:41:33.383590   69534 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/files for local assets ...
	I0927 01:41:33.383695   69534 filesync.go:149] local asset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> 221382.pem in /etc/ssl/certs
	I0927 01:41:33.383810   69534 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 01:41:33.395134   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /etc/ssl/certs/221382.pem (1708 bytes)
	I0927 01:41:33.420441   69534 start.go:296] duration metric: took 128.270045ms for postStartSetup
	I0927 01:41:33.420481   69534 fix.go:56] duration metric: took 19.711948387s for fixHost
	I0927 01:41:33.420505   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:41:33.422860   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.423170   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:33.423198   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.423333   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:41:33.423517   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:33.423676   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:33.423820   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:41:33.423987   69534 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:33.424139   69534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.83 22 <nil> <nil>}
	I0927 01:41:33.424153   69534 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 01:41:33.528250   69534 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727401293.484458762
	
	I0927 01:41:33.528271   69534 fix.go:216] guest clock: 1727401293.484458762
	I0927 01:41:33.528278   69534 fix.go:229] Guest: 2024-09-27 01:41:33.484458762 +0000 UTC Remote: 2024-09-27 01:41:33.420486926 +0000 UTC m=+225.118319167 (delta=63.971836ms)
	I0927 01:41:33.528297   69534 fix.go:200] guest clock delta is within tolerance: 63.971836ms
	I0927 01:41:33.528303   69534 start.go:83] releasing machines lock for "default-k8s-diff-port-368295", held for 19.819799777s
	I0927 01:41:33.528328   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:41:33.528623   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetIP
	I0927 01:41:33.531282   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.531692   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:33.531724   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.531914   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:41:33.532476   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:41:33.532651   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:41:33.532742   69534 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 01:41:33.532784   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:41:33.532868   69534 ssh_runner.go:195] Run: cat /version.json
	I0927 01:41:33.532890   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:41:33.535432   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.535710   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.535820   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:33.535843   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.536030   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:41:33.536128   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:33.536153   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.536195   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:33.536351   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:41:33.536367   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:41:33.536513   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:33.536508   69534 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/default-k8s-diff-port-368295/id_rsa Username:docker}
	I0927 01:41:33.536634   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:41:33.536815   69534 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/default-k8s-diff-port-368295/id_rsa Username:docker}
	I0927 01:41:33.644679   69534 ssh_runner.go:195] Run: systemctl --version
	I0927 01:41:33.652386   69534 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 01:41:33.803821   69534 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 01:41:33.810620   69534 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 01:41:33.810678   69534 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 01:41:33.826938   69534 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0927 01:41:33.826963   69534 start.go:495] detecting cgroup driver to use...
	I0927 01:41:33.827028   69534 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 01:41:33.844572   69534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 01:41:33.859851   69534 docker.go:217] disabling cri-docker service (if available) ...
	I0927 01:41:33.859916   69534 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 01:41:33.874262   69534 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 01:41:33.888460   69534 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 01:41:34.011008   69534 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 01:41:34.161761   69534 docker.go:233] disabling docker service ...
	I0927 01:41:34.161855   69534 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 01:41:34.180621   69534 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 01:41:34.198472   69534 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 01:41:34.340892   69534 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 01:41:34.483708   69534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 01:41:34.498745   69534 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 01:41:34.518957   69534 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0927 01:41:34.519026   69534 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:34.530123   69534 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 01:41:34.530172   69534 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:34.545035   69534 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:34.555944   69534 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:34.566852   69534 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 01:41:34.577676   69534 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:34.589078   69534 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:34.608131   69534 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:34.619482   69534 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 01:41:34.629119   69534 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0927 01:41:34.629180   69534 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0927 01:41:34.643997   69534 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 01:41:34.656396   69534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:41:34.791856   69534 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0927 01:41:34.884774   69534 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 01:41:34.884831   69534 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 01:41:34.889590   69534 start.go:563] Will wait 60s for crictl version
	I0927 01:41:34.889633   69534 ssh_runner.go:195] Run: which crictl
	I0927 01:41:34.893330   69534 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 01:41:34.930031   69534 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0927 01:41:34.930141   69534 ssh_runner.go:195] Run: crio --version
	I0927 01:41:34.960912   69534 ssh_runner.go:195] Run: crio --version
	I0927 01:41:34.996060   69534 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0927 01:41:31.542525   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:33.546389   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:32.459727   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:32.959527   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:33.459351   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:33.959903   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:34.459444   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:34.959423   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:35.459435   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:35.959447   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:36.460148   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:36.959874   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:34.997457   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetIP
	I0927 01:41:35.000691   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:35.001081   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:35.001127   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:35.001322   69534 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0927 01:41:35.006115   69534 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 01:41:35.019817   69534 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-368295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-368295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.83 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 01:41:35.019983   69534 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 01:41:35.020045   69534 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 01:41:35.062533   69534 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0927 01:41:35.062595   69534 ssh_runner.go:195] Run: which lz4
	I0927 01:41:35.066897   69534 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0927 01:41:35.071178   69534 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0927 01:41:35.071216   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0927 01:41:36.563774   69534 crio.go:462] duration metric: took 1.496913722s to copy over tarball
	I0927 01:41:36.563866   69534 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0927 01:41:35.100818   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:35.101327   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:35.101354   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:35.101290   70637 retry.go:31] will retry after 244.193758ms: waiting for machine to come up
	I0927 01:41:35.347021   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:35.347674   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:35.347714   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:35.347650   70637 retry.go:31] will retry after 361.672884ms: waiting for machine to come up
	I0927 01:41:35.711206   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:35.711755   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:35.711788   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:35.711730   70637 retry.go:31] will retry after 406.084841ms: waiting for machine to come up
	I0927 01:41:36.119494   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:36.120026   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:36.120067   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:36.119978   70637 retry.go:31] will retry after 497.966133ms: waiting for machine to come up
	I0927 01:41:36.619859   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:36.620400   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:36.620428   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:36.620362   70637 retry.go:31] will retry after 765.975603ms: waiting for machine to come up
	I0927 01:41:37.387821   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:37.388502   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:37.388537   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:37.388453   70637 retry.go:31] will retry after 828.567445ms: waiting for machine to come up
	I0927 01:41:38.218462   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:38.218940   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:38.218974   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:38.218803   70637 retry.go:31] will retry after 1.269155563s: waiting for machine to come up
	I0927 01:41:39.489076   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:39.489557   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:39.489583   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:39.489514   70637 retry.go:31] will retry after 1.666481574s: waiting for machine to come up
	I0927 01:41:35.554859   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:38.043285   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:40.542499   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:37.459766   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:37.959594   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:38.459971   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:38.960093   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:39.459983   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:39.959812   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:40.460220   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:40.959253   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:41.459829   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:41.959864   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:38.667451   69534 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.10354947s)
	I0927 01:41:38.667477   69534 crio.go:469] duration metric: took 2.103669113s to extract the tarball
	I0927 01:41:38.667487   69534 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0927 01:41:38.704217   69534 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 01:41:38.747162   69534 crio.go:514] all images are preloaded for cri-o runtime.
	I0927 01:41:38.747187   69534 cache_images.go:84] Images are preloaded, skipping loading
	I0927 01:41:38.747197   69534 kubeadm.go:934] updating node { 192.168.61.83 8444 v1.31.1 crio true true} ...
	I0927 01:41:38.747323   69534 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-368295 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.83
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-368295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 01:41:38.747406   69534 ssh_runner.go:195] Run: crio config
	I0927 01:41:38.796481   69534 cni.go:84] Creating CNI manager for ""
	I0927 01:41:38.796510   69534 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:41:38.796522   69534 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 01:41:38.796549   69534 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.83 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-368295 NodeName:default-k8s-diff-port-368295 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.83"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.83 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0927 01:41:38.796726   69534 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.83
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-368295"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.83
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.83"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0927 01:41:38.796806   69534 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 01:41:38.807445   69534 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 01:41:38.807513   69534 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0927 01:41:38.817368   69534 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0927 01:41:38.834181   69534 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 01:41:38.851650   69534 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0927 01:41:38.869822   69534 ssh_runner.go:195] Run: grep 192.168.61.83	control-plane.minikube.internal$ /etc/hosts
	I0927 01:41:38.873868   69534 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.83	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 01:41:38.886422   69534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:41:39.022075   69534 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 01:41:39.038948   69534 certs.go:68] Setting up /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295 for IP: 192.168.61.83
	I0927 01:41:39.038982   69534 certs.go:194] generating shared ca certs ...
	I0927 01:41:39.039004   69534 certs.go:226] acquiring lock for ca certs: {Name:mkdfc5b7e93f77f5ae72cc653545624244421aa5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:41:39.039174   69534 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key
	I0927 01:41:39.039241   69534 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key
	I0927 01:41:39.039253   69534 certs.go:256] generating profile certs ...
	I0927 01:41:39.039402   69534 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/client.key
	I0927 01:41:39.039490   69534 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/apiserver.key.2edc0267
	I0927 01:41:39.039549   69534 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/proxy-client.key
	I0927 01:41:39.039701   69534 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem (1338 bytes)
	W0927 01:41:39.039773   69534 certs.go:480] ignoring /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138_empty.pem, impossibly tiny 0 bytes
	I0927 01:41:39.039789   69534 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem (1671 bytes)
	I0927 01:41:39.039825   69534 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem (1078 bytes)
	I0927 01:41:39.039860   69534 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem (1123 bytes)
	I0927 01:41:39.039889   69534 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem (1675 bytes)
	I0927 01:41:39.039950   69534 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem (1708 bytes)
	I0927 01:41:39.040814   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 01:41:39.080130   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0927 01:41:39.133365   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 01:41:39.169238   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0927 01:41:39.196619   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0927 01:41:39.227667   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0927 01:41:39.255240   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 01:41:39.280602   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0927 01:41:39.305695   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 01:41:39.329559   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem --> /usr/share/ca-certificates/22138.pem (1338 bytes)
	I0927 01:41:39.358555   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /usr/share/ca-certificates/221382.pem (1708 bytes)
	I0927 01:41:39.387030   69534 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 01:41:39.404111   69534 ssh_runner.go:195] Run: openssl version
	I0927 01:41:39.409879   69534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 01:41:39.420542   69534 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:41:39.425094   69534 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:16 /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:41:39.425151   69534 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:41:39.431225   69534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 01:41:39.442237   69534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22138.pem && ln -fs /usr/share/ca-certificates/22138.pem /etc/ssl/certs/22138.pem"
	I0927 01:41:39.453229   69534 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22138.pem
	I0927 01:41:39.458040   69534 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 00:32 /usr/share/ca-certificates/22138.pem
	I0927 01:41:39.458110   69534 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22138.pem
	I0927 01:41:39.464177   69534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22138.pem /etc/ssl/certs/51391683.0"
	I0927 01:41:39.475582   69534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221382.pem && ln -fs /usr/share/ca-certificates/221382.pem /etc/ssl/certs/221382.pem"
	I0927 01:41:39.486911   69534 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221382.pem
	I0927 01:41:39.491843   69534 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 00:32 /usr/share/ca-certificates/221382.pem
	I0927 01:41:39.491898   69534 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221382.pem
	I0927 01:41:39.497653   69534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221382.pem /etc/ssl/certs/3ec20f2e.0"
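	The three ln -fs steps above register each CA bundle under /etc/ssl/certs keyed by its OpenSSL subject hash (e.g. b5213941.0). A rough sketch of the same idea, shelling out to openssl much as ssh_runner does; the local paths and an openssl binary on PATH are assumptions.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash mirrors "openssl x509 -hash -noout -in <pem>" followed by
// "ln -fs <pem> /etc/ssl/certs/<hash>.0".
func linkBySubjectHash(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // -f semantics: replace any existing link
	return os.Symlink(pemPath, link)
}

func main() {
	// Illustrative path only.
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```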
	I0927 01:41:39.508039   69534 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 01:41:39.512597   69534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0927 01:41:39.518557   69534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0927 01:41:39.524475   69534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0927 01:41:39.530616   69534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0927 01:41:39.536820   69534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0927 01:41:39.543487   69534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
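	Each "openssl x509 -checkend 86400" call above asks whether the certificate will still be valid 24 hours from now. A hedged equivalent using crypto/x509; the file path is just one of the certs checked in the log.

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// matching the spirit of "openssl x509 -checkend <seconds>".
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
```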
	I0927 01:41:39.549791   69534 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-368295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-368295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.83 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 01:41:39.549880   69534 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0927 01:41:39.549945   69534 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 01:41:39.594178   69534 cri.go:89] found id: ""
	I0927 01:41:39.594256   69534 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0927 01:41:39.605173   69534 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0927 01:41:39.605195   69534 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0927 01:41:39.605261   69534 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0927 01:41:39.615543   69534 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0927 01:41:39.616639   69534 kubeconfig.go:125] found "default-k8s-diff-port-368295" server: "https://192.168.61.83:8444"
	I0927 01:41:39.618793   69534 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0927 01:41:39.628422   69534 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.83
	I0927 01:41:39.628454   69534 kubeadm.go:1160] stopping kube-system containers ...
	I0927 01:41:39.628465   69534 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0927 01:41:39.628566   69534 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 01:41:39.673513   69534 cri.go:89] found id: ""
	I0927 01:41:39.673592   69534 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0927 01:41:39.690296   69534 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 01:41:39.699800   69534 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 01:41:39.699821   69534 kubeadm.go:157] found existing configuration files:
	
	I0927 01:41:39.699876   69534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0927 01:41:39.709235   69534 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 01:41:39.709294   69534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 01:41:39.719012   69534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0927 01:41:39.728197   69534 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 01:41:39.728262   69534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 01:41:39.737520   69534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0927 01:41:39.746592   69534 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 01:41:39.746653   69534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 01:41:39.756251   69534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0927 01:41:39.765026   69534 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 01:41:39.765090   69534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 01:41:39.774937   69534 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 01:41:39.784588   69534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:39.893259   69534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:40.625162   69534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:40.954926   69534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:41.025693   69534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
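	Because existing configuration was found, the restart path replays individual "kubeadm init phase" steps against the staged /var/tmp/minikube/kubeadm.yaml instead of a full "kubeadm init". A sketch of that sequence, run with the versioned binaries on PATH as ssh_runner does; the wrapper itself is illustrative.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// runKubeadmPhases replays the individual "kubeadm init phase" steps seen in
// the log against a pre-staged config file.
func runKubeadmPhases(binDir, cfg string) error {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", cfg)
		cmd := exec.Command(binDir+"/kubeadm", args...)
		cmd.Env = append(os.Environ(), "PATH="+binDir+":"+os.Getenv("PATH"))
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			return fmt.Errorf("kubeadm %v: %w", p, err)
		}
	}
	return nil
}

func main() {
	if err := runKubeadmPhases("/var/lib/minikube/binaries/v1.31.1", "/var/tmp/minikube/kubeadm.yaml"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```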
	I0927 01:41:41.101915   69534 api_server.go:52] waiting for apiserver process to appear ...
	I0927 01:41:41.102006   69534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:41.602856   69534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:42.102942   69534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:42.602371   69534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:42.620056   69534 api_server.go:72] duration metric: took 1.518136259s to wait for apiserver process to appear ...
	I0927 01:41:42.620085   69534 api_server.go:88] waiting for apiserver healthz status ...
	I0927 01:41:42.620107   69534 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8444/healthz ...
	I0927 01:41:41.157254   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:41.157789   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:41.157817   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:41.157738   70637 retry.go:31] will retry after 1.495421187s: waiting for machine to come up
	I0927 01:41:42.655326   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:42.655826   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:42.655853   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:42.655771   70637 retry.go:31] will retry after 2.80191937s: waiting for machine to come up
	I0927 01:41:42.543732   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:45.043009   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:45.040496   69534 api_server.go:279] https://192.168.61.83:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0927 01:41:45.040525   69534 api_server.go:103] status: https://192.168.61.83:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0927 01:41:45.040542   69534 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8444/healthz ...
	I0927 01:41:45.079569   69534 api_server.go:279] https://192.168.61.83:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0927 01:41:45.079602   69534 api_server.go:103] status: https://192.168.61.83:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0927 01:41:45.120702   69534 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8444/healthz ...
	I0927 01:41:45.126461   69534 api_server.go:279] https://192.168.61.83:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0927 01:41:45.126488   69534 api_server.go:103] status: https://192.168.61.83:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0927 01:41:45.621130   69534 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8444/healthz ...
	I0927 01:41:45.629533   69534 api_server.go:279] https://192.168.61.83:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:41:45.629569   69534 api_server.go:103] status: https://192.168.61.83:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:41:46.121189   69534 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8444/healthz ...
	I0927 01:41:46.130806   69534 api_server.go:279] https://192.168.61.83:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:41:46.130842   69534 api_server.go:103] status: https://192.168.61.83:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:41:46.620334   69534 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8444/healthz ...
	I0927 01:41:46.625456   69534 api_server.go:279] https://192.168.61.83:8444/healthz returned 200:
	ok
	I0927 01:41:46.636549   69534 api_server.go:141] control plane version: v1.31.1
	I0927 01:41:46.636581   69534 api_server.go:131] duration metric: took 4.016488114s to wait for apiserver health ...
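	The loop above polls /healthz until the apiserver answers 200, tolerating the transient 403 (anonymous user before RBAC bootstrap) and 500 (post-start hooks not yet finished) responses. A minimal sketch of such a poll; the insecure TLS skip and the 4-minute cap are assumptions for illustration.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200 OK
// or the deadline passes; 403 and 500 are treated as "not ready yet".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serving cert is not in the host trust store here.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.83:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```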
	I0927 01:41:46.636591   69534 cni.go:84] Creating CNI manager for ""
	I0927 01:41:46.636599   69534 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:41:46.638016   69534 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0927 01:41:42.459806   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:42.960200   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:43.459511   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:43.959467   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:44.459352   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:44.960147   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:45.459637   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:45.959535   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:46.459585   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:46.959579   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:46.639222   69534 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0927 01:41:46.651680   69534 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0927 01:41:46.671366   69534 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 01:41:46.684702   69534 system_pods.go:59] 8 kube-system pods found
	I0927 01:41:46.684740   69534 system_pods.go:61] "coredns-7c65d6cfc9-xtgdx" [6a5f97bd-0fbb-4220-a763-bb8ca6fab439] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0927 01:41:46.684752   69534 system_pods.go:61] "etcd-default-k8s-diff-port-368295" [2dbd4866-89f2-4a0c-ab8a-671ff0237bf3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0927 01:41:46.684761   69534 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-368295" [62865280-e996-45a9-a872-766e09d5b91c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0927 01:41:46.684774   69534 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-368295" [b0d06bec-2f5a-46e4-9d2d-b2ea7cdc7968] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0927 01:41:46.684781   69534 system_pods.go:61] "kube-proxy-xm2p8" [449495d5-a476-4abf-b6be-301b9ead92e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0927 01:41:46.684793   69534 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-368295" [71dadb93-c535-4ce3-8dd7-ffd4496bf0e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0927 01:41:46.684801   69534 system_pods.go:61] "metrics-server-6867b74b74-n9nsg" [fefb6977-44af-41f8-8a82-1dcd76374ac0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 01:41:46.684811   69534 system_pods.go:61] "storage-provisioner" [78bd924c-1d70-4eb6-9e2c-0e21ebc523dc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0927 01:41:46.684818   69534 system_pods.go:74] duration metric: took 13.431978ms to wait for pod list to return data ...
	I0927 01:41:46.684830   69534 node_conditions.go:102] verifying NodePressure condition ...
	I0927 01:41:46.690309   69534 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 01:41:46.690343   69534 node_conditions.go:123] node cpu capacity is 2
	I0927 01:41:46.690358   69534 node_conditions.go:105] duration metric: took 5.522911ms to run NodePressure ...
	I0927 01:41:46.690379   69534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:46.964511   69534 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0927 01:41:46.971731   69534 kubeadm.go:739] kubelet initialised
	I0927 01:41:46.971751   69534 kubeadm.go:740] duration metric: took 7.215476ms waiting for restarted kubelet to initialise ...
	I0927 01:41:46.971760   69534 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:41:46.978192   69534 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-xtgdx" in "kube-system" namespace to be "Ready" ...
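	pod_ready.go waits for each system-critical pod to report the Ready condition, which is what the later "Ready":"True" lines record. A rough client-go equivalent; the kubeconfig path, pod name, and poll interval are placeholders taken from or inspired by the log.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady checks for the PodReady condition with status True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-7c65d6cfc9-xtgdx", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for pod to become Ready")
			return
		case <-time.After(2 * time.Second):
		}
	}
}
```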
	I0927 01:41:45.459706   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:45.460242   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:45.460265   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:45.460161   70637 retry.go:31] will retry after 3.051133432s: waiting for machine to come up
	I0927 01:41:48.512758   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:48.513180   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:48.513208   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:48.513118   70637 retry.go:31] will retry after 3.478053984s: waiting for machine to come up
	I0927 01:41:47.544064   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:50.042360   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:47.459645   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:47.959756   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:48.460088   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:48.959526   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:49.459321   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:49.960102   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:50.460203   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:50.960225   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:51.460182   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:51.959343   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:48.985840   69534 pod_ready.go:103] pod "coredns-7c65d6cfc9-xtgdx" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:51.506449   69534 pod_ready.go:103] pod "coredns-7c65d6cfc9-xtgdx" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:52.484646   69534 pod_ready.go:93] pod "coredns-7c65d6cfc9-xtgdx" in "kube-system" namespace has status "Ready":"True"
	I0927 01:41:52.484672   69534 pod_ready.go:82] duration metric: took 5.506454681s for pod "coredns-7c65d6cfc9-xtgdx" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:52.484685   69534 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:51.994746   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:51.995201   68676 main.go:141] libmachine: (no-preload-521072) Found IP for machine: 192.168.50.246
	I0927 01:41:51.995219   68676 main.go:141] libmachine: (no-preload-521072) Reserving static IP address...
	I0927 01:41:51.995230   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has current primary IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:51.995651   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "no-preload-521072", mac: "52:54:00:85:27:74", ip: "192.168.50.246"} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:51.995677   68676 main.go:141] libmachine: (no-preload-521072) Reserved static IP address: 192.168.50.246
	I0927 01:41:51.995695   68676 main.go:141] libmachine: (no-preload-521072) DBG | skip adding static IP to network mk-no-preload-521072 - found existing host DHCP lease matching {name: "no-preload-521072", mac: "52:54:00:85:27:74", ip: "192.168.50.246"}
	I0927 01:41:51.995713   68676 main.go:141] libmachine: (no-preload-521072) DBG | Getting to WaitForSSH function...
	I0927 01:41:51.995727   68676 main.go:141] libmachine: (no-preload-521072) Waiting for SSH to be available...
	I0927 01:41:51.998245   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:51.998590   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:51.998616   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:51.998748   68676 main.go:141] libmachine: (no-preload-521072) DBG | Using SSH client type: external
	I0927 01:41:51.998810   68676 main.go:141] libmachine: (no-preload-521072) DBG | Using SSH private key: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/no-preload-521072/id_rsa (-rw-------)
	I0927 01:41:51.998850   68676 main.go:141] libmachine: (no-preload-521072) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.246 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19711-14935/.minikube/machines/no-preload-521072/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 01:41:51.998866   68676 main.go:141] libmachine: (no-preload-521072) DBG | About to run SSH command:
	I0927 01:41:51.998877   68676 main.go:141] libmachine: (no-preload-521072) DBG | exit 0
	I0927 01:41:52.131754   68676 main.go:141] libmachine: (no-preload-521072) DBG | SSH cmd err, output: <nil>: 
	I0927 01:41:52.132117   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetConfigRaw
	I0927 01:41:52.132724   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetIP
	I0927 01:41:52.135236   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.135588   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:52.135615   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.135866   68676 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/no-preload-521072/config.json ...
	I0927 01:41:52.136059   68676 machine.go:93] provisionDockerMachine start ...
	I0927 01:41:52.136078   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	I0927 01:41:52.136300   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:41:52.138644   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.139009   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:52.139035   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.139215   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:41:52.139406   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:52.139602   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:52.139760   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:41:52.139931   68676 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:52.140139   68676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.246 22 <nil> <nil>}
	I0927 01:41:52.140151   68676 main.go:141] libmachine: About to run SSH command:
	hostname
	I0927 01:41:52.255655   68676 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0927 01:41:52.255690   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetMachineName
	I0927 01:41:52.255952   68676 buildroot.go:166] provisioning hostname "no-preload-521072"
	I0927 01:41:52.255968   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetMachineName
	I0927 01:41:52.256122   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:41:52.258599   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.258963   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:52.258994   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.259108   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:41:52.259322   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:52.259494   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:52.259676   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:41:52.259835   68676 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:52.260008   68676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.246 22 <nil> <nil>}
	I0927 01:41:52.260023   68676 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-521072 && echo "no-preload-521072" | sudo tee /etc/hostname
	I0927 01:41:52.405255   68676 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-521072
	
	I0927 01:41:52.405314   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:41:52.408593   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.408927   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:52.408973   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.409346   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:41:52.409591   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:52.409786   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:52.409940   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:41:52.410094   68676 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:52.410331   68676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.246 22 <nil> <nil>}
	I0927 01:41:52.410356   68676 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-521072' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-521072/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-521072' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 01:41:52.538244   68676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 01:41:52.538276   68676 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19711-14935/.minikube CaCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19711-14935/.minikube}
	I0927 01:41:52.538321   68676 buildroot.go:174] setting up certificates
	I0927 01:41:52.538335   68676 provision.go:84] configureAuth start
	I0927 01:41:52.538350   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetMachineName
	I0927 01:41:52.538644   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetIP
	I0927 01:41:52.541913   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.542334   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:52.542372   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.542540   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:41:52.544773   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.545127   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:52.545163   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.545357   68676 provision.go:143] copyHostCerts
	I0927 01:41:52.545415   68676 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem, removing ...
	I0927 01:41:52.545427   68676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem
	I0927 01:41:52.545496   68676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem (1078 bytes)
	I0927 01:41:52.545614   68676 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem, removing ...
	I0927 01:41:52.545624   68676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem
	I0927 01:41:52.545655   68676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem (1123 bytes)
	I0927 01:41:52.545732   68676 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem, removing ...
	I0927 01:41:52.545742   68676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem
	I0927 01:41:52.545768   68676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem (1675 bytes)
	I0927 01:41:52.545834   68676 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem org=jenkins.no-preload-521072 san=[127.0.0.1 192.168.50.246 localhost minikube no-preload-521072]
	I0927 01:41:52.738375   68676 provision.go:177] copyRemoteCerts
	I0927 01:41:52.738434   68676 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 01:41:52.738459   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:41:52.741146   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.741439   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:52.741456   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.741630   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:41:52.741828   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:52.741961   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:41:52.742086   68676 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/no-preload-521072/id_rsa Username:docker}
	I0927 01:41:52.830330   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0927 01:41:52.854664   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0927 01:41:52.879246   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0927 01:41:52.902734   68676 provision.go:87] duration metric: took 364.385528ms to configureAuth
	I0927 01:41:52.902782   68676 buildroot.go:189] setting minikube options for container-runtime
	I0927 01:41:52.903017   68676 config.go:182] Loaded profile config "no-preload-521072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 01:41:52.903109   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:41:52.906143   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.906495   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:52.906526   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.906699   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:41:52.906917   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:52.907086   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:52.907211   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:41:52.907426   68676 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:52.907625   68676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.246 22 <nil> <nil>}
	I0927 01:41:52.907640   68676 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 01:41:53.162936   68676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 01:41:53.162960   68676 machine.go:96] duration metric: took 1.026891152s to provisionDockerMachine
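	The provisioning step above drops a CRIO_MINIKUBE_OPTIONS file under /etc/sysconfig so CRI-O treats the in-cluster service CIDR as an insecure registry, then restarts the service. A small local sketch of the same file write; running it on a host instead of over SSH is purely illustrative.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// configureCRIO writes the drop-in seen in the log and restarts crio.
func configureCRIO(cidr string) error {
	content := fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n", cidr)
	if err := os.MkdirAll("/etc/sysconfig", 0o755); err != nil {
		return err
	}
	if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(content), 0o644); err != nil {
		return err
	}
	return exec.Command("systemctl", "restart", "crio").Run()
}

func main() {
	if err := configureCRIO("10.96.0.0/12"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```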
	I0927 01:41:53.162971   68676 start.go:293] postStartSetup for "no-preload-521072" (driver="kvm2")
	I0927 01:41:53.162980   68676 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 01:41:53.162994   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	I0927 01:41:53.163325   68676 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 01:41:53.163360   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:41:53.166007   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:53.166478   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:53.166516   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:53.166726   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:41:53.166919   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:53.167103   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:41:53.167253   68676 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/no-preload-521072/id_rsa Username:docker}
	I0927 01:41:53.254620   68676 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 01:41:53.259139   68676 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 01:41:53.259160   68676 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/addons for local assets ...
	I0927 01:41:53.259236   68676 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/files for local assets ...
	I0927 01:41:53.259341   68676 filesync.go:149] local asset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> 221382.pem in /etc/ssl/certs
	I0927 01:41:53.259465   68676 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 01:41:53.269711   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /etc/ssl/certs/221382.pem (1708 bytes)
	I0927 01:41:53.294563   68676 start.go:296] duration metric: took 131.58032ms for postStartSetup
	I0927 01:41:53.294602   68676 fix.go:56] duration metric: took 19.766156729s for fixHost
	I0927 01:41:53.294626   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:41:53.297597   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:53.297897   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:53.297928   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:53.298092   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:41:53.298275   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:53.298460   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:53.298632   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:41:53.298821   68676 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:53.298997   68676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.246 22 <nil> <nil>}
	I0927 01:41:53.299010   68676 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 01:41:53.416459   68676 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727401313.370238189
	
	I0927 01:41:53.416488   68676 fix.go:216] guest clock: 1727401313.370238189
	I0927 01:41:53.416497   68676 fix.go:229] Guest: 2024-09-27 01:41:53.370238189 +0000 UTC Remote: 2024-09-27 01:41:53.294607439 +0000 UTC m=+358.400757430 (delta=75.63075ms)
	I0927 01:41:53.416521   68676 fix.go:200] guest clock delta is within tolerance: 75.63075ms
	I0927 01:41:53.416542   68676 start.go:83] releasing machines lock for "no-preload-521072", held for 19.888127741s
	I0927 01:41:53.416581   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	I0927 01:41:53.416835   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetIP
	I0927 01:41:53.419800   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:53.420124   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:53.420153   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:53.420309   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	I0927 01:41:53.420730   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	I0927 01:41:53.420905   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	I0927 01:41:53.420988   68676 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 01:41:53.421036   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:41:53.421126   68676 ssh_runner.go:195] Run: cat /version.json
	I0927 01:41:53.421148   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:41:53.423529   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:53.423882   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:53.423916   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:53.423937   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:53.424023   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:41:53.424180   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:53.424308   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:41:53.424365   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:53.424412   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:53.424464   68676 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/no-preload-521072/id_rsa Username:docker}
	I0927 01:41:53.424567   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:41:53.424701   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:53.424838   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:41:53.424990   68676 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/no-preload-521072/id_rsa Username:docker}
	I0927 01:41:53.527586   68676 ssh_runner.go:195] Run: systemctl --version
	I0927 01:41:53.533685   68676 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 01:41:53.680850   68676 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 01:41:53.686769   68676 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 01:41:53.686831   68676 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 01:41:53.702686   68676 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0927 01:41:53.702709   68676 start.go:495] detecting cgroup driver to use...
	I0927 01:41:53.702787   68676 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 01:41:53.720756   68676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 01:41:53.736843   68676 docker.go:217] disabling cri-docker service (if available) ...
	I0927 01:41:53.736920   68676 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 01:41:53.752063   68676 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 01:41:53.768140   68676 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 01:41:53.890040   68676 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 01:41:54.044033   68676 docker.go:233] disabling docker service ...
	I0927 01:41:54.044100   68676 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 01:41:54.060061   68676 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 01:41:54.073201   68676 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 01:41:54.225559   68676 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 01:41:54.367269   68676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 01:41:54.381517   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 01:41:54.401099   68676 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0927 01:41:54.401164   68676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:54.412620   68676 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 01:41:54.412687   68676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:54.425942   68676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:54.437451   68676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:54.449115   68676 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 01:41:54.460383   68676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:54.471393   68676 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:54.489649   68676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:54.500699   68676 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 01:41:54.511012   68676 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0927 01:41:54.511061   68676 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0927 01:41:54.524738   68676 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 01:41:54.535353   68676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:41:54.672416   68676 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0927 01:41:54.763423   68676 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 01:41:54.763506   68676 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 01:41:54.768758   68676 start.go:563] Will wait 60s for crictl version
	I0927 01:41:54.768823   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:41:54.772980   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 01:41:54.814375   68676 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0927 01:41:54.814460   68676 ssh_runner.go:195] Run: crio --version
	I0927 01:41:54.844002   68676 ssh_runner.go:195] Run: crio --version
	I0927 01:41:54.876692   68676 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0927 01:41:54.877765   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetIP
	I0927 01:41:54.880320   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:54.880817   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:54.880852   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:54.881008   68676 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0927 01:41:54.885225   68676 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 01:41:54.897661   68676 kubeadm.go:883] updating cluster {Name:no-preload-521072 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-521072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 01:41:54.897768   68676 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 01:41:54.897810   68676 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 01:41:52.542326   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:54.543472   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:52.459589   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:52.960231   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:53.459448   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:53.960120   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:54.460016   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:54.959681   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:55.459321   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:55.959819   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:56.459221   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:56.959296   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:54.491390   69534 pod_ready.go:103] pod "etcd-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:56.997932   69534 pod_ready.go:103] pod "etcd-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:54.937979   68676 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0927 01:41:54.938000   68676 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0927 01:41:54.938055   68676 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0927 01:41:54.938088   68676 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:41:54.938103   68676 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0927 01:41:54.938124   68676 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0927 01:41:54.938101   68676 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0927 01:41:54.938180   68676 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0927 01:41:54.938069   68676 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0927 01:41:54.938088   68676 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0927 01:41:54.939611   68676 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0927 01:41:54.939853   68676 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:41:54.939867   68676 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0927 01:41:54.939872   68676 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0927 01:41:54.939875   68676 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0927 01:41:54.939868   68676 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0927 01:41:54.939932   68676 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0927 01:41:54.939954   68676 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0927 01:41:55.100149   68676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0927 01:41:55.104432   68676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0927 01:41:55.122220   68676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0927 01:41:55.146745   68676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0927 01:41:55.148808   68676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0927 01:41:55.159749   68676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0927 01:41:55.194662   68676 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0927 01:41:55.194710   68676 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0927 01:41:55.194764   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:41:55.218262   68676 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0927 01:41:55.218302   68676 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0927 01:41:55.218348   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:41:55.275530   68676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0927 01:41:55.339428   68676 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0927 01:41:55.339476   68676 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0927 01:41:55.339488   68676 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0927 01:41:55.339526   68676 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0927 01:41:55.339554   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:41:55.339558   68676 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0927 01:41:55.339569   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:41:55.339573   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0927 01:41:55.339584   68676 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0927 01:41:55.339619   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:41:55.339625   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0927 01:41:55.339689   68676 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0927 01:41:55.339733   68676 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0927 01:41:55.339772   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:41:55.392986   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0927 01:41:55.393033   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0927 01:41:55.403596   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0927 01:41:55.403658   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0927 01:41:55.403601   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0927 01:41:55.404180   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0927 01:41:55.528983   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0927 01:41:55.529008   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0927 01:41:55.529013   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0927 01:41:55.556122   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0927 01:41:55.556146   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0927 01:41:55.559222   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0927 01:41:55.668914   68676 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0927 01:41:55.669041   68676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0927 01:41:55.671951   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0927 01:41:55.672026   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0927 01:41:55.675810   68676 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0927 01:41:55.675854   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0927 01:41:55.675883   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0927 01:41:55.675910   68676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0927 01:41:55.687199   68676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0927 01:41:55.687234   68676 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0927 01:41:55.687294   68676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0927 01:41:55.766777   68676 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0927 01:41:55.766775   68676 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0927 01:41:55.766894   68676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0927 01:41:55.766901   68676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0927 01:41:55.776811   68676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0927 01:41:55.776824   68676 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0927 01:41:55.776933   68676 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0927 01:41:55.777033   68676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0927 01:41:55.776938   68676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0927 01:41:56.125882   68676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:41:57.825382   68676 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1: (2.048325373s)
	I0927 01:41:57.825460   68676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0927 01:41:57.825396   68676 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: (2.048309349s)
	I0927 01:41:57.825483   68676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0927 01:41:57.825401   68676 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.699485021s)
	I0927 01:41:57.825517   68676 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0927 01:41:57.825520   68676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.138185505s)
	I0927 01:41:57.825540   68676 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0927 01:41:57.825548   68676 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:41:57.825411   68676 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.058505151s)
	I0927 01:41:57.825566   68676 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0927 01:41:57.825573   68676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0927 01:41:57.825414   68676 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.058497946s)
	I0927 01:41:57.825584   68676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0927 01:41:57.825596   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:41:57.825613   68676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0927 01:41:59.788391   68676 ssh_runner.go:235] Completed: which crictl: (1.962775321s)
	I0927 01:41:59.788412   68676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.962779963s)
	I0927 01:41:59.788429   68676 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0927 01:41:59.788457   68676 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0927 01:41:59.788462   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:41:59.788499   68676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0927 01:41:57.043267   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:59.542589   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:57.459172   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:57.960231   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:58.459323   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:58.960219   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:59.459916   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:59.959858   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:00.460249   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:00.959246   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:01.459839   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:01.959224   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:59.490443   69534 pod_ready.go:103] pod "etcd-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:59.992727   69534 pod_ready.go:93] pod "etcd-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"True"
	I0927 01:41:59.992753   69534 pod_ready.go:82] duration metric: took 7.508057707s for pod "etcd-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:59.992766   69534 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:59.998326   69534 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"True"
	I0927 01:41:59.998357   69534 pod_ready.go:82] duration metric: took 5.584215ms for pod "kube-apiserver-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:59.998372   69534 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:00.003176   69534 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"True"
	I0927 01:42:00.003197   69534 pod_ready.go:82] duration metric: took 4.816939ms for pod "kube-controller-manager-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:00.003209   69534 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-xm2p8" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:00.009089   69534 pod_ready.go:93] pod "kube-proxy-xm2p8" in "kube-system" namespace has status "Ready":"True"
	I0927 01:42:00.009110   69534 pod_ready.go:82] duration metric: took 5.893939ms for pod "kube-proxy-xm2p8" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:00.009119   69534 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:00.014172   69534 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"True"
	I0927 01:42:00.014197   69534 pod_ready.go:82] duration metric: took 5.072107ms for pod "kube-scheduler-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:00.014209   69534 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:02.021372   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:01.758278   68676 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.969794291s)
	I0927 01:42:01.758369   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:42:01.758392   68676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.969869427s)
	I0927 01:42:01.758415   68676 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0927 01:42:01.758445   68676 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0927 01:42:01.758494   68676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0927 01:42:01.796910   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:42:03.934871   68676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (2.176354046s)
	I0927 01:42:03.934903   68676 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0927 01:42:03.934921   68676 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0927 01:42:03.934927   68676 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.137986898s)
	I0927 01:42:03.934972   68676 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0927 01:42:03.934994   68676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0927 01:42:03.935050   68676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0927 01:42:03.939942   68676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0927 01:42:02.042617   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:04.042848   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:02.460232   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:02.959635   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:03.459610   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:03.959412   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:04.459857   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:04.959495   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:05.459972   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:05.959931   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:06.459460   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:06.959627   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:04.021759   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:06.521921   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:07.308972   68676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.373952677s)
	I0927 01:42:07.308999   68676 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0927 01:42:07.309024   68676 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0927 01:42:07.309070   68676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0927 01:42:09.378517   68676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.06942074s)
	I0927 01:42:09.378550   68676 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0927 01:42:09.378579   68676 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0927 01:42:09.378629   68676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0927 01:42:06.546731   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:09.044481   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:07.459395   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:07.959574   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:08.460234   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:08.959281   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:09.459240   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:09.959429   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:10.459865   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:10.959431   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:11.459459   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:11.959447   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:09.020456   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:11.021689   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:10.030049   68676 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0927 01:42:10.030100   68676 cache_images.go:123] Successfully loaded all cached images
	I0927 01:42:10.030106   68676 cache_images.go:92] duration metric: took 15.09209404s to LoadCachedImages
	I0927 01:42:10.030118   68676 kubeadm.go:934] updating node { 192.168.50.246 8443 v1.31.1 crio true true} ...
	I0927 01:42:10.030211   68676 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-521072 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.246
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-521072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 01:42:10.030273   68676 ssh_runner.go:195] Run: crio config
	I0927 01:42:10.078318   68676 cni.go:84] Creating CNI manager for ""
	I0927 01:42:10.078342   68676 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:42:10.078351   68676 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 01:42:10.078370   68676 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.246 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-521072 NodeName:no-preload-521072 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.246"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.246 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0927 01:42:10.078506   68676 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.246
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-521072"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.246
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.246"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0927 01:42:10.078580   68676 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 01:42:10.089137   68676 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 01:42:10.089212   68676 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0927 01:42:10.098310   68676 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0927 01:42:10.116172   68676 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 01:42:10.134642   68676 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0927 01:42:10.152442   68676 ssh_runner.go:195] Run: grep 192.168.50.246	control-plane.minikube.internal$ /etc/hosts
	I0927 01:42:10.156477   68676 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.246	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 01:42:10.169007   68676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:42:10.288382   68676 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 01:42:10.306047   68676 certs.go:68] Setting up /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/no-preload-521072 for IP: 192.168.50.246
	I0927 01:42:10.306077   68676 certs.go:194] generating shared ca certs ...
	I0927 01:42:10.306096   68676 certs.go:226] acquiring lock for ca certs: {Name:mkdfc5b7e93f77f5ae72cc653545624244421aa5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:42:10.306276   68676 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key
	I0927 01:42:10.306331   68676 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key
	I0927 01:42:10.306350   68676 certs.go:256] generating profile certs ...
	I0927 01:42:10.306453   68676 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/no-preload-521072/client.key
	I0927 01:42:10.306553   68676 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/no-preload-521072/apiserver.key.735097eb
	I0927 01:42:10.306613   68676 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/no-preload-521072/proxy-client.key
	I0927 01:42:10.306761   68676 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem (1338 bytes)
	W0927 01:42:10.306797   68676 certs.go:480] ignoring /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138_empty.pem, impossibly tiny 0 bytes
	I0927 01:42:10.306808   68676 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem (1671 bytes)
	I0927 01:42:10.306833   68676 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem (1078 bytes)
	I0927 01:42:10.306854   68676 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem (1123 bytes)
	I0927 01:42:10.306878   68676 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem (1675 bytes)
	I0927 01:42:10.306916   68676 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem (1708 bytes)
	I0927 01:42:10.307598   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 01:42:10.344570   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0927 01:42:10.386834   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 01:42:10.432022   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0927 01:42:10.462348   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/no-preload-521072/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0927 01:42:10.490015   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/no-preload-521072/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0927 01:42:10.518144   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/no-preload-521072/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 01:42:10.545290   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/no-preload-521072/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0927 01:42:10.572460   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 01:42:10.597526   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem --> /usr/share/ca-certificates/22138.pem (1338 bytes)
	I0927 01:42:10.622287   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /usr/share/ca-certificates/221382.pem (1708 bytes)
	I0927 01:42:10.646020   68676 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 01:42:10.662972   68676 ssh_runner.go:195] Run: openssl version
	I0927 01:42:10.668844   68676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22138.pem && ln -fs /usr/share/ca-certificates/22138.pem /etc/ssl/certs/22138.pem"
	I0927 01:42:10.680020   68676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22138.pem
	I0927 01:42:10.684620   68676 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 00:32 /usr/share/ca-certificates/22138.pem
	I0927 01:42:10.684678   68676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22138.pem
	I0927 01:42:10.690694   68676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22138.pem /etc/ssl/certs/51391683.0"
	I0927 01:42:10.702115   68676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221382.pem && ln -fs /usr/share/ca-certificates/221382.pem /etc/ssl/certs/221382.pem"
	I0927 01:42:10.713424   68676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221382.pem
	I0927 01:42:10.717918   68676 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 00:32 /usr/share/ca-certificates/221382.pem
	I0927 01:42:10.717971   68676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221382.pem
	I0927 01:42:10.723601   68676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221382.pem /etc/ssl/certs/3ec20f2e.0"
	I0927 01:42:10.734870   68676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 01:42:10.747370   68676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:42:10.752016   68676 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:16 /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:42:10.752072   68676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:42:10.757964   68676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 01:42:10.769560   68676 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 01:42:10.774457   68676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0927 01:42:10.780719   68676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0927 01:42:10.786653   68676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0927 01:42:10.792671   68676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0927 01:42:10.798674   68676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0927 01:42:10.804910   68676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0927 01:42:10.811007   68676 kubeadm.go:392] StartCluster: {Name:no-preload-521072 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-521072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 01:42:10.811114   68676 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0927 01:42:10.811178   68676 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 01:42:10.851017   68676 cri.go:89] found id: ""
	I0927 01:42:10.851084   68676 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0927 01:42:10.864997   68676 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0927 01:42:10.865016   68676 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0927 01:42:10.865062   68676 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0927 01:42:10.877088   68676 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0927 01:42:10.878133   68676 kubeconfig.go:125] found "no-preload-521072" server: "https://192.168.50.246:8443"
	I0927 01:42:10.880637   68676 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0927 01:42:10.893554   68676 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.246
	I0927 01:42:10.893578   68676 kubeadm.go:1160] stopping kube-system containers ...
	I0927 01:42:10.893592   68676 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0927 01:42:10.893629   68676 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 01:42:10.935734   68676 cri.go:89] found id: ""
	I0927 01:42:10.935794   68676 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0927 01:42:10.954141   68676 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 01:42:10.965345   68676 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 01:42:10.965363   68676 kubeadm.go:157] found existing configuration files:
	
	I0927 01:42:10.965413   68676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 01:42:10.975561   68676 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 01:42:10.975628   68676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 01:42:10.985747   68676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 01:42:10.995026   68676 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 01:42:10.995089   68676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 01:42:11.006650   68676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 01:42:11.016964   68676 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 01:42:11.017034   68676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 01:42:11.028756   68676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 01:42:11.039002   68676 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 01:42:11.039072   68676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 01:42:11.050382   68676 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 01:42:11.060839   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:42:11.177447   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:42:12.481118   68676 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.303633907s)
	I0927 01:42:12.481149   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:42:12.706344   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:42:12.774938   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
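	The five commands above re-run individual "kubeadm init" phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated /var/tmp/minikube/kubeadm.yaml as part of restartPrimaryControlPlane. A rough sketch of driving the same sequence from Go, using the bash/env/sudo wrapper visible in the log (paths and phase names are taken from the log lines above; error handling is simplified):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // runInitPhase re-runs one "kubeadm init phase" command through the same
    // bash/env/sudo wrapper shown in the log.
    func runInitPhase(phaseArgs string) error {
        cmd := fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, phaseArgs)
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        fmt.Printf("%s", out)
        return err
    }

    func main() {
        // Same order as the log: certs, kubeconfig, kubelet-start,
        // control-plane, etcd.
        for _, phase := range []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"} {
            if err := runInitPhase(phase); err != nil {
                fmt.Println("phase failed:", phase, err)
                return
            }
        }
    }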
	I0927 01:42:12.866467   68676 api_server.go:52] waiting for apiserver process to appear ...
	I0927 01:42:12.866552   68676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:13.366860   68676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:13.866951   68676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:13.882411   68676 api_server.go:72] duration metric: took 1.015943274s to wait for apiserver process to appear ...
	I0927 01:42:13.882435   68676 api_server.go:88] waiting for apiserver healthz status ...
	I0927 01:42:13.882457   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:42:13.882963   68676 api_server.go:269] stopped: https://192.168.50.246:8443/healthz: Get "https://192.168.50.246:8443/healthz": dial tcp 192.168.50.246:8443: connect: connection refused
	I0927 01:42:14.382489   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:42:11.543818   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:14.042536   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:12.459771   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:12.959727   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:13.459428   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:13.959255   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:14.460003   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:14.959853   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:15.460237   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:15.959974   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:16.459420   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:16.959321   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:13.527793   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:16.023080   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:17.124839   68676 api_server.go:279] https://192.168.50.246:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0927 01:42:17.124867   68676 api_server.go:103] status: https://192.168.50.246:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0927 01:42:17.124885   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:42:17.174869   68676 api_server.go:279] https://192.168.50.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:42:17.174905   68676 api_server.go:103] status: https://192.168.50.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:42:17.383128   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:42:17.389594   68676 api_server.go:279] https://192.168.50.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:42:17.389629   68676 api_server.go:103] status: https://192.168.50.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:42:17.883197   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:42:17.888706   68676 api_server.go:279] https://192.168.50.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:42:17.888734   68676 api_server.go:103] status: https://192.168.50.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:42:18.382982   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:42:18.387847   68676 api_server.go:279] https://192.168.50.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:42:18.387877   68676 api_server.go:103] status: https://192.168.50.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:42:18.882844   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:42:18.887144   68676 api_server.go:279] https://192.168.50.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:42:18.887178   68676 api_server.go:103] status: https://192.168.50.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:42:19.382711   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:42:19.388007   68676 api_server.go:279] https://192.168.50.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:42:19.388037   68676 api_server.go:103] status: https://192.168.50.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:42:19.882613   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:42:19.886781   68676 api_server.go:279] https://192.168.50.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:42:19.886801   68676 api_server.go:103] status: https://192.168.50.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:42:20.382907   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:42:20.387083   68676 api_server.go:279] https://192.168.50.246:8443/healthz returned 200:
	ok
	I0927 01:42:20.393697   68676 api_server.go:141] control plane version: v1.31.1
	I0927 01:42:20.393725   68676 api_server.go:131] duration metric: took 6.511280572s to wait for apiserver health ...
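	The healthz probes above show the expected startup progression: connection refused, then 403 for the unauthenticated probe, then 500 while poststarthooks finish, and finally a plain 200 "ok" roughly 6.5s after the restart. A rough sketch of such a retry loop (illustrative only; TLS verification is skipped here for brevity, whereas the real client presents cluster certificates, and the timeout value is an assumption):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls url until it returns HTTP 200 or timeout elapses,
    // sleeping ~500ms between attempts like the log above.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Shortcut for the sketch; a real client would trust the
                // cluster CA and present client certificates instead.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // apiserver reports "ok"
                }
                fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.50.246:8443/healthz", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }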
	I0927 01:42:20.393735   68676 cni.go:84] Creating CNI manager for ""
	I0927 01:42:20.393743   68676 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:42:20.395270   68676 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0927 01:42:16.543525   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:19.041726   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:20.396770   68676 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0927 01:42:20.407891   68676 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
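	The bridge CNI step above creates /etc/cni/net.d and copies a 496-byte 1-k8s.conflist into it; the file's contents are not shown in the log. The sketch below writes a typical bridge-plugin conflist to that path purely for illustration (the JSON values are assumptions, not minikube's actual file):

    package main

    import (
        "fmt"
        "os"
    )

    // A typical bridge CNI conflist; the name and subnet are illustrative
    // and not taken from the log above.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
            fmt.Println(err)
            return
        }
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
            fmt.Println(err)
        }
    }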
	I0927 01:42:20.427815   68676 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 01:42:20.436940   68676 system_pods.go:59] 8 kube-system pods found
	I0927 01:42:20.436980   68676 system_pods.go:61] "coredns-7c65d6cfc9-7q54t" [f320e945-a1d6-4109-a0cc-5bd4e3c1bfba] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0927 01:42:20.436989   68676 system_pods.go:61] "etcd-no-preload-521072" [6c63ce89-47bf-4d67-b5db-273a046c4b51] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0927 01:42:20.436997   68676 system_pods.go:61] "kube-apiserver-no-preload-521072" [e4804d4b-0532-46c7-8579-a829a6c5254c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0927 01:42:20.437005   68676 system_pods.go:61] "kube-controller-manager-no-preload-521072" [5029e53b-ae24-41fb-aa58-14faf0440adb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0927 01:42:20.437012   68676 system_pods.go:61] "kube-proxy-wkcb8" [ea79339c-b2f0-4cb8-ab57-4f13f689f504] Running
	I0927 01:42:20.437020   68676 system_pods.go:61] "kube-scheduler-no-preload-521072" [b70fd9f0-c131-4c13-b53f-46c650a5dcf8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0927 01:42:20.437032   68676 system_pods.go:61] "metrics-server-6867b74b74-cc9pp" [a840ca52-d2b8-47a5-b379-30504658e0d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 01:42:20.437038   68676 system_pods.go:61] "storage-provisioner" [b4595dc3-c439-4615-95b7-2009476c779c] Running
	I0927 01:42:20.437049   68676 system_pods.go:74] duration metric: took 9.213874ms to wait for pod list to return data ...
	I0927 01:42:20.437057   68676 node_conditions.go:102] verifying NodePressure condition ...
	I0927 01:42:20.440323   68676 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 01:42:20.440345   68676 node_conditions.go:123] node cpu capacity is 2
	I0927 01:42:20.440356   68676 node_conditions.go:105] duration metric: took 3.294768ms to run NodePressure ...
	I0927 01:42:20.440372   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:42:20.710186   68676 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0927 01:42:20.713940   68676 kubeadm.go:739] kubelet initialised
	I0927 01:42:20.713958   68676 kubeadm.go:740] duration metric: took 3.749241ms waiting for restarted kubelet to initialise ...
	I0927 01:42:20.713965   68676 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:42:20.718807   68676 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-7q54t" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:20.722955   68676 pod_ready.go:98] node "no-preload-521072" hosting pod "coredns-7c65d6cfc9-7q54t" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:20.722976   68676 pod_ready.go:82] duration metric: took 4.147896ms for pod "coredns-7c65d6cfc9-7q54t" in "kube-system" namespace to be "Ready" ...
	E0927 01:42:20.722984   68676 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-521072" hosting pod "coredns-7c65d6cfc9-7q54t" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:20.722991   68676 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:20.727569   68676 pod_ready.go:98] node "no-preload-521072" hosting pod "etcd-no-preload-521072" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:20.727596   68676 pod_ready.go:82] duration metric: took 4.598426ms for pod "etcd-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	E0927 01:42:20.727604   68676 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-521072" hosting pod "etcd-no-preload-521072" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:20.727611   68676 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:20.731845   68676 pod_ready.go:98] node "no-preload-521072" hosting pod "kube-apiserver-no-preload-521072" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:20.731871   68676 pod_ready.go:82] duration metric: took 4.25326ms for pod "kube-apiserver-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	E0927 01:42:20.731881   68676 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-521072" hosting pod "kube-apiserver-no-preload-521072" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:20.731889   68676 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:20.830881   68676 pod_ready.go:98] node "no-preload-521072" hosting pod "kube-controller-manager-no-preload-521072" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:20.830909   68676 pod_ready.go:82] duration metric: took 99.009569ms for pod "kube-controller-manager-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	E0927 01:42:20.830918   68676 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-521072" hosting pod "kube-controller-manager-no-preload-521072" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:20.830923   68676 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-wkcb8" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:21.232434   68676 pod_ready.go:98] node "no-preload-521072" hosting pod "kube-proxy-wkcb8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:21.232463   68676 pod_ready.go:82] duration metric: took 401.530413ms for pod "kube-proxy-wkcb8" in "kube-system" namespace to be "Ready" ...
	E0927 01:42:21.232473   68676 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-521072" hosting pod "kube-proxy-wkcb8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:21.232485   68676 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:21.630791   68676 pod_ready.go:98] node "no-preload-521072" hosting pod "kube-scheduler-no-preload-521072" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:21.630818   68676 pod_ready.go:82] duration metric: took 398.325039ms for pod "kube-scheduler-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	E0927 01:42:21.630829   68676 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-521072" hosting pod "kube-scheduler-no-preload-521072" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:21.630836   68676 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:22.032173   68676 pod_ready.go:98] node "no-preload-521072" hosting pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:22.032200   68676 pod_ready.go:82] duration metric: took 401.353533ms for pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace to be "Ready" ...
	E0927 01:42:22.032208   68676 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-521072" hosting pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:22.032215   68676 pod_ready.go:39] duration metric: took 1.318241972s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
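	The pod_ready block above polls each system-critical pod and skips it while the node itself still reports Ready=False. A small client-go sketch of the underlying readiness check (illustrative only; the kubeconfig path and pod name are copied from the log, and the 4-minute timeout mirrors the "waiting up to 4m0s" lines):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady returns true when the pod's PodReady condition is True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19711-14935/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7c65d6cfc9-7q54t", metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod to be Ready")
    }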
	I0927 01:42:22.032233   68676 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0927 01:42:22.046872   68676 ops.go:34] apiserver oom_adj: -16
	I0927 01:42:22.046898   68676 kubeadm.go:597] duration metric: took 11.181875532s to restartPrimaryControlPlane
	I0927 01:42:22.046908   68676 kubeadm.go:394] duration metric: took 11.235909243s to StartCluster
	I0927 01:42:22.046923   68676 settings.go:142] acquiring lock: {Name:mk5dca3ab86dd3a71947d9d84c3d32131258c6f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:42:22.046984   68676 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 01:42:22.048611   68676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/kubeconfig: {Name:mke01ed683bdb96463571316956510763878395f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:42:22.048864   68676 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 01:42:22.048932   68676 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0927 01:42:22.049029   68676 addons.go:69] Setting storage-provisioner=true in profile "no-preload-521072"
	I0927 01:42:22.049050   68676 addons.go:234] Setting addon storage-provisioner=true in "no-preload-521072"
	W0927 01:42:22.049060   68676 addons.go:243] addon storage-provisioner should already be in state true
	I0927 01:42:22.049066   68676 addons.go:69] Setting default-storageclass=true in profile "no-preload-521072"
	I0927 01:42:22.049088   68676 host.go:66] Checking if "no-preload-521072" exists ...
	I0927 01:42:22.049092   68676 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-521072"
	I0927 01:42:22.049096   68676 addons.go:69] Setting metrics-server=true in profile "no-preload-521072"
	I0927 01:42:22.049117   68676 addons.go:234] Setting addon metrics-server=true in "no-preload-521072"
	I0927 01:42:22.049123   68676 config.go:182] Loaded profile config "no-preload-521072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	W0927 01:42:22.049134   68676 addons.go:243] addon metrics-server should already be in state true
	I0927 01:42:22.049167   68676 host.go:66] Checking if "no-preload-521072" exists ...
	I0927 01:42:22.049423   68676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:42:22.049455   68676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:42:22.049478   68676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:42:22.049507   68676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:42:22.049535   68676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:42:22.049555   68676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:42:22.050564   68676 out.go:177] * Verifying Kubernetes components...
	I0927 01:42:22.051717   68676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:42:22.088020   68676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34035
	I0927 01:42:22.088454   68676 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:42:22.088964   68676 main.go:141] libmachine: Using API Version  1
	I0927 01:42:22.088985   68676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:42:22.089333   68676 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:42:22.089793   68676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:42:22.089825   68676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:42:22.091735   68676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40053
	I0927 01:42:22.091853   68676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45581
	I0927 01:42:22.092236   68676 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:42:22.092295   68676 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:42:22.092659   68676 main.go:141] libmachine: Using API Version  1
	I0927 01:42:22.092677   68676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:42:22.092817   68676 main.go:141] libmachine: Using API Version  1
	I0927 01:42:22.092840   68676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:42:22.093170   68676 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:42:22.093344   68676 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:42:22.093387   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetState
	I0927 01:42:22.093922   68676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:42:22.093949   68676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:42:22.097310   68676 addons.go:234] Setting addon default-storageclass=true in "no-preload-521072"
	W0927 01:42:22.097333   68676 addons.go:243] addon default-storageclass should already be in state true
	I0927 01:42:22.097368   68676 host.go:66] Checking if "no-preload-521072" exists ...
	I0927 01:42:22.097705   68676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:42:22.097747   68676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:42:22.110628   68676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34585
	I0927 01:42:22.111053   68676 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:42:22.111604   68676 main.go:141] libmachine: Using API Version  1
	I0927 01:42:22.111629   68676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:42:22.112113   68676 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:42:22.112329   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetState
	I0927 01:42:22.113354   68676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43947
	I0927 01:42:22.114009   68676 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:42:22.114749   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	I0927 01:42:22.115666   68676 main.go:141] libmachine: Using API Version  1
	I0927 01:42:22.115690   68676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:42:22.116105   68676 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:42:22.116374   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetState
	I0927 01:42:22.116862   68676 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0927 01:42:22.118124   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	I0927 01:42:22.118135   68676 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0927 01:42:22.118162   68676 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0927 01:42:22.118180   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:42:22.119866   68676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38775
	I0927 01:42:22.120317   68676 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:42:22.120908   68676 main.go:141] libmachine: Using API Version  1
	I0927 01:42:22.120931   68676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:42:22.121113   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:42:22.121319   68676 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:42:22.121556   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:42:22.121576   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:42:22.122025   68676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:42:22.122051   68676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:42:22.122280   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:42:22.122487   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:42:22.122652   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:42:22.122781   68676 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/no-preload-521072/id_rsa Username:docker}
	I0927 01:42:22.126076   68676 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:42:17.459443   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:17.959426   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:18.460250   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:18.959989   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:19.459981   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:19.959969   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:20.459758   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:20.959440   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:21.460115   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:21.959238   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:18.521751   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:21.020226   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:23.021393   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:22.127430   68676 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 01:42:22.127446   68676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0927 01:42:22.127460   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:42:22.130498   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:42:22.131040   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:42:22.131061   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:42:22.131357   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:42:22.131544   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:42:22.131670   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:42:22.131997   68676 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/no-preload-521072/id_rsa Username:docker}
	I0927 01:42:22.138657   68676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44875
	I0927 01:42:22.138987   68676 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:42:22.139420   68676 main.go:141] libmachine: Using API Version  1
	I0927 01:42:22.139438   68676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:42:22.139824   68676 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:42:22.139998   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetState
	I0927 01:42:22.141454   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	I0927 01:42:22.141664   68676 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0927 01:42:22.141673   68676 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0927 01:42:22.141683   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:42:22.144221   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:42:22.144651   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:42:22.144670   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:42:22.144765   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:42:22.144931   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:42:22.145071   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:42:22.145208   68676 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/no-preload-521072/id_rsa Username:docker}
	I0927 01:42:22.244289   68676 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 01:42:22.261345   68676 node_ready.go:35] waiting up to 6m0s for node "no-preload-521072" to be "Ready" ...
	I0927 01:42:22.365923   68676 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0927 01:42:22.365953   68676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0927 01:42:22.387392   68676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 01:42:22.389353   68676 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0927 01:42:22.389379   68676 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0927 01:42:22.406994   68676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0927 01:42:22.491559   68676 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 01:42:22.491581   68676 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0927 01:42:22.586476   68676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 01:42:23.660676   68676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.273241029s)
	I0927 01:42:23.660733   68676 main.go:141] libmachine: Making call to close driver server
	I0927 01:42:23.660750   68676 main.go:141] libmachine: (no-preload-521072) Calling .Close
	I0927 01:42:23.660732   68676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.253706672s)
	I0927 01:42:23.660831   68676 main.go:141] libmachine: Making call to close driver server
	I0927 01:42:23.660841   68676 main.go:141] libmachine: (no-preload-521072) Calling .Close
	I0927 01:42:23.660851   68676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.074315804s)
	I0927 01:42:23.661081   68676 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:42:23.661098   68676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:42:23.661109   68676 main.go:141] libmachine: Making call to close driver server
	I0927 01:42:23.661108   68676 main.go:141] libmachine: Making call to close driver server
	I0927 01:42:23.661118   68676 main.go:141] libmachine: (no-preload-521072) Calling .Close
	I0927 01:42:23.661153   68676 main.go:141] libmachine: (no-preload-521072) DBG | Closing plugin on server side
	I0927 01:42:23.661205   68676 main.go:141] libmachine: (no-preload-521072) DBG | Closing plugin on server side
	I0927 01:42:23.661161   68676 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:42:23.661223   68676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:42:23.661230   68676 main.go:141] libmachine: Making call to close driver server
	I0927 01:42:23.661238   68676 main.go:141] libmachine: (no-preload-521072) Calling .Close
	I0927 01:42:23.661125   68676 main.go:141] libmachine: (no-preload-521072) Calling .Close
	I0927 01:42:23.661607   68676 main.go:141] libmachine: (no-preload-521072) DBG | Closing plugin on server side
	I0927 01:42:23.661608   68676 main.go:141] libmachine: (no-preload-521072) DBG | Closing plugin on server side
	I0927 01:42:23.661621   68676 main.go:141] libmachine: (no-preload-521072) DBG | Closing plugin on server side
	I0927 01:42:23.661631   68676 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:42:23.661632   68676 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:42:23.661637   68676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:42:23.661641   68676 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:42:23.661645   68676 main.go:141] libmachine: Making call to close driver server
	I0927 01:42:23.661649   68676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:42:23.661650   68676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:42:23.661653   68676 main.go:141] libmachine: (no-preload-521072) Calling .Close
	I0927 01:42:23.661852   68676 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:42:23.661866   68676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:42:23.661874   68676 addons.go:475] Verifying addon metrics-server=true in "no-preload-521072"
	I0927 01:42:23.661917   68676 main.go:141] libmachine: (no-preload-521072) DBG | Closing plugin on server side
	I0927 01:42:23.668484   68676 main.go:141] libmachine: Making call to close driver server
	I0927 01:42:23.668499   68676 main.go:141] libmachine: (no-preload-521072) Calling .Close
	I0927 01:42:23.668711   68676 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:42:23.668726   68676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:42:23.668743   68676 main.go:141] libmachine: (no-preload-521072) DBG | Closing plugin on server side
	I0927 01:42:23.670758   68676 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0927 01:42:23.672072   68676 addons.go:510] duration metric: took 1.62313879s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
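
	[Editor's note] The addon enablement above amounts to staging a handful of manifests under /etc/kubernetes/addons and applying them with the bundled kubectl. A minimal sketch of the equivalent manual command, using only the paths and kubectl binary shown in the log (run on the no-preload-521072 node):

	    # Re-apply the addon manifests minikube staged on the node
	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.31.1/kubectl apply \
	      -f /etc/kubernetes/addons/storage-provisioner.yaml \
	      -f /etc/kubernetes/addons/storageclass.yaml \
	      -f /etc/kubernetes/addons/metrics-apiservice.yaml \
	      -f /etc/kubernetes/addons/metrics-server-deployment.yaml \
	      -f /etc/kubernetes/addons/metrics-server-rbac.yaml \
	      -f /etc/kubernetes/addons/metrics-server-service.yaml
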
	I0927 01:42:24.265426   68676 node_ready.go:53] node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:21.042193   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:23.043831   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:25.546335   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:22.460161   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:22.959177   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:23.459481   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:23.959221   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:23.959322   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:24.004970   69333 cri.go:89] found id: ""
	I0927 01:42:24.004999   69333 logs.go:276] 0 containers: []
	W0927 01:42:24.005010   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:24.005017   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:24.005076   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:24.041880   69333 cri.go:89] found id: ""
	I0927 01:42:24.041908   69333 logs.go:276] 0 containers: []
	W0927 01:42:24.041919   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:24.041926   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:24.041991   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:24.082295   69333 cri.go:89] found id: ""
	I0927 01:42:24.082318   69333 logs.go:276] 0 containers: []
	W0927 01:42:24.082325   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:24.082331   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:24.082385   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:24.119663   69333 cri.go:89] found id: ""
	I0927 01:42:24.119692   69333 logs.go:276] 0 containers: []
	W0927 01:42:24.119707   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:24.119714   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:24.119771   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:24.163893   69333 cri.go:89] found id: ""
	I0927 01:42:24.163920   69333 logs.go:276] 0 containers: []
	W0927 01:42:24.163932   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:24.163940   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:24.163999   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:24.200277   69333 cri.go:89] found id: ""
	I0927 01:42:24.200299   69333 logs.go:276] 0 containers: []
	W0927 01:42:24.200307   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:24.200312   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:24.200365   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:24.235039   69333 cri.go:89] found id: ""
	I0927 01:42:24.235059   69333 logs.go:276] 0 containers: []
	W0927 01:42:24.235066   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:24.235072   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:24.235132   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:24.275160   69333 cri.go:89] found id: ""
	I0927 01:42:24.275181   69333 logs.go:276] 0 containers: []
	W0927 01:42:24.275188   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:24.275196   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:24.275206   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:24.327432   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:24.327465   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:24.341113   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:24.341139   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:24.473741   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:24.473764   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:24.473779   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:24.545888   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:24.545923   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
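
	[Editor's note] The repeated "connection to the server localhost:8443 was refused" errors in this pass indicate no kube-apiserver is running on the node. A quick manual check, sketched from the same pgrep and crictl invocations the harness runs; the curl probe is an added assumption, not something the log itself executes:

	    # Is a kube-apiserver process alive? (same pattern as the log)
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	    # Does CRI-O know about an apiserver container?
	    sudo crictl ps -a --name=kube-apiserver
	    # Assumed extra probe: does anything answer on the apiserver port?
	    curl -k https://localhost:8443/healthz
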
	I0927 01:42:27.086673   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:27.100552   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:27.100623   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:27.136182   69333 cri.go:89] found id: ""
	I0927 01:42:27.136207   69333 logs.go:276] 0 containers: []
	W0927 01:42:27.136215   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:27.136221   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:27.136289   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:27.173258   69333 cri.go:89] found id: ""
	I0927 01:42:27.173285   69333 logs.go:276] 0 containers: []
	W0927 01:42:27.173296   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:27.173303   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:27.173373   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:27.210481   69333 cri.go:89] found id: ""
	I0927 01:42:27.210514   69333 logs.go:276] 0 containers: []
	W0927 01:42:27.210526   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:27.210533   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:27.210586   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:27.245168   69333 cri.go:89] found id: ""
	I0927 01:42:27.245192   69333 logs.go:276] 0 containers: []
	W0927 01:42:27.245200   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:27.245206   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:27.245252   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:27.280494   69333 cri.go:89] found id: ""
	I0927 01:42:27.280522   69333 logs.go:276] 0 containers: []
	W0927 01:42:27.280531   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:27.280538   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:27.280596   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:27.314281   69333 cri.go:89] found id: ""
	I0927 01:42:27.314307   69333 logs.go:276] 0 containers: []
	W0927 01:42:27.314316   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:27.314322   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:27.314392   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:25.521413   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:28.019989   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:26.764721   68676 node_ready.go:53] node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:27.765574   68676 node_ready.go:49] node "no-preload-521072" has status "Ready":"True"
	I0927 01:42:27.765597   68676 node_ready.go:38] duration metric: took 5.504217374s for node "no-preload-521072" to be "Ready" ...
	I0927 01:42:27.765609   68676 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:42:27.772263   68676 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7q54t" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:27.777521   68676 pod_ready.go:93] pod "coredns-7c65d6cfc9-7q54t" in "kube-system" namespace has status "Ready":"True"
	I0927 01:42:27.777544   68676 pod_ready.go:82] duration metric: took 5.252259ms for pod "coredns-7c65d6cfc9-7q54t" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:27.777552   68676 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:27.781511   68676 pod_ready.go:93] pod "etcd-no-preload-521072" in "kube-system" namespace has status "Ready":"True"
	I0927 01:42:27.781528   68676 pod_ready.go:82] duration metric: took 3.970559ms for pod "etcd-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:27.781535   68676 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:27.785556   68676 pod_ready.go:93] pod "kube-apiserver-no-preload-521072" in "kube-system" namespace has status "Ready":"True"
	I0927 01:42:27.785572   68676 pod_ready.go:82] duration metric: took 4.032023ms for pod "kube-apiserver-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:27.785579   68676 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:29.792899   68676 pod_ready.go:103] pod "kube-controller-manager-no-preload-521072" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:28.041166   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:30.041766   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:27.350838   69333 cri.go:89] found id: ""
	I0927 01:42:27.350861   69333 logs.go:276] 0 containers: []
	W0927 01:42:27.350869   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:27.350874   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:27.350921   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:27.390146   69333 cri.go:89] found id: ""
	I0927 01:42:27.390175   69333 logs.go:276] 0 containers: []
	W0927 01:42:27.390186   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:27.390196   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:27.390206   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:27.446727   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:27.446756   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:27.461337   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:27.461365   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:27.533818   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:27.533839   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:27.533874   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:27.614325   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:27.614357   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
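
	[Editor's note] Each "Gathering logs for ..." cycle above reduces to a fixed set of node-side commands; a sketch of the same sequence for collecting them by hand, taken verbatim from the Run: lines in the log (the describe-nodes step fails while the apiserver is down):

	    sudo journalctl -u kubelet -n 400                    # kubelet logs
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400   # kernel warnings/errors
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig           # fails until localhost:8443 answers
	    sudo journalctl -u crio -n 400                        # CRI-O logs
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a   # container status
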
	I0927 01:42:30.161303   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:30.179521   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:30.179590   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:30.221738   69333 cri.go:89] found id: ""
	I0927 01:42:30.221764   69333 logs.go:276] 0 containers: []
	W0927 01:42:30.221772   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:30.221778   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:30.221841   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:30.258316   69333 cri.go:89] found id: ""
	I0927 01:42:30.258349   69333 logs.go:276] 0 containers: []
	W0927 01:42:30.258359   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:30.258369   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:30.258427   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:30.297079   69333 cri.go:89] found id: ""
	I0927 01:42:30.297102   69333 logs.go:276] 0 containers: []
	W0927 01:42:30.297109   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:30.297114   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:30.297159   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:30.337969   69333 cri.go:89] found id: ""
	I0927 01:42:30.337995   69333 logs.go:276] 0 containers: []
	W0927 01:42:30.338007   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:30.338014   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:30.338075   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:30.375946   69333 cri.go:89] found id: ""
	I0927 01:42:30.375975   69333 logs.go:276] 0 containers: []
	W0927 01:42:30.375986   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:30.375993   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:30.376054   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:30.411673   69333 cri.go:89] found id: ""
	I0927 01:42:30.411700   69333 logs.go:276] 0 containers: []
	W0927 01:42:30.411710   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:30.411718   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:30.411765   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:30.447784   69333 cri.go:89] found id: ""
	I0927 01:42:30.447812   69333 logs.go:276] 0 containers: []
	W0927 01:42:30.447822   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:30.447830   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:30.447890   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:30.483164   69333 cri.go:89] found id: ""
	I0927 01:42:30.483191   69333 logs.go:276] 0 containers: []
	W0927 01:42:30.483202   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:30.483213   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:30.483229   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:30.533490   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:30.533522   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:30.547688   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:30.547722   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:30.626696   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:30.626720   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:30.626733   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:30.708767   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:30.708809   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:30.020786   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:32.021243   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:32.292370   68676 pod_ready.go:103] pod "kube-controller-manager-no-preload-521072" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:32.791420   68676 pod_ready.go:93] pod "kube-controller-manager-no-preload-521072" in "kube-system" namespace has status "Ready":"True"
	I0927 01:42:32.791444   68676 pod_ready.go:82] duration metric: took 5.00585892s for pod "kube-controller-manager-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:32.791454   68676 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wkcb8" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:32.796509   68676 pod_ready.go:93] pod "kube-proxy-wkcb8" in "kube-system" namespace has status "Ready":"True"
	I0927 01:42:32.796528   68676 pod_ready.go:82] duration metric: took 5.067798ms for pod "kube-proxy-wkcb8" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:32.796536   68676 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:32.801041   68676 pod_ready.go:93] pod "kube-scheduler-no-preload-521072" in "kube-system" namespace has status "Ready":"True"
	I0927 01:42:32.801066   68676 pod_ready.go:82] duration metric: took 4.523416ms for pod "kube-scheduler-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:32.801087   68676 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:34.807359   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
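
	[Editor's note] The pod_ready polling above is waiting for the metrics-server pod's Ready condition to become True. The same condition can be checked directly with kubectl; a sketch assuming the addon's usual k8s-app=metrics-server label and the profile-named context from this run (both assumptions, inferred from the pod names metrics-server-6867b74b74-* in the log):

	    # Inspect and wait on the Ready condition the test is polling
	    kubectl --context no-preload-521072 -n kube-system get pods -l k8s-app=metrics-server
	    kubectl --context no-preload-521072 -n kube-system wait pod \
	      -l k8s-app=metrics-server --for=condition=Ready --timeout=6m
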
	I0927 01:42:32.042216   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:34.541390   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:33.250034   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:33.263733   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:33.263805   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:33.298038   69333 cri.go:89] found id: ""
	I0927 01:42:33.298063   69333 logs.go:276] 0 containers: []
	W0927 01:42:33.298071   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:33.298077   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:33.298139   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:33.338027   69333 cri.go:89] found id: ""
	I0927 01:42:33.338050   69333 logs.go:276] 0 containers: []
	W0927 01:42:33.338058   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:33.338064   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:33.338118   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:33.376470   69333 cri.go:89] found id: ""
	I0927 01:42:33.376496   69333 logs.go:276] 0 containers: []
	W0927 01:42:33.376504   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:33.376509   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:33.376568   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:33.419831   69333 cri.go:89] found id: ""
	I0927 01:42:33.419859   69333 logs.go:276] 0 containers: []
	W0927 01:42:33.419868   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:33.419874   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:33.419929   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:33.461029   69333 cri.go:89] found id: ""
	I0927 01:42:33.461057   69333 logs.go:276] 0 containers: []
	W0927 01:42:33.461076   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:33.461085   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:33.461158   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:33.499968   69333 cri.go:89] found id: ""
	I0927 01:42:33.499996   69333 logs.go:276] 0 containers: []
	W0927 01:42:33.500007   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:33.500015   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:33.500073   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:33.552601   69333 cri.go:89] found id: ""
	I0927 01:42:33.552625   69333 logs.go:276] 0 containers: []
	W0927 01:42:33.552633   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:33.552640   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:33.552702   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:33.589491   69333 cri.go:89] found id: ""
	I0927 01:42:33.589520   69333 logs.go:276] 0 containers: []
	W0927 01:42:33.589529   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:33.589540   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:33.589554   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:33.643437   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:33.643470   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:33.657819   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:33.657846   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:33.728369   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:33.728393   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:33.728407   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:33.803661   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:33.803691   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:36.343598   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:36.357879   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:36.357937   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:36.398936   69333 cri.go:89] found id: ""
	I0927 01:42:36.398958   69333 logs.go:276] 0 containers: []
	W0927 01:42:36.398966   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:36.398971   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:36.399016   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:36.438897   69333 cri.go:89] found id: ""
	I0927 01:42:36.438921   69333 logs.go:276] 0 containers: []
	W0927 01:42:36.438928   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:36.438935   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:36.438979   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:36.476779   69333 cri.go:89] found id: ""
	I0927 01:42:36.476807   69333 logs.go:276] 0 containers: []
	W0927 01:42:36.476817   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:36.476824   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:36.476882   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:36.514216   69333 cri.go:89] found id: ""
	I0927 01:42:36.514238   69333 logs.go:276] 0 containers: []
	W0927 01:42:36.514245   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:36.514251   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:36.514306   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:36.551800   69333 cri.go:89] found id: ""
	I0927 01:42:36.551827   69333 logs.go:276] 0 containers: []
	W0927 01:42:36.551835   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:36.551841   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:36.551900   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:36.592060   69333 cri.go:89] found id: ""
	I0927 01:42:36.592086   69333 logs.go:276] 0 containers: []
	W0927 01:42:36.592096   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:36.592101   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:36.592172   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:36.633485   69333 cri.go:89] found id: ""
	I0927 01:42:36.633507   69333 logs.go:276] 0 containers: []
	W0927 01:42:36.633514   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:36.633519   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:36.633571   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:36.667288   69333 cri.go:89] found id: ""
	I0927 01:42:36.667355   69333 logs.go:276] 0 containers: []
	W0927 01:42:36.667366   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:36.667377   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:36.667391   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:36.722230   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:36.722263   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:36.735927   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:36.735952   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:36.808852   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:36.808872   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:36.808887   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:36.889259   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:36.889299   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:34.520143   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:36.521254   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:36.808388   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:39.308743   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:36.542085   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:39.042119   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:39.438818   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:39.459082   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:39.459150   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:39.499966   69333 cri.go:89] found id: ""
	I0927 01:42:39.499991   69333 logs.go:276] 0 containers: []
	W0927 01:42:39.499999   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:39.500004   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:39.500050   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:39.540828   69333 cri.go:89] found id: ""
	I0927 01:42:39.540850   69333 logs.go:276] 0 containers: []
	W0927 01:42:39.540857   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:39.540864   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:39.540972   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:39.575841   69333 cri.go:89] found id: ""
	I0927 01:42:39.575868   69333 logs.go:276] 0 containers: []
	W0927 01:42:39.575879   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:39.575886   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:39.575958   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:39.611105   69333 cri.go:89] found id: ""
	I0927 01:42:39.611184   69333 logs.go:276] 0 containers: []
	W0927 01:42:39.611202   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:39.611212   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:39.611268   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:39.644772   69333 cri.go:89] found id: ""
	I0927 01:42:39.644800   69333 logs.go:276] 0 containers: []
	W0927 01:42:39.644808   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:39.644813   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:39.644868   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:39.679875   69333 cri.go:89] found id: ""
	I0927 01:42:39.679901   69333 logs.go:276] 0 containers: []
	W0927 01:42:39.679912   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:39.679919   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:39.679987   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:39.716410   69333 cri.go:89] found id: ""
	I0927 01:42:39.716440   69333 logs.go:276] 0 containers: []
	W0927 01:42:39.716450   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:39.716457   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:39.716525   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:39.750391   69333 cri.go:89] found id: ""
	I0927 01:42:39.750418   69333 logs.go:276] 0 containers: []
	W0927 01:42:39.750428   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:39.750439   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:39.750455   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:39.822365   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:39.822401   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:39.822416   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:39.905982   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:39.906017   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:39.952310   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:39.952339   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:40.000523   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:40.000554   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:39.021945   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:41.519787   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:41.807532   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:44.307548   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:41.042260   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:43.042762   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:45.542112   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:42.514379   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:42.528312   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:42.528377   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:42.562427   69333 cri.go:89] found id: ""
	I0927 01:42:42.562455   69333 logs.go:276] 0 containers: []
	W0927 01:42:42.562463   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:42.562469   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:42.562526   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:42.599969   69333 cri.go:89] found id: ""
	I0927 01:42:42.599993   69333 logs.go:276] 0 containers: []
	W0927 01:42:42.600002   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:42.600007   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:42.600053   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:42.636338   69333 cri.go:89] found id: ""
	I0927 01:42:42.636364   69333 logs.go:276] 0 containers: []
	W0927 01:42:42.636371   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:42.636376   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:42.636431   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:42.670781   69333 cri.go:89] found id: ""
	I0927 01:42:42.670809   69333 logs.go:276] 0 containers: []
	W0927 01:42:42.670818   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:42.670823   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:42.670880   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:42.707334   69333 cri.go:89] found id: ""
	I0927 01:42:42.707364   69333 logs.go:276] 0 containers: []
	W0927 01:42:42.707375   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:42.707431   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:42.707503   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:42.743063   69333 cri.go:89] found id: ""
	I0927 01:42:42.743092   69333 logs.go:276] 0 containers: []
	W0927 01:42:42.743103   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:42.743139   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:42.743192   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:42.778593   69333 cri.go:89] found id: ""
	I0927 01:42:42.778617   69333 logs.go:276] 0 containers: []
	W0927 01:42:42.778628   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:42.778634   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:42.778700   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:42.814261   69333 cri.go:89] found id: ""
	I0927 01:42:42.814286   69333 logs.go:276] 0 containers: []
	W0927 01:42:42.814293   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:42.814300   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:42.814310   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:42.863982   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:42.864011   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:42.877151   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:42.877175   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:42.959233   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:42.959251   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:42.959262   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:43.038773   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:43.038805   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:45.581272   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:45.596103   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:45.596167   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:45.639507   69333 cri.go:89] found id: ""
	I0927 01:42:45.639531   69333 logs.go:276] 0 containers: []
	W0927 01:42:45.639539   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:45.639545   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:45.639611   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:45.678455   69333 cri.go:89] found id: ""
	I0927 01:42:45.678482   69333 logs.go:276] 0 containers: []
	W0927 01:42:45.678489   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:45.678495   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:45.678539   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:45.722094   69333 cri.go:89] found id: ""
	I0927 01:42:45.722123   69333 logs.go:276] 0 containers: []
	W0927 01:42:45.722135   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:45.722142   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:45.722211   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:45.758091   69333 cri.go:89] found id: ""
	I0927 01:42:45.758118   69333 logs.go:276] 0 containers: []
	W0927 01:42:45.758127   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:45.758133   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:45.758183   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:45.792976   69333 cri.go:89] found id: ""
	I0927 01:42:45.793010   69333 logs.go:276] 0 containers: []
	W0927 01:42:45.793021   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:45.793028   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:45.793089   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:45.830235   69333 cri.go:89] found id: ""
	I0927 01:42:45.830262   69333 logs.go:276] 0 containers: []
	W0927 01:42:45.830273   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:45.830280   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:45.830324   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:45.865896   69333 cri.go:89] found id: ""
	I0927 01:42:45.865928   69333 logs.go:276] 0 containers: []
	W0927 01:42:45.865938   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:45.865946   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:45.866000   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:45.900058   69333 cri.go:89] found id: ""
	I0927 01:42:45.900088   69333 logs.go:276] 0 containers: []
	W0927 01:42:45.900099   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:45.900108   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:45.900119   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:45.972986   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:45.973015   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:45.973030   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:46.048703   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:46.048732   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:46.087483   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:46.087515   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:46.136833   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:46.136866   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:43.520998   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:45.522532   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:48.020912   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:46.307637   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:48.808963   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:48.041757   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:50.042259   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:48.650738   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:48.665847   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:48.665930   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:48.704304   69333 cri.go:89] found id: ""
	I0927 01:42:48.704328   69333 logs.go:276] 0 containers: []
	W0927 01:42:48.704337   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:48.704342   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:48.704402   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:48.742469   69333 cri.go:89] found id: ""
	I0927 01:42:48.742499   69333 logs.go:276] 0 containers: []
	W0927 01:42:48.742510   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:48.742517   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:48.742579   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:48.782154   69333 cri.go:89] found id: ""
	I0927 01:42:48.782183   69333 logs.go:276] 0 containers: []
	W0927 01:42:48.782194   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:48.782201   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:48.782261   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:48.821686   69333 cri.go:89] found id: ""
	I0927 01:42:48.821709   69333 logs.go:276] 0 containers: []
	W0927 01:42:48.821717   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:48.821723   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:48.821781   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:48.867072   69333 cri.go:89] found id: ""
	I0927 01:42:48.867099   69333 logs.go:276] 0 containers: []
	W0927 01:42:48.867109   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:48.867123   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:48.867191   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:48.908215   69333 cri.go:89] found id: ""
	I0927 01:42:48.908241   69333 logs.go:276] 0 containers: []
	W0927 01:42:48.908249   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:48.908255   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:48.908312   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:48.945260   69333 cri.go:89] found id: ""
	I0927 01:42:48.945291   69333 logs.go:276] 0 containers: []
	W0927 01:42:48.945303   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:48.945310   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:48.945375   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:48.983285   69333 cri.go:89] found id: ""
	I0927 01:42:48.983325   69333 logs.go:276] 0 containers: []
	W0927 01:42:48.983333   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:48.983343   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:48.983354   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:49.039437   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:49.039472   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:49.053546   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:49.053571   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:49.129264   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:49.129286   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:49.129299   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:49.216967   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:49.216999   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:51.758143   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:51.771417   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:51.771485   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:51.806120   69333 cri.go:89] found id: ""
	I0927 01:42:51.806144   69333 logs.go:276] 0 containers: []
	W0927 01:42:51.806154   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:51.806161   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:51.806219   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:51.840301   69333 cri.go:89] found id: ""
	I0927 01:42:51.840330   69333 logs.go:276] 0 containers: []
	W0927 01:42:51.840340   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:51.840348   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:51.840410   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:51.874908   69333 cri.go:89] found id: ""
	I0927 01:42:51.874934   69333 logs.go:276] 0 containers: []
	W0927 01:42:51.874944   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:51.874952   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:51.875018   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:51.910960   69333 cri.go:89] found id: ""
	I0927 01:42:51.910988   69333 logs.go:276] 0 containers: []
	W0927 01:42:51.910999   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:51.911006   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:51.911064   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:51.945206   69333 cri.go:89] found id: ""
	I0927 01:42:51.945228   69333 logs.go:276] 0 containers: []
	W0927 01:42:51.945236   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:51.945241   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:51.945289   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:51.979262   69333 cri.go:89] found id: ""
	I0927 01:42:51.979296   69333 logs.go:276] 0 containers: []
	W0927 01:42:51.979322   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:51.979328   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:51.979384   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:52.013407   69333 cri.go:89] found id: ""
	I0927 01:42:52.013438   69333 logs.go:276] 0 containers: []
	W0927 01:42:52.013449   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:52.013456   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:52.013510   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:52.048928   69333 cri.go:89] found id: ""
	I0927 01:42:52.048951   69333 logs.go:276] 0 containers: []
	W0927 01:42:52.048961   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:52.048970   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:52.048984   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:52.101043   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:52.101083   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:52.115903   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:52.115938   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:52.197147   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:52.197168   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:52.197184   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:52.276352   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:52.276393   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:50.021730   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:52.520362   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:51.306847   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:53.307714   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:52.042729   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:54.544118   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:54.819649   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:54.832262   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:54.832344   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:54.867495   69333 cri.go:89] found id: ""
	I0927 01:42:54.867523   69333 logs.go:276] 0 containers: []
	W0927 01:42:54.867533   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:54.867539   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:54.867585   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:54.899705   69333 cri.go:89] found id: ""
	I0927 01:42:54.899732   69333 logs.go:276] 0 containers: []
	W0927 01:42:54.899742   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:54.899749   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:54.899817   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:54.939216   69333 cri.go:89] found id: ""
	I0927 01:42:54.939235   69333 logs.go:276] 0 containers: []
	W0927 01:42:54.939244   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:54.939249   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:54.939293   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:54.976603   69333 cri.go:89] found id: ""
	I0927 01:42:54.976632   69333 logs.go:276] 0 containers: []
	W0927 01:42:54.976643   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:54.976651   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:54.976718   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:55.011617   69333 cri.go:89] found id: ""
	I0927 01:42:55.011649   69333 logs.go:276] 0 containers: []
	W0927 01:42:55.011660   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:55.011667   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:55.011729   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:55.048836   69333 cri.go:89] found id: ""
	I0927 01:42:55.048861   69333 logs.go:276] 0 containers: []
	W0927 01:42:55.048869   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:55.048885   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:55.048955   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:55.085105   69333 cri.go:89] found id: ""
	I0927 01:42:55.085133   69333 logs.go:276] 0 containers: []
	W0927 01:42:55.085144   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:55.085151   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:55.085205   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:55.122536   69333 cri.go:89] found id: ""
	I0927 01:42:55.122564   69333 logs.go:276] 0 containers: []
	W0927 01:42:55.122575   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:55.122585   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:55.122600   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:55.197191   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:55.197216   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:55.197230   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:55.275914   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:55.275950   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:55.315043   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:55.315071   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:55.365808   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:55.365846   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:55.025083   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:57.520041   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:55.807377   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:57.807419   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:59.808202   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:57.042511   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:59.541628   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:57.880934   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:57.894276   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:57.894337   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:57.933299   69333 cri.go:89] found id: ""
	I0927 01:42:57.933326   69333 logs.go:276] 0 containers: []
	W0927 01:42:57.933336   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:57.933343   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:57.933403   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:57.969070   69333 cri.go:89] found id: ""
	I0927 01:42:57.969094   69333 logs.go:276] 0 containers: []
	W0927 01:42:57.969102   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:57.969107   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:57.969151   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:58.009432   69333 cri.go:89] found id: ""
	I0927 01:42:58.009453   69333 logs.go:276] 0 containers: []
	W0927 01:42:58.009462   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:58.009468   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:58.009524   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:58.046507   69333 cri.go:89] found id: ""
	I0927 01:42:58.046526   69333 logs.go:276] 0 containers: []
	W0927 01:42:58.046533   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:58.046539   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:58.046603   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:58.079910   69333 cri.go:89] found id: ""
	I0927 01:42:58.079936   69333 logs.go:276] 0 containers: []
	W0927 01:42:58.079947   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:58.079954   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:58.080015   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:58.115971   69333 cri.go:89] found id: ""
	I0927 01:42:58.115994   69333 logs.go:276] 0 containers: []
	W0927 01:42:58.116001   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:58.116007   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:58.116065   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:58.150512   69333 cri.go:89] found id: ""
	I0927 01:42:58.150536   69333 logs.go:276] 0 containers: []
	W0927 01:42:58.150544   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:58.150549   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:58.150608   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:58.183458   69333 cri.go:89] found id: ""
	I0927 01:42:58.183487   69333 logs.go:276] 0 containers: []
	W0927 01:42:58.183498   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:58.183506   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:58.183520   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:58.234404   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:58.234434   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:58.248387   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:58.248411   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:58.320751   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:58.320772   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:58.320783   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:58.401163   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:58.401212   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:00.943677   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:00.956739   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:00.956815   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:00.991020   69333 cri.go:89] found id: ""
	I0927 01:43:00.991042   69333 logs.go:276] 0 containers: []
	W0927 01:43:00.991051   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:00.991056   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:00.991113   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:01.031686   69333 cri.go:89] found id: ""
	I0927 01:43:01.031711   69333 logs.go:276] 0 containers: []
	W0927 01:43:01.031720   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:01.031726   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:01.031786   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:01.068783   69333 cri.go:89] found id: ""
	I0927 01:43:01.068813   69333 logs.go:276] 0 containers: []
	W0927 01:43:01.068824   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:01.068831   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:01.068890   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:01.108434   69333 cri.go:89] found id: ""
	I0927 01:43:01.108456   69333 logs.go:276] 0 containers: []
	W0927 01:43:01.108464   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:01.108469   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:01.108513   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:01.147574   69333 cri.go:89] found id: ""
	I0927 01:43:01.147596   69333 logs.go:276] 0 containers: []
	W0927 01:43:01.147604   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:01.147610   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:01.147660   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:01.188251   69333 cri.go:89] found id: ""
	I0927 01:43:01.188279   69333 logs.go:276] 0 containers: []
	W0927 01:43:01.188290   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:01.188297   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:01.188359   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:01.224901   69333 cri.go:89] found id: ""
	I0927 01:43:01.224944   69333 logs.go:276] 0 containers: []
	W0927 01:43:01.224964   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:01.224974   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:01.225052   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:01.262701   69333 cri.go:89] found id: ""
	I0927 01:43:01.262728   69333 logs.go:276] 0 containers: []
	W0927 01:43:01.262738   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:01.262749   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:01.262762   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:01.313872   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:01.313900   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:01.327809   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:01.327835   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:01.400864   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:01.400895   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:01.400909   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:01.478012   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:01.478045   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:59.520973   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:01.522457   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:02.308215   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:04.309111   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:01.543151   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:04.043201   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:04.018634   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:04.032732   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:04.032803   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:04.075258   69333 cri.go:89] found id: ""
	I0927 01:43:04.075285   69333 logs.go:276] 0 containers: []
	W0927 01:43:04.075293   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:04.075299   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:04.075381   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:04.108738   69333 cri.go:89] found id: ""
	I0927 01:43:04.108764   69333 logs.go:276] 0 containers: []
	W0927 01:43:04.108773   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:04.108779   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:04.108835   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:04.142115   69333 cri.go:89] found id: ""
	I0927 01:43:04.142145   69333 logs.go:276] 0 containers: []
	W0927 01:43:04.142155   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:04.142174   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:04.142249   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:04.184606   69333 cri.go:89] found id: ""
	I0927 01:43:04.184626   69333 logs.go:276] 0 containers: []
	W0927 01:43:04.184634   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:04.184639   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:04.184684   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:04.218391   69333 cri.go:89] found id: ""
	I0927 01:43:04.218420   69333 logs.go:276] 0 containers: []
	W0927 01:43:04.218428   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:04.218434   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:04.218482   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:04.253796   69333 cri.go:89] found id: ""
	I0927 01:43:04.253816   69333 logs.go:276] 0 containers: []
	W0927 01:43:04.253824   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:04.253829   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:04.253884   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:04.289147   69333 cri.go:89] found id: ""
	I0927 01:43:04.289170   69333 logs.go:276] 0 containers: []
	W0927 01:43:04.289179   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:04.289184   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:04.289245   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:04.329000   69333 cri.go:89] found id: ""
	I0927 01:43:04.329026   69333 logs.go:276] 0 containers: []
	W0927 01:43:04.329034   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:04.329042   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:04.329053   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:04.424255   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:04.424290   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:04.470746   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:04.470775   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:04.524208   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:04.524237   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:04.538338   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:04.538365   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:04.608713   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:07.109492   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:07.124253   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:07.124332   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:07.160443   69333 cri.go:89] found id: ""
	I0927 01:43:07.160470   69333 logs.go:276] 0 containers: []
	W0927 01:43:07.160481   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:07.160488   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:07.160554   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:07.195492   69333 cri.go:89] found id: ""
	I0927 01:43:07.195515   69333 logs.go:276] 0 containers: []
	W0927 01:43:07.195522   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:07.195527   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:07.195572   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:07.237678   69333 cri.go:89] found id: ""
	I0927 01:43:07.237708   69333 logs.go:276] 0 containers: []
	W0927 01:43:07.237718   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:07.237725   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:07.237792   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:07.274239   69333 cri.go:89] found id: ""
	I0927 01:43:07.274268   69333 logs.go:276] 0 containers: []
	W0927 01:43:07.274279   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:07.274286   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:07.274352   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:07.315099   69333 cri.go:89] found id: ""
	I0927 01:43:07.315124   69333 logs.go:276] 0 containers: []
	W0927 01:43:07.315131   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:07.315137   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:07.315190   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:04.020911   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:06.520371   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:06.807124   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:09.306568   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:06.543210   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:09.042166   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:07.356301   69333 cri.go:89] found id: ""
	I0927 01:43:07.356328   69333 logs.go:276] 0 containers: []
	W0927 01:43:07.356339   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:07.356347   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:07.356416   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:07.392204   69333 cri.go:89] found id: ""
	I0927 01:43:07.392232   69333 logs.go:276] 0 containers: []
	W0927 01:43:07.392242   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:07.392255   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:07.392312   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:07.428924   69333 cri.go:89] found id: ""
	I0927 01:43:07.428952   69333 logs.go:276] 0 containers: []
	W0927 01:43:07.428961   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:07.428969   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:07.428981   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:07.502507   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:07.502531   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:07.502545   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:07.584169   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:07.584201   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:07.623413   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:07.623446   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:07.675444   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:07.675480   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:10.190164   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:10.205315   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:10.205395   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:10.244030   69333 cri.go:89] found id: ""
	I0927 01:43:10.244053   69333 logs.go:276] 0 containers: []
	W0927 01:43:10.244063   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:10.244071   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:10.244134   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:10.280081   69333 cri.go:89] found id: ""
	I0927 01:43:10.280108   69333 logs.go:276] 0 containers: []
	W0927 01:43:10.280118   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:10.280125   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:10.280184   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:10.315428   69333 cri.go:89] found id: ""
	I0927 01:43:10.315454   69333 logs.go:276] 0 containers: []
	W0927 01:43:10.315464   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:10.315471   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:10.315531   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:10.352536   69333 cri.go:89] found id: ""
	I0927 01:43:10.352560   69333 logs.go:276] 0 containers: []
	W0927 01:43:10.352567   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:10.352574   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:10.352634   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:10.388846   69333 cri.go:89] found id: ""
	I0927 01:43:10.388870   69333 logs.go:276] 0 containers: []
	W0927 01:43:10.388880   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:10.388887   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:10.388951   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:10.427746   69333 cri.go:89] found id: ""
	I0927 01:43:10.427771   69333 logs.go:276] 0 containers: []
	W0927 01:43:10.427779   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:10.427784   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:10.427839   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:10.473126   69333 cri.go:89] found id: ""
	I0927 01:43:10.473155   69333 logs.go:276] 0 containers: []
	W0927 01:43:10.473166   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:10.473172   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:10.473234   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:10.511925   69333 cri.go:89] found id: ""
	I0927 01:43:10.511954   69333 logs.go:276] 0 containers: []
	W0927 01:43:10.511962   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:10.511971   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:10.511984   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:10.551428   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:10.551459   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:10.603655   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:10.603691   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:10.617232   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:10.617266   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:10.696559   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:10.696585   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:10.696599   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:09.020784   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:11.521429   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:11.307081   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:13.307876   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:11.043819   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:13.543289   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:13.273888   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:13.288271   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:13.288349   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:13.325796   69333 cri.go:89] found id: ""
	I0927 01:43:13.325823   69333 logs.go:276] 0 containers: []
	W0927 01:43:13.325831   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:13.325837   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:13.325893   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:13.360721   69333 cri.go:89] found id: ""
	I0927 01:43:13.360748   69333 logs.go:276] 0 containers: []
	W0927 01:43:13.360756   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:13.360762   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:13.360821   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:13.399722   69333 cri.go:89] found id: ""
	I0927 01:43:13.399749   69333 logs.go:276] 0 containers: []
	W0927 01:43:13.399756   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:13.399762   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:13.399826   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:13.437161   69333 cri.go:89] found id: ""
	I0927 01:43:13.437187   69333 logs.go:276] 0 containers: []
	W0927 01:43:13.437194   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:13.437200   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:13.437260   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:13.474735   69333 cri.go:89] found id: ""
	I0927 01:43:13.474758   69333 logs.go:276] 0 containers: []
	W0927 01:43:13.474766   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:13.474771   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:13.474822   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:13.528726   69333 cri.go:89] found id: ""
	I0927 01:43:13.528754   69333 logs.go:276] 0 containers: []
	W0927 01:43:13.528764   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:13.528771   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:13.528837   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:13.568617   69333 cri.go:89] found id: ""
	I0927 01:43:13.568642   69333 logs.go:276] 0 containers: []
	W0927 01:43:13.568651   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:13.568658   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:13.568726   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:13.605820   69333 cri.go:89] found id: ""
	I0927 01:43:13.605846   69333 logs.go:276] 0 containers: []
	W0927 01:43:13.605857   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:13.605868   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:13.605883   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:13.682586   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:13.682609   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:13.682624   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:13.764487   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:13.764522   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:13.809248   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:13.809280   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:13.861331   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:13.861371   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:16.376981   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:16.391787   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:16.391842   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:16.432731   69333 cri.go:89] found id: ""
	I0927 01:43:16.432758   69333 logs.go:276] 0 containers: []
	W0927 01:43:16.432767   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:16.432775   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:16.432836   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:16.466769   69333 cri.go:89] found id: ""
	I0927 01:43:16.466798   69333 logs.go:276] 0 containers: []
	W0927 01:43:16.466806   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:16.466812   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:16.466860   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:16.501899   69333 cri.go:89] found id: ""
	I0927 01:43:16.501927   69333 logs.go:276] 0 containers: []
	W0927 01:43:16.501940   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:16.501947   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:16.502000   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:16.537356   69333 cri.go:89] found id: ""
	I0927 01:43:16.537383   69333 logs.go:276] 0 containers: []
	W0927 01:43:16.537393   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:16.537401   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:16.537460   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:16.573910   69333 cri.go:89] found id: ""
	I0927 01:43:16.573937   69333 logs.go:276] 0 containers: []
	W0927 01:43:16.573946   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:16.573951   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:16.574003   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:16.617780   69333 cri.go:89] found id: ""
	I0927 01:43:16.617808   69333 logs.go:276] 0 containers: []
	W0927 01:43:16.617818   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:16.617826   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:16.617884   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:16.653262   69333 cri.go:89] found id: ""
	I0927 01:43:16.653311   69333 logs.go:276] 0 containers: []
	W0927 01:43:16.653323   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:16.653331   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:16.653394   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:16.689861   69333 cri.go:89] found id: ""
	I0927 01:43:16.689889   69333 logs.go:276] 0 containers: []
	W0927 01:43:16.689898   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:16.689909   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:16.689922   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:16.765961   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:16.765986   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:16.766001   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:16.845195   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:16.845227   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:16.889159   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:16.889188   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:16.945523   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:16.945558   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:13.522444   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:16.021202   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:15.808665   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:18.307884   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:16.043071   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:18.541709   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:19.461132   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:19.475148   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:19.475234   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:19.511487   69333 cri.go:89] found id: ""
	I0927 01:43:19.511509   69333 logs.go:276] 0 containers: []
	W0927 01:43:19.511517   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:19.511522   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:19.511580   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:19.545726   69333 cri.go:89] found id: ""
	I0927 01:43:19.545750   69333 logs.go:276] 0 containers: []
	W0927 01:43:19.545756   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:19.545763   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:19.545830   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:19.581287   69333 cri.go:89] found id: ""
	I0927 01:43:19.581310   69333 logs.go:276] 0 containers: []
	W0927 01:43:19.581318   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:19.581323   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:19.581376   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:19.614179   69333 cri.go:89] found id: ""
	I0927 01:43:19.614205   69333 logs.go:276] 0 containers: []
	W0927 01:43:19.614215   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:19.614223   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:19.614286   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:19.648276   69333 cri.go:89] found id: ""
	I0927 01:43:19.648307   69333 logs.go:276] 0 containers: []
	W0927 01:43:19.648318   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:19.648330   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:19.648390   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:19.683051   69333 cri.go:89] found id: ""
	I0927 01:43:19.683083   69333 logs.go:276] 0 containers: []
	W0927 01:43:19.683094   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:19.683114   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:19.683166   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:19.716664   69333 cri.go:89] found id: ""
	I0927 01:43:19.716686   69333 logs.go:276] 0 containers: []
	W0927 01:43:19.716694   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:19.716700   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:19.716745   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:19.758948   69333 cri.go:89] found id: ""
	I0927 01:43:19.758969   69333 logs.go:276] 0 containers: []
	W0927 01:43:19.758976   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:19.758984   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:19.758996   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:19.797751   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:19.797777   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:19.853605   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:19.853635   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:19.867785   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:19.867815   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:19.950323   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:19.950350   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:19.950363   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:18.520291   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:20.520845   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:22.520886   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:20.808171   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:22.812047   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:21.043160   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:23.546462   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:22.555421   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:22.570013   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:22.570077   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:22.605007   69333 cri.go:89] found id: ""
	I0927 01:43:22.605034   69333 logs.go:276] 0 containers: []
	W0927 01:43:22.605055   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:22.605062   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:22.605122   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:22.640350   69333 cri.go:89] found id: ""
	I0927 01:43:22.640381   69333 logs.go:276] 0 containers: []
	W0927 01:43:22.640391   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:22.640406   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:22.640482   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:22.677464   69333 cri.go:89] found id: ""
	I0927 01:43:22.677489   69333 logs.go:276] 0 containers: []
	W0927 01:43:22.677499   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:22.677506   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:22.677567   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:22.721978   69333 cri.go:89] found id: ""
	I0927 01:43:22.722017   69333 logs.go:276] 0 containers: []
	W0927 01:43:22.722025   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:22.722032   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:22.722093   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:22.757694   69333 cri.go:89] found id: ""
	I0927 01:43:22.757720   69333 logs.go:276] 0 containers: []
	W0927 01:43:22.757729   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:22.757733   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:22.757781   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:22.793872   69333 cri.go:89] found id: ""
	I0927 01:43:22.793903   69333 logs.go:276] 0 containers: []
	W0927 01:43:22.793912   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:22.793920   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:22.793971   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:22.830620   69333 cri.go:89] found id: ""
	I0927 01:43:22.830652   69333 logs.go:276] 0 containers: []
	W0927 01:43:22.830662   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:22.830669   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:22.830732   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:22.867341   69333 cri.go:89] found id: ""
	I0927 01:43:22.867370   69333 logs.go:276] 0 containers: []
	W0927 01:43:22.867381   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:22.867392   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:22.867405   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:22.939592   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:22.939630   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:22.939654   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:23.016407   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:23.016447   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:23.058490   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:23.058522   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:23.109527   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:23.109560   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:25.626109   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:25.645254   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:25.645343   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:25.707951   69333 cri.go:89] found id: ""
	I0927 01:43:25.707979   69333 logs.go:276] 0 containers: []
	W0927 01:43:25.707989   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:25.707997   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:25.708057   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:25.771210   69333 cri.go:89] found id: ""
	I0927 01:43:25.771234   69333 logs.go:276] 0 containers: []
	W0927 01:43:25.771242   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:25.771248   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:25.771295   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:25.808206   69333 cri.go:89] found id: ""
	I0927 01:43:25.808235   69333 logs.go:276] 0 containers: []
	W0927 01:43:25.808245   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:25.808252   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:25.808311   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:25.842236   69333 cri.go:89] found id: ""
	I0927 01:43:25.842265   69333 logs.go:276] 0 containers: []
	W0927 01:43:25.842275   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:25.842283   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:25.842328   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:25.879220   69333 cri.go:89] found id: ""
	I0927 01:43:25.879248   69333 logs.go:276] 0 containers: []
	W0927 01:43:25.879256   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:25.879262   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:25.879333   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:25.913491   69333 cri.go:89] found id: ""
	I0927 01:43:25.913522   69333 logs.go:276] 0 containers: []
	W0927 01:43:25.913532   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:25.913537   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:25.913595   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:25.946867   69333 cri.go:89] found id: ""
	I0927 01:43:25.946887   69333 logs.go:276] 0 containers: []
	W0927 01:43:25.946894   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:25.946899   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:25.946943   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:25.983792   69333 cri.go:89] found id: ""
	I0927 01:43:25.983813   69333 logs.go:276] 0 containers: []
	W0927 01:43:25.983820   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:25.983828   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:25.983838   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:26.030169   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:26.030195   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:26.083242   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:26.083276   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:26.097109   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:26.097136   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:26.168675   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:26.168703   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:26.168715   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:24.521923   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:27.020053   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:25.308150   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:27.308307   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:29.308818   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:26.042436   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:28.541895   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:30.542444   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:28.750349   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:28.765211   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:28.765269   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:28.804760   69333 cri.go:89] found id: ""
	I0927 01:43:28.804784   69333 logs.go:276] 0 containers: []
	W0927 01:43:28.804792   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:28.804798   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:28.804865   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:28.842576   69333 cri.go:89] found id: ""
	I0927 01:43:28.842597   69333 logs.go:276] 0 containers: []
	W0927 01:43:28.842604   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:28.842612   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:28.842674   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:28.877498   69333 cri.go:89] found id: ""
	I0927 01:43:28.877529   69333 logs.go:276] 0 containers: []
	W0927 01:43:28.877541   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:28.877553   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:28.877615   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:28.912583   69333 cri.go:89] found id: ""
	I0927 01:43:28.912609   69333 logs.go:276] 0 containers: []
	W0927 01:43:28.912620   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:28.912627   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:28.912689   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:28.947995   69333 cri.go:89] found id: ""
	I0927 01:43:28.948019   69333 logs.go:276] 0 containers: []
	W0927 01:43:28.948030   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:28.948037   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:28.948135   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:28.984445   69333 cri.go:89] found id: ""
	I0927 01:43:28.984470   69333 logs.go:276] 0 containers: []
	W0927 01:43:28.984480   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:28.984488   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:28.984551   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:29.020345   69333 cri.go:89] found id: ""
	I0927 01:43:29.020374   69333 logs.go:276] 0 containers: []
	W0927 01:43:29.020385   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:29.020392   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:29.020451   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:29.056204   69333 cri.go:89] found id: ""
	I0927 01:43:29.056234   69333 logs.go:276] 0 containers: []
	W0927 01:43:29.056245   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:29.056257   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:29.056270   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:29.127936   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:29.127963   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:29.127980   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:29.205933   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:29.205981   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:29.248745   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:29.248777   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:29.302316   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:29.302348   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:31.817566   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:31.831179   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:31.831253   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:31.868480   69333 cri.go:89] found id: ""
	I0927 01:43:31.868507   69333 logs.go:276] 0 containers: []
	W0927 01:43:31.868517   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:31.868528   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:31.868588   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:31.901656   69333 cri.go:89] found id: ""
	I0927 01:43:31.901684   69333 logs.go:276] 0 containers: []
	W0927 01:43:31.901694   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:31.901701   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:31.901761   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:31.937101   69333 cri.go:89] found id: ""
	I0927 01:43:31.937133   69333 logs.go:276] 0 containers: []
	W0927 01:43:31.937145   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:31.937153   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:31.937210   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:31.970724   69333 cri.go:89] found id: ""
	I0927 01:43:31.970750   69333 logs.go:276] 0 containers: []
	W0927 01:43:31.970761   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:31.970768   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:31.970835   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:32.003704   69333 cri.go:89] found id: ""
	I0927 01:43:32.003736   69333 logs.go:276] 0 containers: []
	W0927 01:43:32.003747   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:32.003754   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:32.003813   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:32.038840   69333 cri.go:89] found id: ""
	I0927 01:43:32.038869   69333 logs.go:276] 0 containers: []
	W0927 01:43:32.038879   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:32.038886   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:32.038946   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:32.075506   69333 cri.go:89] found id: ""
	I0927 01:43:32.075534   69333 logs.go:276] 0 containers: []
	W0927 01:43:32.075545   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:32.075552   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:32.075603   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:32.112983   69333 cri.go:89] found id: ""
	I0927 01:43:32.113009   69333 logs.go:276] 0 containers: []
	W0927 01:43:32.113020   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:32.113031   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:32.113046   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:32.168192   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:32.168227   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:32.182702   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:32.182727   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:32.255797   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:32.255824   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:32.255835   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:32.336083   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:32.336115   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:29.022764   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:31.520495   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:31.308851   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:33.807870   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:33.041600   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:35.042193   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:34.880981   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:34.894904   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:34.894976   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:34.933459   69333 cri.go:89] found id: ""
	I0927 01:43:34.933482   69333 logs.go:276] 0 containers: []
	W0927 01:43:34.933490   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:34.933498   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:34.933555   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:34.966893   69333 cri.go:89] found id: ""
	I0927 01:43:34.966917   69333 logs.go:276] 0 containers: []
	W0927 01:43:34.966926   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:34.966933   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:34.966992   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:35.002878   69333 cri.go:89] found id: ""
	I0927 01:43:35.002899   69333 logs.go:276] 0 containers: []
	W0927 01:43:35.002907   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:35.002912   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:35.002970   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:35.039871   69333 cri.go:89] found id: ""
	I0927 01:43:35.039898   69333 logs.go:276] 0 containers: []
	W0927 01:43:35.039908   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:35.039915   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:35.039977   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:35.078229   69333 cri.go:89] found id: ""
	I0927 01:43:35.078255   69333 logs.go:276] 0 containers: []
	W0927 01:43:35.078267   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:35.078274   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:35.078342   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:35.114369   69333 cri.go:89] found id: ""
	I0927 01:43:35.114397   69333 logs.go:276] 0 containers: []
	W0927 01:43:35.114408   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:35.114415   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:35.114475   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:35.148072   69333 cri.go:89] found id: ""
	I0927 01:43:35.148100   69333 logs.go:276] 0 containers: []
	W0927 01:43:35.148110   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:35.148117   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:35.148188   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:35.184020   69333 cri.go:89] found id: ""
	I0927 01:43:35.184051   69333 logs.go:276] 0 containers: []
	W0927 01:43:35.184062   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:35.184073   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:35.184086   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:35.197332   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:35.197355   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:35.273860   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:35.273889   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:35.273904   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:35.354647   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:35.354680   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:35.392622   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:35.392651   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:33.521889   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:36.020067   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:38.021354   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:35.808365   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:38.307251   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:37.541793   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:40.043418   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:37.943024   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:37.957265   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:37.957329   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:37.991294   69333 cri.go:89] found id: ""
	I0927 01:43:37.991348   69333 logs.go:276] 0 containers: []
	W0927 01:43:37.991362   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:37.991368   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:37.991421   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:38.026960   69333 cri.go:89] found id: ""
	I0927 01:43:38.026981   69333 logs.go:276] 0 containers: []
	W0927 01:43:38.026990   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:38.026998   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:38.027057   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:38.063540   69333 cri.go:89] found id: ""
	I0927 01:43:38.063563   69333 logs.go:276] 0 containers: []
	W0927 01:43:38.063571   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:38.063576   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:38.063627   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:38.099554   69333 cri.go:89] found id: ""
	I0927 01:43:38.099602   69333 logs.go:276] 0 containers: []
	W0927 01:43:38.099613   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:38.099621   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:38.099689   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:38.136576   69333 cri.go:89] found id: ""
	I0927 01:43:38.136604   69333 logs.go:276] 0 containers: []
	W0927 01:43:38.136615   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:38.136623   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:38.136676   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:38.170411   69333 cri.go:89] found id: ""
	I0927 01:43:38.170441   69333 logs.go:276] 0 containers: []
	W0927 01:43:38.170452   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:38.170458   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:38.170512   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:38.211902   69333 cri.go:89] found id: ""
	I0927 01:43:38.211934   69333 logs.go:276] 0 containers: []
	W0927 01:43:38.211945   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:38.211951   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:38.212007   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:38.247850   69333 cri.go:89] found id: ""
	I0927 01:43:38.247875   69333 logs.go:276] 0 containers: []
	W0927 01:43:38.247885   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:38.247895   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:38.247913   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:38.329353   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:38.329384   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:38.369114   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:38.369148   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:38.420578   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:38.420613   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:38.434019   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:38.434050   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:38.517921   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:41.018609   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:41.032308   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:41.032370   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:41.068491   69333 cri.go:89] found id: ""
	I0927 01:43:41.068518   69333 logs.go:276] 0 containers: []
	W0927 01:43:41.068529   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:41.068536   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:41.068597   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:41.106527   69333 cri.go:89] found id: ""
	I0927 01:43:41.106555   69333 logs.go:276] 0 containers: []
	W0927 01:43:41.106565   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:41.106571   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:41.106634   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:41.142846   69333 cri.go:89] found id: ""
	I0927 01:43:41.142870   69333 logs.go:276] 0 containers: []
	W0927 01:43:41.142880   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:41.142887   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:41.142949   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:41.187499   69333 cri.go:89] found id: ""
	I0927 01:43:41.187525   69333 logs.go:276] 0 containers: []
	W0927 01:43:41.187536   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:41.187544   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:41.187606   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:41.226040   69333 cri.go:89] found id: ""
	I0927 01:43:41.226063   69333 logs.go:276] 0 containers: []
	W0927 01:43:41.226070   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:41.226076   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:41.226153   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:41.261399   69333 cri.go:89] found id: ""
	I0927 01:43:41.261429   69333 logs.go:276] 0 containers: []
	W0927 01:43:41.261440   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:41.261446   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:41.261493   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:41.300709   69333 cri.go:89] found id: ""
	I0927 01:43:41.300730   69333 logs.go:276] 0 containers: []
	W0927 01:43:41.300737   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:41.300743   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:41.300799   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:41.335725   69333 cri.go:89] found id: ""
	I0927 01:43:41.335751   69333 logs.go:276] 0 containers: []
	W0927 01:43:41.335759   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:41.335767   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:41.335776   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:41.387756   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:41.387788   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:41.401717   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:41.401743   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:41.479524   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:41.479548   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:41.479562   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:41.559926   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:41.559959   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:40.520642   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:42.521344   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:40.307769   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:42.807328   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:42.541384   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:44.548925   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:44.107615   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:44.122628   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:44.122690   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:44.163496   69333 cri.go:89] found id: ""
	I0927 01:43:44.163521   69333 logs.go:276] 0 containers: []
	W0927 01:43:44.163529   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:44.163541   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:44.163588   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:44.203488   69333 cri.go:89] found id: ""
	I0927 01:43:44.203519   69333 logs.go:276] 0 containers: []
	W0927 01:43:44.203529   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:44.203535   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:44.203600   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:44.238111   69333 cri.go:89] found id: ""
	I0927 01:43:44.238141   69333 logs.go:276] 0 containers: []
	W0927 01:43:44.238148   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:44.238154   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:44.238221   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:44.272954   69333 cri.go:89] found id: ""
	I0927 01:43:44.272981   69333 logs.go:276] 0 containers: []
	W0927 01:43:44.272991   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:44.272998   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:44.273057   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:44.309700   69333 cri.go:89] found id: ""
	I0927 01:43:44.309719   69333 logs.go:276] 0 containers: []
	W0927 01:43:44.309726   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:44.309731   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:44.309776   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:44.344532   69333 cri.go:89] found id: ""
	I0927 01:43:44.344563   69333 logs.go:276] 0 containers: []
	W0927 01:43:44.344573   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:44.344580   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:44.344641   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:44.379354   69333 cri.go:89] found id: ""
	I0927 01:43:44.379380   69333 logs.go:276] 0 containers: []
	W0927 01:43:44.379391   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:44.379399   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:44.379461   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:44.415297   69333 cri.go:89] found id: ""
	I0927 01:43:44.415344   69333 logs.go:276] 0 containers: []
	W0927 01:43:44.415356   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:44.415366   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:44.415381   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:44.468570   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:44.468602   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:44.483419   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:44.483453   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:44.560718   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:44.560737   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:44.560753   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:44.641130   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:44.641173   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:47.188520   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:47.202189   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:47.202262   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:47.243051   69333 cri.go:89] found id: ""
	I0927 01:43:47.243075   69333 logs.go:276] 0 containers: []
	W0927 01:43:47.243083   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:47.243089   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:47.243155   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:47.280071   69333 cri.go:89] found id: ""
	I0927 01:43:47.280094   69333 logs.go:276] 0 containers: []
	W0927 01:43:47.280104   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:47.280111   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:47.280170   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:47.318458   69333 cri.go:89] found id: ""
	I0927 01:43:47.318482   69333 logs.go:276] 0 containers: []
	W0927 01:43:47.318492   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:47.318499   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:47.318551   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:45.023799   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:47.522945   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:45.307910   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:47.309781   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:49.807329   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:47.041371   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:49.042307   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:47.352891   69333 cri.go:89] found id: ""
	I0927 01:43:47.352916   69333 logs.go:276] 0 containers: []
	W0927 01:43:47.352926   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:47.352933   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:47.352997   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:47.387534   69333 cri.go:89] found id: ""
	I0927 01:43:47.387560   69333 logs.go:276] 0 containers: []
	W0927 01:43:47.387569   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:47.387578   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:47.387646   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:47.422221   69333 cri.go:89] found id: ""
	I0927 01:43:47.422254   69333 logs.go:276] 0 containers: []
	W0927 01:43:47.422265   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:47.422273   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:47.422330   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:47.459624   69333 cri.go:89] found id: ""
	I0927 01:43:47.459645   69333 logs.go:276] 0 containers: []
	W0927 01:43:47.459653   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:47.459659   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:47.459706   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:47.494322   69333 cri.go:89] found id: ""
	I0927 01:43:47.494347   69333 logs.go:276] 0 containers: []
	W0927 01:43:47.494355   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:47.494363   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:47.494375   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:47.508031   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:47.508056   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:47.583920   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:47.583952   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:47.583968   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:47.665533   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:47.665568   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:47.708423   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:47.708455   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:50.261602   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:50.275548   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:50.275607   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:50.311583   69333 cri.go:89] found id: ""
	I0927 01:43:50.311610   69333 logs.go:276] 0 containers: []
	W0927 01:43:50.311620   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:50.311627   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:50.311687   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:50.347686   69333 cri.go:89] found id: ""
	I0927 01:43:50.347709   69333 logs.go:276] 0 containers: []
	W0927 01:43:50.347721   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:50.347729   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:50.347778   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:50.386627   69333 cri.go:89] found id: ""
	I0927 01:43:50.386654   69333 logs.go:276] 0 containers: []
	W0927 01:43:50.386663   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:50.386669   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:50.386719   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:50.421512   69333 cri.go:89] found id: ""
	I0927 01:43:50.421538   69333 logs.go:276] 0 containers: []
	W0927 01:43:50.421547   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:50.421552   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:50.421603   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:50.461849   69333 cri.go:89] found id: ""
	I0927 01:43:50.461872   69333 logs.go:276] 0 containers: []
	W0927 01:43:50.461880   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:50.461885   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:50.461941   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:50.496517   69333 cri.go:89] found id: ""
	I0927 01:43:50.496540   69333 logs.go:276] 0 containers: []
	W0927 01:43:50.496548   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:50.496554   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:50.496600   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:50.532595   69333 cri.go:89] found id: ""
	I0927 01:43:50.532619   69333 logs.go:276] 0 containers: []
	W0927 01:43:50.532630   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:50.532638   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:50.532687   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:50.573213   69333 cri.go:89] found id: ""
	I0927 01:43:50.573241   69333 logs.go:276] 0 containers: []
	W0927 01:43:50.573252   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:50.573262   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:50.573275   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:50.625600   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:50.625633   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:50.639512   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:50.639535   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:50.708393   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:50.708415   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:50.708436   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:50.789812   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:50.789845   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:50.020837   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:52.021314   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:51.807713   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:54.308918   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:51.541348   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:53.542994   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:53.335858   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:53.349369   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:53.349441   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:53.386922   69333 cri.go:89] found id: ""
	I0927 01:43:53.386947   69333 logs.go:276] 0 containers: []
	W0927 01:43:53.386955   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:53.386961   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:53.387007   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:53.423614   69333 cri.go:89] found id: ""
	I0927 01:43:53.423640   69333 logs.go:276] 0 containers: []
	W0927 01:43:53.423651   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:53.423658   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:53.423721   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:53.463245   69333 cri.go:89] found id: ""
	I0927 01:43:53.463265   69333 logs.go:276] 0 containers: []
	W0927 01:43:53.463273   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:53.463280   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:53.463344   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:53.502093   69333 cri.go:89] found id: ""
	I0927 01:43:53.502123   69333 logs.go:276] 0 containers: []
	W0927 01:43:53.502133   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:53.502140   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:53.502196   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:53.538616   69333 cri.go:89] found id: ""
	I0927 01:43:53.538641   69333 logs.go:276] 0 containers: []
	W0927 01:43:53.538652   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:53.538659   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:53.538716   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:53.578580   69333 cri.go:89] found id: ""
	I0927 01:43:53.578609   69333 logs.go:276] 0 containers: []
	W0927 01:43:53.578617   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:53.578623   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:53.578685   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:53.615240   69333 cri.go:89] found id: ""
	I0927 01:43:53.615266   69333 logs.go:276] 0 containers: []
	W0927 01:43:53.615275   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:53.615282   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:53.615356   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:53.650987   69333 cri.go:89] found id: ""
	I0927 01:43:53.651011   69333 logs.go:276] 0 containers: []
	W0927 01:43:53.651019   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:53.651028   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:53.651038   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:53.664817   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:53.664841   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:53.737875   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:53.737894   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:53.737909   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:53.827293   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:53.827345   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:53.867157   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:53.867188   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:56.423435   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:56.437837   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:56.437912   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:56.480328   69333 cri.go:89] found id: ""
	I0927 01:43:56.480349   69333 logs.go:276] 0 containers: []
	W0927 01:43:56.480357   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:56.480364   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:56.480427   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:56.520627   69333 cri.go:89] found id: ""
	I0927 01:43:56.520651   69333 logs.go:276] 0 containers: []
	W0927 01:43:56.520660   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:56.520667   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:56.520726   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:56.561527   69333 cri.go:89] found id: ""
	I0927 01:43:56.561555   69333 logs.go:276] 0 containers: []
	W0927 01:43:56.561567   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:56.561574   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:56.561634   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:56.598751   69333 cri.go:89] found id: ""
	I0927 01:43:56.598783   69333 logs.go:276] 0 containers: []
	W0927 01:43:56.598794   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:56.598801   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:56.598861   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:56.634378   69333 cri.go:89] found id: ""
	I0927 01:43:56.634410   69333 logs.go:276] 0 containers: []
	W0927 01:43:56.634422   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:56.634429   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:56.634489   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:56.669819   69333 cri.go:89] found id: ""
	I0927 01:43:56.669852   69333 logs.go:276] 0 containers: []
	W0927 01:43:56.669863   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:56.669877   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:56.669929   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:56.703715   69333 cri.go:89] found id: ""
	I0927 01:43:56.703740   69333 logs.go:276] 0 containers: []
	W0927 01:43:56.703750   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:56.703757   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:56.703820   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:56.737208   69333 cri.go:89] found id: ""
	I0927 01:43:56.737234   69333 logs.go:276] 0 containers: []
	W0927 01:43:56.737245   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:56.737255   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:56.737269   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:56.749933   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:56.749960   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:56.822331   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:56.822353   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:56.822369   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:56.904415   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:56.904454   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:56.947108   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:56.947136   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:54.521004   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:56.521281   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:56.807935   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:58.808046   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:56.041831   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:58.042496   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:00.542924   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:59.500580   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:59.523807   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:59.523888   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:59.562931   69333 cri.go:89] found id: ""
	I0927 01:43:59.562955   69333 logs.go:276] 0 containers: []
	W0927 01:43:59.562963   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:59.562968   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:59.563013   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:59.599321   69333 cri.go:89] found id: ""
	I0927 01:43:59.599348   69333 logs.go:276] 0 containers: []
	W0927 01:43:59.599358   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:59.599363   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:59.599418   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:59.634404   69333 cri.go:89] found id: ""
	I0927 01:43:59.634431   69333 logs.go:276] 0 containers: []
	W0927 01:43:59.634441   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:59.634448   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:59.634498   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:59.672022   69333 cri.go:89] found id: ""
	I0927 01:43:59.672052   69333 logs.go:276] 0 containers: []
	W0927 01:43:59.672066   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:59.672074   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:59.672134   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:59.704617   69333 cri.go:89] found id: ""
	I0927 01:43:59.704647   69333 logs.go:276] 0 containers: []
	W0927 01:43:59.704657   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:59.704664   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:59.704712   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:59.740479   69333 cri.go:89] found id: ""
	I0927 01:43:59.740504   69333 logs.go:276] 0 containers: []
	W0927 01:43:59.740512   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:59.740517   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:59.740579   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:59.777123   69333 cri.go:89] found id: ""
	I0927 01:43:59.777155   69333 logs.go:276] 0 containers: []
	W0927 01:43:59.777166   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:59.777174   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:59.777234   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:59.817780   69333 cri.go:89] found id: ""
	I0927 01:43:59.817803   69333 logs.go:276] 0 containers: []
	W0927 01:43:59.817825   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:59.817841   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:59.817856   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:59.831252   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:59.831278   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:59.901912   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:59.901936   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:59.901949   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:59.983001   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:59.983034   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:00.030989   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:00.031020   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:59.020139   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:01.020925   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:01.306853   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:03.308075   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:03.042494   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:05.043814   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:02.583949   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:02.596723   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:02.596798   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:02.630927   69333 cri.go:89] found id: ""
	I0927 01:44:02.630953   69333 logs.go:276] 0 containers: []
	W0927 01:44:02.630962   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:02.630967   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:02.631012   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:02.664156   69333 cri.go:89] found id: ""
	I0927 01:44:02.664186   69333 logs.go:276] 0 containers: []
	W0927 01:44:02.664198   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:02.664205   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:02.664259   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:02.698823   69333 cri.go:89] found id: ""
	I0927 01:44:02.698847   69333 logs.go:276] 0 containers: []
	W0927 01:44:02.698860   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:02.698865   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:02.698913   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:02.736114   69333 cri.go:89] found id: ""
	I0927 01:44:02.736142   69333 logs.go:276] 0 containers: []
	W0927 01:44:02.736154   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:02.736161   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:02.736221   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:02.769739   69333 cri.go:89] found id: ""
	I0927 01:44:02.769763   69333 logs.go:276] 0 containers: []
	W0927 01:44:02.769771   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:02.769785   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:02.769844   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:02.804798   69333 cri.go:89] found id: ""
	I0927 01:44:02.804871   69333 logs.go:276] 0 containers: []
	W0927 01:44:02.804887   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:02.804898   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:02.804958   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:02.841197   69333 cri.go:89] found id: ""
	I0927 01:44:02.841224   69333 logs.go:276] 0 containers: []
	W0927 01:44:02.841236   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:02.841243   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:02.841301   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:02.881278   69333 cri.go:89] found id: ""
	I0927 01:44:02.881310   69333 logs.go:276] 0 containers: []
	W0927 01:44:02.881321   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:02.881331   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:02.881345   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:02.935149   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:02.935183   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:02.950245   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:02.950273   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:03.020241   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:03.020263   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:03.020277   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:03.104467   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:03.104503   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:05.643070   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:05.656656   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:05.656716   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:05.694022   69333 cri.go:89] found id: ""
	I0927 01:44:05.694045   69333 logs.go:276] 0 containers: []
	W0927 01:44:05.694053   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:05.694059   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:05.694123   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:05.728575   69333 cri.go:89] found id: ""
	I0927 01:44:05.728600   69333 logs.go:276] 0 containers: []
	W0927 01:44:05.728607   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:05.728613   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:05.728667   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:05.768546   69333 cri.go:89] found id: ""
	I0927 01:44:05.768572   69333 logs.go:276] 0 containers: []
	W0927 01:44:05.768583   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:05.768590   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:05.768652   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:05.809504   69333 cri.go:89] found id: ""
	I0927 01:44:05.809527   69333 logs.go:276] 0 containers: []
	W0927 01:44:05.809536   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:05.809543   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:05.809600   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:05.846387   69333 cri.go:89] found id: ""
	I0927 01:44:05.846415   69333 logs.go:276] 0 containers: []
	W0927 01:44:05.846422   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:05.846428   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:05.846479   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:05.879579   69333 cri.go:89] found id: ""
	I0927 01:44:05.879608   69333 logs.go:276] 0 containers: []
	W0927 01:44:05.879619   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:05.879626   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:05.879684   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:05.928932   69333 cri.go:89] found id: ""
	I0927 01:44:05.928961   69333 logs.go:276] 0 containers: []
	W0927 01:44:05.928970   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:05.928978   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:05.929037   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:05.986463   69333 cri.go:89] found id: ""
	I0927 01:44:05.986490   69333 logs.go:276] 0 containers: []
	W0927 01:44:05.986499   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:05.986507   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:05.986521   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:06.039984   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:06.040011   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:06.053025   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:06.053051   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:06.127277   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:06.127316   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:06.127330   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:06.201473   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:06.201504   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:03.520539   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:06.021584   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:05.808474   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:08.307407   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:07.542959   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:10.042223   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:08.739339   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:08.753354   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:08.753418   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:08.788513   69333 cri.go:89] found id: ""
	I0927 01:44:08.788544   69333 logs.go:276] 0 containers: []
	W0927 01:44:08.788556   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:08.788563   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:08.788648   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:08.824615   69333 cri.go:89] found id: ""
	I0927 01:44:08.824642   69333 logs.go:276] 0 containers: []
	W0927 01:44:08.824653   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:08.824661   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:08.824724   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:08.858327   69333 cri.go:89] found id: ""
	I0927 01:44:08.858354   69333 logs.go:276] 0 containers: []
	W0927 01:44:08.858365   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:08.858372   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:08.858430   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:08.896140   69333 cri.go:89] found id: ""
	I0927 01:44:08.896168   69333 logs.go:276] 0 containers: []
	W0927 01:44:08.896175   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:08.896181   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:08.896229   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:08.931525   69333 cri.go:89] found id: ""
	I0927 01:44:08.931547   69333 logs.go:276] 0 containers: []
	W0927 01:44:08.931554   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:08.931560   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:08.931618   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:08.970224   69333 cri.go:89] found id: ""
	I0927 01:44:08.970246   69333 logs.go:276] 0 containers: []
	W0927 01:44:08.970256   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:08.970263   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:08.970331   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:09.007213   69333 cri.go:89] found id: ""
	I0927 01:44:09.007240   69333 logs.go:276] 0 containers: []
	W0927 01:44:09.007248   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:09.007255   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:09.007334   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:09.043078   69333 cri.go:89] found id: ""
	I0927 01:44:09.043111   69333 logs.go:276] 0 containers: []
	W0927 01:44:09.043122   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:09.043132   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:09.043147   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:09.096768   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:09.096801   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:09.110721   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:09.110747   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:09.182966   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:09.182990   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:09.183004   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:09.259497   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:09.259541   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:11.797307   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:11.812141   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:11.812196   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:11.846429   69333 cri.go:89] found id: ""
	I0927 01:44:11.846468   69333 logs.go:276] 0 containers: []
	W0927 01:44:11.846482   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:11.846489   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:11.846598   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:11.885294   69333 cri.go:89] found id: ""
	I0927 01:44:11.885322   69333 logs.go:276] 0 containers: []
	W0927 01:44:11.885333   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:11.885339   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:11.885398   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:11.920856   69333 cri.go:89] found id: ""
	I0927 01:44:11.920884   69333 logs.go:276] 0 containers: []
	W0927 01:44:11.920892   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:11.920898   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:11.920946   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:11.964540   69333 cri.go:89] found id: ""
	I0927 01:44:11.964564   69333 logs.go:276] 0 containers: []
	W0927 01:44:11.964574   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:11.964581   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:11.964634   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:12.000596   69333 cri.go:89] found id: ""
	I0927 01:44:12.000619   69333 logs.go:276] 0 containers: []
	W0927 01:44:12.000629   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:12.000636   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:12.000697   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:12.037773   69333 cri.go:89] found id: ""
	I0927 01:44:12.037808   69333 logs.go:276] 0 containers: []
	W0927 01:44:12.037819   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:12.037831   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:12.037893   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:12.074646   69333 cri.go:89] found id: ""
	I0927 01:44:12.074676   69333 logs.go:276] 0 containers: []
	W0927 01:44:12.074687   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:12.074692   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:12.074740   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:12.111771   69333 cri.go:89] found id: ""
	I0927 01:44:12.111802   69333 logs.go:276] 0 containers: []
	W0927 01:44:12.111813   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:12.111824   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:12.111837   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:12.160938   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:12.160971   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:12.175576   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:12.175605   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:12.245227   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:12.245263   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:12.245278   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:12.325161   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:12.325194   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:08.520111   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:10.520326   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:12.520755   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:10.808039   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:12.808843   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:12.042905   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:14.542272   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:14.867795   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:14.881053   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:14.881130   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:14.915193   69333 cri.go:89] found id: ""
	I0927 01:44:14.915224   69333 logs.go:276] 0 containers: []
	W0927 01:44:14.915234   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:14.915241   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:14.915318   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:14.951758   69333 cri.go:89] found id: ""
	I0927 01:44:14.951789   69333 logs.go:276] 0 containers: []
	W0927 01:44:14.951801   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:14.951808   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:14.951860   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:14.987875   69333 cri.go:89] found id: ""
	I0927 01:44:14.987906   69333 logs.go:276] 0 containers: []
	W0927 01:44:14.987917   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:14.987924   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:14.987985   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:15.025780   69333 cri.go:89] found id: ""
	I0927 01:44:15.025810   69333 logs.go:276] 0 containers: []
	W0927 01:44:15.025820   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:15.025828   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:15.025884   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:15.062135   69333 cri.go:89] found id: ""
	I0927 01:44:15.062157   69333 logs.go:276] 0 containers: []
	W0927 01:44:15.062165   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:15.062172   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:15.062225   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:15.097090   69333 cri.go:89] found id: ""
	I0927 01:44:15.097112   69333 logs.go:276] 0 containers: []
	W0927 01:44:15.097119   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:15.097126   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:15.097170   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:15.130528   69333 cri.go:89] found id: ""
	I0927 01:44:15.130552   69333 logs.go:276] 0 containers: []
	W0927 01:44:15.130561   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:15.130569   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:15.130615   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:15.165422   69333 cri.go:89] found id: ""
	I0927 01:44:15.165450   69333 logs.go:276] 0 containers: []
	W0927 01:44:15.165457   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:15.165465   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:15.165474   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:15.214612   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:15.214651   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:15.230294   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:15.230318   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:15.303339   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:15.303362   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:15.303375   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:15.382046   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:15.382081   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:14.520833   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:17.021034   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:15.308397   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:17.808221   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:16.542334   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:18.543785   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:17.923331   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:17.937693   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:17.937765   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:17.972677   69333 cri.go:89] found id: ""
	I0927 01:44:17.972699   69333 logs.go:276] 0 containers: []
	W0927 01:44:17.972707   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:17.972714   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:17.972764   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:18.004818   69333 cri.go:89] found id: ""
	I0927 01:44:18.004846   69333 logs.go:276] 0 containers: []
	W0927 01:44:18.004854   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:18.004860   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:18.004907   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:18.044693   69333 cri.go:89] found id: ""
	I0927 01:44:18.044716   69333 logs.go:276] 0 containers: []
	W0927 01:44:18.044723   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:18.044728   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:18.044772   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:18.079205   69333 cri.go:89] found id: ""
	I0927 01:44:18.079235   69333 logs.go:276] 0 containers: []
	W0927 01:44:18.079244   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:18.079249   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:18.079299   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:18.115272   69333 cri.go:89] found id: ""
	I0927 01:44:18.115322   69333 logs.go:276] 0 containers: []
	W0927 01:44:18.115335   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:18.115343   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:18.115412   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:18.150165   69333 cri.go:89] found id: ""
	I0927 01:44:18.150195   69333 logs.go:276] 0 containers: []
	W0927 01:44:18.150206   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:18.150213   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:18.150275   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:18.184971   69333 cri.go:89] found id: ""
	I0927 01:44:18.184999   69333 logs.go:276] 0 containers: []
	W0927 01:44:18.185009   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:18.185016   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:18.185083   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:18.219955   69333 cri.go:89] found id: ""
	I0927 01:44:18.219985   69333 logs.go:276] 0 containers: []
	W0927 01:44:18.219997   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:18.220008   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:18.220020   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:18.269713   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:18.269748   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:18.285224   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:18.285251   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:18.364887   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:18.364912   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:18.364927   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:18.450667   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:18.450706   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:20.991648   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:21.006472   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:21.006529   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:21.043455   69333 cri.go:89] found id: ""
	I0927 01:44:21.043476   69333 logs.go:276] 0 containers: []
	W0927 01:44:21.043486   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:21.043493   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:21.043549   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:21.080365   69333 cri.go:89] found id: ""
	I0927 01:44:21.080391   69333 logs.go:276] 0 containers: []
	W0927 01:44:21.080399   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:21.080405   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:21.080449   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:21.117576   69333 cri.go:89] found id: ""
	I0927 01:44:21.117624   69333 logs.go:276] 0 containers: []
	W0927 01:44:21.117636   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:21.117642   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:21.117703   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:21.154538   69333 cri.go:89] found id: ""
	I0927 01:44:21.154564   69333 logs.go:276] 0 containers: []
	W0927 01:44:21.154576   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:21.154584   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:21.154638   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:21.190046   69333 cri.go:89] found id: ""
	I0927 01:44:21.190070   69333 logs.go:276] 0 containers: []
	W0927 01:44:21.190080   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:21.190086   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:21.190147   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:21.226383   69333 cri.go:89] found id: ""
	I0927 01:44:21.226407   69333 logs.go:276] 0 containers: []
	W0927 01:44:21.226417   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:21.226424   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:21.226485   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:21.262090   69333 cri.go:89] found id: ""
	I0927 01:44:21.262113   69333 logs.go:276] 0 containers: []
	W0927 01:44:21.262124   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:21.262132   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:21.262188   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:21.297675   69333 cri.go:89] found id: ""
	I0927 01:44:21.297697   69333 logs.go:276] 0 containers: []
	W0927 01:44:21.297706   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:21.297716   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:21.297728   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:21.349668   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:21.349705   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:21.364608   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:21.364635   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:21.432570   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:21.432596   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:21.432612   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:21.507616   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:21.507661   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:19.520792   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:21.521341   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:20.307600   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:22.308557   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:24.807578   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:21.041736   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:23.041809   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:25.540974   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:24.054212   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:24.067954   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:24.068014   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:24.107017   69333 cri.go:89] found id: ""
	I0927 01:44:24.107045   69333 logs.go:276] 0 containers: []
	W0927 01:44:24.107056   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:24.107063   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:24.107124   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:24.144373   69333 cri.go:89] found id: ""
	I0927 01:44:24.144398   69333 logs.go:276] 0 containers: []
	W0927 01:44:24.144406   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:24.144411   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:24.144473   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:24.180010   69333 cri.go:89] found id: ""
	I0927 01:44:24.180038   69333 logs.go:276] 0 containers: []
	W0927 01:44:24.180048   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:24.180056   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:24.180118   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:24.214387   69333 cri.go:89] found id: ""
	I0927 01:44:24.214413   69333 logs.go:276] 0 containers: []
	W0927 01:44:24.214421   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:24.214426   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:24.214472   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:24.252597   69333 cri.go:89] found id: ""
	I0927 01:44:24.252623   69333 logs.go:276] 0 containers: []
	W0927 01:44:24.252631   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:24.252643   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:24.252705   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:24.292044   69333 cri.go:89] found id: ""
	I0927 01:44:24.292072   69333 logs.go:276] 0 containers: []
	W0927 01:44:24.292082   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:24.292089   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:24.292158   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:24.329899   69333 cri.go:89] found id: ""
	I0927 01:44:24.329924   69333 logs.go:276] 0 containers: []
	W0927 01:44:24.329934   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:24.329940   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:24.329998   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:24.367964   69333 cri.go:89] found id: ""
	I0927 01:44:24.367989   69333 logs.go:276] 0 containers: []
	W0927 01:44:24.368000   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:24.368010   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:24.368025   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:24.384151   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:24.384184   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:24.456916   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:24.456940   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:24.456958   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:24.539362   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:24.539399   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:24.578384   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:24.578411   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:27.132700   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:27.146218   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:27.146294   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:27.180958   69333 cri.go:89] found id: ""
	I0927 01:44:27.180984   69333 logs.go:276] 0 containers: []
	W0927 01:44:27.180992   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:27.180997   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:27.181043   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:27.215213   69333 cri.go:89] found id: ""
	I0927 01:44:27.215236   69333 logs.go:276] 0 containers: []
	W0927 01:44:27.215243   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:27.215249   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:27.215293   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:27.258192   69333 cri.go:89] found id: ""
	I0927 01:44:27.258216   69333 logs.go:276] 0 containers: []
	W0927 01:44:27.258226   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:27.258233   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:27.258289   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:27.292717   69333 cri.go:89] found id: ""
	I0927 01:44:27.292742   69333 logs.go:276] 0 containers: []
	W0927 01:44:27.292753   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:27.292760   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:27.292818   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:27.328038   69333 cri.go:89] found id: ""
	I0927 01:44:27.328066   69333 logs.go:276] 0 containers: []
	W0927 01:44:27.328076   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:27.328083   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:27.328152   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:24.021885   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:26.520726   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:27.308923   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:29.807825   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:27.542683   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:30.042293   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:27.363513   69333 cri.go:89] found id: ""
	I0927 01:44:27.363539   69333 logs.go:276] 0 containers: []
	W0927 01:44:27.363548   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:27.363553   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:27.363610   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:27.402201   69333 cri.go:89] found id: ""
	I0927 01:44:27.402223   69333 logs.go:276] 0 containers: []
	W0927 01:44:27.402231   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:27.402237   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:27.402290   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:27.436952   69333 cri.go:89] found id: ""
	I0927 01:44:27.436979   69333 logs.go:276] 0 containers: []
	W0927 01:44:27.436987   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:27.436995   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:27.437009   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:27.487908   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:27.487938   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:27.502170   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:27.502199   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:27.583909   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:27.583931   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:27.583943   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:27.660248   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:27.660286   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:30.201211   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:30.214276   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:30.214350   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:30.252445   69333 cri.go:89] found id: ""
	I0927 01:44:30.252474   69333 logs.go:276] 0 containers: []
	W0927 01:44:30.252484   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:30.252490   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:30.252538   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:30.287574   69333 cri.go:89] found id: ""
	I0927 01:44:30.287603   69333 logs.go:276] 0 containers: []
	W0927 01:44:30.287614   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:30.287621   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:30.287693   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:30.324674   69333 cri.go:89] found id: ""
	I0927 01:44:30.324699   69333 logs.go:276] 0 containers: []
	W0927 01:44:30.324711   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:30.324718   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:30.324779   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:30.360493   69333 cri.go:89] found id: ""
	I0927 01:44:30.360521   69333 logs.go:276] 0 containers: []
	W0927 01:44:30.360531   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:30.360539   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:30.360640   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:30.396219   69333 cri.go:89] found id: ""
	I0927 01:44:30.396252   69333 logs.go:276] 0 containers: []
	W0927 01:44:30.396263   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:30.396270   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:30.396328   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:30.431524   69333 cri.go:89] found id: ""
	I0927 01:44:30.431546   69333 logs.go:276] 0 containers: []
	W0927 01:44:30.431558   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:30.431564   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:30.431607   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:30.465887   69333 cri.go:89] found id: ""
	I0927 01:44:30.465915   69333 logs.go:276] 0 containers: []
	W0927 01:44:30.465926   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:30.465933   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:30.466000   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:30.501364   69333 cri.go:89] found id: ""
	I0927 01:44:30.501391   69333 logs.go:276] 0 containers: []
	W0927 01:44:30.501402   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:30.501411   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:30.501425   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:30.556344   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:30.556377   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:30.572619   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:30.572649   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:30.645996   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:30.646020   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:30.646032   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:30.737458   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:30.737531   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:28.521312   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:30.521421   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:33.020699   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:31.807949   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:33.809414   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:32.045244   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:34.542035   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:33.284306   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:33.298164   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:33.298224   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:33.334599   69333 cri.go:89] found id: ""
	I0927 01:44:33.334625   69333 logs.go:276] 0 containers: []
	W0927 01:44:33.334634   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:33.334654   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:33.334718   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:33.369006   69333 cri.go:89] found id: ""
	I0927 01:44:33.369034   69333 logs.go:276] 0 containers: []
	W0927 01:44:33.369044   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:33.369051   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:33.369119   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:33.407875   69333 cri.go:89] found id: ""
	I0927 01:44:33.407904   69333 logs.go:276] 0 containers: []
	W0927 01:44:33.407912   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:33.407918   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:33.407974   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:33.441048   69333 cri.go:89] found id: ""
	I0927 01:44:33.441083   69333 logs.go:276] 0 containers: []
	W0927 01:44:33.441094   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:33.441101   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:33.441156   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:33.478458   69333 cri.go:89] found id: ""
	I0927 01:44:33.478503   69333 logs.go:276] 0 containers: []
	W0927 01:44:33.478515   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:33.478522   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:33.478586   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:33.513756   69333 cri.go:89] found id: ""
	I0927 01:44:33.513784   69333 logs.go:276] 0 containers: []
	W0927 01:44:33.513795   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:33.513802   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:33.513862   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:33.554351   69333 cri.go:89] found id: ""
	I0927 01:44:33.554392   69333 logs.go:276] 0 containers: []
	W0927 01:44:33.554403   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:33.554410   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:33.554472   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:33.588484   69333 cri.go:89] found id: ""
	I0927 01:44:33.588512   69333 logs.go:276] 0 containers: []
	W0927 01:44:33.588533   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:33.588544   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:33.588559   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:33.665735   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:33.665775   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:33.704654   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:33.704687   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:33.755444   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:33.755475   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:33.770069   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:33.770095   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:33.841531   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:36.341963   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:36.355219   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:36.355294   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:36.395149   69333 cri.go:89] found id: ""
	I0927 01:44:36.395185   69333 logs.go:276] 0 containers: []
	W0927 01:44:36.395196   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:36.395203   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:36.395262   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:36.434620   69333 cri.go:89] found id: ""
	I0927 01:44:36.434649   69333 logs.go:276] 0 containers: []
	W0927 01:44:36.434661   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:36.434667   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:36.434729   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:36.468328   69333 cri.go:89] found id: ""
	I0927 01:44:36.468349   69333 logs.go:276] 0 containers: []
	W0927 01:44:36.468357   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:36.468362   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:36.468427   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:36.506386   69333 cri.go:89] found id: ""
	I0927 01:44:36.506413   69333 logs.go:276] 0 containers: []
	W0927 01:44:36.506421   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:36.506427   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:36.506482   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:36.546583   69333 cri.go:89] found id: ""
	I0927 01:44:36.546607   69333 logs.go:276] 0 containers: []
	W0927 01:44:36.546614   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:36.546620   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:36.546665   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:36.581694   69333 cri.go:89] found id: ""
	I0927 01:44:36.581721   69333 logs.go:276] 0 containers: []
	W0927 01:44:36.581730   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:36.581737   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:36.581782   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:36.617775   69333 cri.go:89] found id: ""
	I0927 01:44:36.617799   69333 logs.go:276] 0 containers: []
	W0927 01:44:36.617807   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:36.617813   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:36.617877   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:36.654443   69333 cri.go:89] found id: ""
	I0927 01:44:36.654470   69333 logs.go:276] 0 containers: []
	W0927 01:44:36.654478   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:36.654486   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:36.654496   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:36.705787   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:36.705817   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:36.720643   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:36.720677   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:36.800037   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:36.800061   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:36.800091   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:36.886845   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:36.886884   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:35.023634   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:37.520794   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:36.307516   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:38.307899   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:37.041620   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:39.044257   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:39.429349   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:39.442899   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:39.442973   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:39.481752   69333 cri.go:89] found id: ""
	I0927 01:44:39.481782   69333 logs.go:276] 0 containers: []
	W0927 01:44:39.481793   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:39.481799   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:39.481858   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:39.516074   69333 cri.go:89] found id: ""
	I0927 01:44:39.516103   69333 logs.go:276] 0 containers: []
	W0927 01:44:39.516114   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:39.516130   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:39.516188   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:39.563351   69333 cri.go:89] found id: ""
	I0927 01:44:39.563375   69333 logs.go:276] 0 containers: []
	W0927 01:44:39.563386   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:39.563392   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:39.563455   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:39.601417   69333 cri.go:89] found id: ""
	I0927 01:44:39.601445   69333 logs.go:276] 0 containers: []
	W0927 01:44:39.601455   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:39.601469   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:39.601529   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:39.634537   69333 cri.go:89] found id: ""
	I0927 01:44:39.634565   69333 logs.go:276] 0 containers: []
	W0927 01:44:39.634576   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:39.634582   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:39.634642   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:39.668910   69333 cri.go:89] found id: ""
	I0927 01:44:39.668937   69333 logs.go:276] 0 containers: []
	W0927 01:44:39.668948   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:39.668955   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:39.669013   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:39.701992   69333 cri.go:89] found id: ""
	I0927 01:44:39.702014   69333 logs.go:276] 0 containers: []
	W0927 01:44:39.702021   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:39.702027   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:39.702074   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:39.741579   69333 cri.go:89] found id: ""
	I0927 01:44:39.741601   69333 logs.go:276] 0 containers: []
	W0927 01:44:39.741610   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:39.741618   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:39.741627   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:39.806476   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:39.806510   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:39.820228   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:39.820255   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:39.893137   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:39.893167   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:39.893181   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:39.974477   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:39.974514   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:40.021226   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:42.521217   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:40.309154   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:42.808724   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:41.542308   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:44.042015   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:42.517449   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:42.532200   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:42.532266   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:42.568872   69333 cri.go:89] found id: ""
	I0927 01:44:42.568901   69333 logs.go:276] 0 containers: []
	W0927 01:44:42.568911   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:42.568919   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:42.568980   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:42.605069   69333 cri.go:89] found id: ""
	I0927 01:44:42.605220   69333 logs.go:276] 0 containers: []
	W0927 01:44:42.605251   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:42.605261   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:42.605335   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:42.641637   69333 cri.go:89] found id: ""
	I0927 01:44:42.641665   69333 logs.go:276] 0 containers: []
	W0927 01:44:42.641673   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:42.641680   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:42.641742   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:42.677333   69333 cri.go:89] found id: ""
	I0927 01:44:42.677361   69333 logs.go:276] 0 containers: []
	W0927 01:44:42.677376   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:42.677382   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:42.677439   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:42.712456   69333 cri.go:89] found id: ""
	I0927 01:44:42.712484   69333 logs.go:276] 0 containers: []
	W0927 01:44:42.712495   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:42.712501   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:42.712565   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:42.745109   69333 cri.go:89] found id: ""
	I0927 01:44:42.745140   69333 logs.go:276] 0 containers: []
	W0927 01:44:42.745150   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:42.745157   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:42.745226   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:42.779427   69333 cri.go:89] found id: ""
	I0927 01:44:42.779449   69333 logs.go:276] 0 containers: []
	W0927 01:44:42.779457   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:42.779462   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:42.779508   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:42.823920   69333 cri.go:89] found id: ""
	I0927 01:44:42.823946   69333 logs.go:276] 0 containers: []
	W0927 01:44:42.823954   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:42.823963   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:42.823972   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:42.881345   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:42.881380   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:42.896076   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:42.896100   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:42.971775   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:42.971796   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:42.971809   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:43.054461   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:43.054494   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:45.596681   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:45.610817   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:45.610882   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:45.647628   69333 cri.go:89] found id: ""
	I0927 01:44:45.647654   69333 logs.go:276] 0 containers: []
	W0927 01:44:45.647662   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:45.647668   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:45.647715   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:45.685480   69333 cri.go:89] found id: ""
	I0927 01:44:45.685507   69333 logs.go:276] 0 containers: []
	W0927 01:44:45.685514   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:45.685520   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:45.685573   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:45.721601   69333 cri.go:89] found id: ""
	I0927 01:44:45.721624   69333 logs.go:276] 0 containers: []
	W0927 01:44:45.721632   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:45.721637   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:45.721700   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:45.756763   69333 cri.go:89] found id: ""
	I0927 01:44:45.756788   69333 logs.go:276] 0 containers: []
	W0927 01:44:45.756796   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:45.756802   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:45.756858   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:45.792891   69333 cri.go:89] found id: ""
	I0927 01:44:45.792917   69333 logs.go:276] 0 containers: []
	W0927 01:44:45.792927   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:45.792934   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:45.792996   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:45.828716   69333 cri.go:89] found id: ""
	I0927 01:44:45.828739   69333 logs.go:276] 0 containers: []
	W0927 01:44:45.828747   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:45.828753   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:45.828807   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:45.868813   69333 cri.go:89] found id: ""
	I0927 01:44:45.868840   69333 logs.go:276] 0 containers: []
	W0927 01:44:45.868848   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:45.868853   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:45.868905   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:45.907281   69333 cri.go:89] found id: ""
	I0927 01:44:45.907327   69333 logs.go:276] 0 containers: []
	W0927 01:44:45.907341   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:45.907352   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:45.907371   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:45.958539   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:45.958574   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:45.972540   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:45.972567   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:46.046083   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:46.046124   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:46.046141   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:46.124313   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:46.124349   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:45.021100   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:47.021435   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:45.307916   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:47.807187   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:49.809212   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:46.042143   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:48.541984   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:50.542678   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:48.673701   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:48.687673   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:48.687744   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:48.722269   69333 cri.go:89] found id: ""
	I0927 01:44:48.722291   69333 logs.go:276] 0 containers: []
	W0927 01:44:48.722302   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:48.722308   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:48.722370   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:48.758297   69333 cri.go:89] found id: ""
	I0927 01:44:48.758318   69333 logs.go:276] 0 containers: []
	W0927 01:44:48.758326   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:48.758331   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:48.758377   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:48.792706   69333 cri.go:89] found id: ""
	I0927 01:44:48.792730   69333 logs.go:276] 0 containers: []
	W0927 01:44:48.792738   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:48.792744   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:48.792792   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:48.827015   69333 cri.go:89] found id: ""
	I0927 01:44:48.827035   69333 logs.go:276] 0 containers: []
	W0927 01:44:48.827047   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:48.827052   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:48.827095   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:48.862538   69333 cri.go:89] found id: ""
	I0927 01:44:48.862564   69333 logs.go:276] 0 containers: []
	W0927 01:44:48.862572   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:48.862577   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:48.862632   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:48.896118   69333 cri.go:89] found id: ""
	I0927 01:44:48.896144   69333 logs.go:276] 0 containers: []
	W0927 01:44:48.896154   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:48.896166   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:48.896225   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:48.932483   69333 cri.go:89] found id: ""
	I0927 01:44:48.932511   69333 logs.go:276] 0 containers: []
	W0927 01:44:48.932519   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:48.932524   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:48.932576   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:48.971864   69333 cri.go:89] found id: ""
	I0927 01:44:48.971890   69333 logs.go:276] 0 containers: []
	W0927 01:44:48.971898   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:48.971906   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:48.971919   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:49.028163   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:49.028199   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:49.042780   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:49.042805   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:49.116454   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:49.116476   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:49.116491   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:49.196048   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:49.196084   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:51.735108   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:51.749191   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:51.749258   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:51.784776   69333 cri.go:89] found id: ""
	I0927 01:44:51.784804   69333 logs.go:276] 0 containers: []
	W0927 01:44:51.784815   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:51.784823   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:51.784880   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:51.822807   69333 cri.go:89] found id: ""
	I0927 01:44:51.822836   69333 logs.go:276] 0 containers: []
	W0927 01:44:51.822847   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:51.822854   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:51.822912   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:51.858700   69333 cri.go:89] found id: ""
	I0927 01:44:51.858726   69333 logs.go:276] 0 containers: []
	W0927 01:44:51.858737   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:51.858744   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:51.858812   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:51.894945   69333 cri.go:89] found id: ""
	I0927 01:44:51.894968   69333 logs.go:276] 0 containers: []
	W0927 01:44:51.894975   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:51.894980   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:51.895025   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:51.939475   69333 cri.go:89] found id: ""
	I0927 01:44:51.939503   69333 logs.go:276] 0 containers: []
	W0927 01:44:51.939518   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:51.939524   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:51.939569   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:51.982626   69333 cri.go:89] found id: ""
	I0927 01:44:51.982654   69333 logs.go:276] 0 containers: []
	W0927 01:44:51.982665   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:51.982673   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:51.982731   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:52.050446   69333 cri.go:89] found id: ""
	I0927 01:44:52.050473   69333 logs.go:276] 0 containers: []
	W0927 01:44:52.050483   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:52.050490   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:52.050549   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:52.092637   69333 cri.go:89] found id: ""
	I0927 01:44:52.092666   69333 logs.go:276] 0 containers: []
	W0927 01:44:52.092676   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:52.092686   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:52.092700   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:52.132135   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:52.132165   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:52.186537   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:52.186572   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:52.200001   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:52.200027   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:52.282068   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:52.282093   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:52.282108   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:49.521281   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:52.021229   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:52.308560   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:54.309001   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:53.042624   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:55.043212   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:54.866565   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:54.880400   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:54.880460   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:54.918963   69333 cri.go:89] found id: ""
	I0927 01:44:54.919004   69333 logs.go:276] 0 containers: []
	W0927 01:44:54.919027   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:54.919036   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:54.919107   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:54.959918   69333 cri.go:89] found id: ""
	I0927 01:44:54.959947   69333 logs.go:276] 0 containers: []
	W0927 01:44:54.959958   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:54.959965   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:54.960026   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:55.004348   69333 cri.go:89] found id: ""
	I0927 01:44:55.004370   69333 logs.go:276] 0 containers: []
	W0927 01:44:55.004378   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:55.004392   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:55.004446   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:55.045190   69333 cri.go:89] found id: ""
	I0927 01:44:55.045213   69333 logs.go:276] 0 containers: []
	W0927 01:44:55.045220   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:55.045225   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:55.045278   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:55.087638   69333 cri.go:89] found id: ""
	I0927 01:44:55.087663   69333 logs.go:276] 0 containers: []
	W0927 01:44:55.087671   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:55.087677   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:55.087739   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:55.126899   69333 cri.go:89] found id: ""
	I0927 01:44:55.126932   69333 logs.go:276] 0 containers: []
	W0927 01:44:55.126943   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:55.126951   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:55.127012   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:55.167593   69333 cri.go:89] found id: ""
	I0927 01:44:55.167624   69333 logs.go:276] 0 containers: []
	W0927 01:44:55.167635   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:55.167643   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:55.167706   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:55.208362   69333 cri.go:89] found id: ""
	I0927 01:44:55.208388   69333 logs.go:276] 0 containers: []
	W0927 01:44:55.208399   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:55.208409   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:55.208424   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:55.247198   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:55.247221   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:55.299408   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:55.299443   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:55.315745   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:55.315775   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:55.387499   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:55.387523   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:55.387539   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:54.021502   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:56.520627   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:56.807487   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:58.807902   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:57.541517   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:59.542233   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:57.968863   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:57.987921   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:57.987988   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:58.036770   69333 cri.go:89] found id: ""
	I0927 01:44:58.036802   69333 logs.go:276] 0 containers: []
	W0927 01:44:58.036813   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:58.036824   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:58.036878   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:58.072461   69333 cri.go:89] found id: ""
	I0927 01:44:58.072484   69333 logs.go:276] 0 containers: []
	W0927 01:44:58.072492   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:58.072499   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:58.072551   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:58.107247   69333 cri.go:89] found id: ""
	I0927 01:44:58.107273   69333 logs.go:276] 0 containers: []
	W0927 01:44:58.107284   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:58.107290   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:58.107365   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:58.149050   69333 cri.go:89] found id: ""
	I0927 01:44:58.149080   69333 logs.go:276] 0 containers: []
	W0927 01:44:58.149091   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:58.149099   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:58.149162   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:58.188167   69333 cri.go:89] found id: ""
	I0927 01:44:58.188198   69333 logs.go:276] 0 containers: []
	W0927 01:44:58.188209   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:58.188217   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:58.188283   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:58.224291   69333 cri.go:89] found id: ""
	I0927 01:44:58.224319   69333 logs.go:276] 0 containers: []
	W0927 01:44:58.224329   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:58.224337   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:58.224401   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:58.258786   69333 cri.go:89] found id: ""
	I0927 01:44:58.258813   69333 logs.go:276] 0 containers: []
	W0927 01:44:58.258822   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:58.258828   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:58.258885   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:58.298310   69333 cri.go:89] found id: ""
	I0927 01:44:58.298338   69333 logs.go:276] 0 containers: []
	W0927 01:44:58.298349   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:58.298359   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:58.298373   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:58.340299   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:58.340330   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:58.395097   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:58.395130   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:58.410653   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:58.410677   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:58.479437   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:58.479459   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:58.479470   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:45:01.057473   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:45:01.071746   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:45:01.071818   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:45:01.112652   69333 cri.go:89] found id: ""
	I0927 01:45:01.112676   69333 logs.go:276] 0 containers: []
	W0927 01:45:01.112684   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:45:01.112690   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:45:01.112735   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:45:01.146071   69333 cri.go:89] found id: ""
	I0927 01:45:01.146100   69333 logs.go:276] 0 containers: []
	W0927 01:45:01.146111   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:45:01.146119   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:45:01.146188   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:45:01.188640   69333 cri.go:89] found id: ""
	I0927 01:45:01.188663   69333 logs.go:276] 0 containers: []
	W0927 01:45:01.188673   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:45:01.188679   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:45:01.188743   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:45:01.225024   69333 cri.go:89] found id: ""
	I0927 01:45:01.225050   69333 logs.go:276] 0 containers: []
	W0927 01:45:01.225060   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:45:01.225067   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:45:01.225128   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:45:01.262459   69333 cri.go:89] found id: ""
	I0927 01:45:01.262487   69333 logs.go:276] 0 containers: []
	W0927 01:45:01.262498   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:45:01.262505   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:45:01.262560   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:45:01.298567   69333 cri.go:89] found id: ""
	I0927 01:45:01.298588   69333 logs.go:276] 0 containers: []
	W0927 01:45:01.298597   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:45:01.298603   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:45:01.298647   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:45:01.335051   69333 cri.go:89] found id: ""
	I0927 01:45:01.335084   69333 logs.go:276] 0 containers: []
	W0927 01:45:01.335094   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:45:01.335100   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:45:01.335149   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:45:01.371187   69333 cri.go:89] found id: ""
	I0927 01:45:01.371217   69333 logs.go:276] 0 containers: []
	W0927 01:45:01.371227   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:45:01.371237   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:45:01.371252   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:45:01.385163   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:45:01.385189   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:45:01.457256   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:45:01.457298   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:45:01.457313   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:45:01.537788   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:45:01.537819   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:45:01.580645   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:45:01.580672   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:58.521367   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:01.020826   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:03.021213   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:00.808021   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:03.307242   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:01.542831   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:04.042010   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:04.131877   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:45:04.145175   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:45:04.145248   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:45:04.179508   69333 cri.go:89] found id: ""
	I0927 01:45:04.179535   69333 logs.go:276] 0 containers: []
	W0927 01:45:04.179545   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:45:04.179552   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:45:04.179612   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:45:04.213497   69333 cri.go:89] found id: ""
	I0927 01:45:04.213533   69333 logs.go:276] 0 containers: []
	W0927 01:45:04.213544   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:45:04.213551   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:45:04.213606   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:45:04.249708   69333 cri.go:89] found id: ""
	I0927 01:45:04.249737   69333 logs.go:276] 0 containers: []
	W0927 01:45:04.249747   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:45:04.249754   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:45:04.249824   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:45:04.288283   69333 cri.go:89] found id: ""
	I0927 01:45:04.288306   69333 logs.go:276] 0 containers: []
	W0927 01:45:04.288314   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:45:04.288319   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:45:04.288368   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:45:04.325515   69333 cri.go:89] found id: ""
	I0927 01:45:04.325539   69333 logs.go:276] 0 containers: []
	W0927 01:45:04.325549   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:45:04.325560   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:45:04.325618   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:45:04.363485   69333 cri.go:89] found id: ""
	I0927 01:45:04.363511   69333 logs.go:276] 0 containers: []
	W0927 01:45:04.363521   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:45:04.363528   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:45:04.363586   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:45:04.398834   69333 cri.go:89] found id: ""
	I0927 01:45:04.398863   69333 logs.go:276] 0 containers: []
	W0927 01:45:04.398875   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:45:04.398882   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:45:04.398948   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:45:04.433408   69333 cri.go:89] found id: ""
	I0927 01:45:04.433435   69333 logs.go:276] 0 containers: []
	W0927 01:45:04.433443   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:45:04.433451   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:45:04.433461   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:45:04.485354   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:45:04.485392   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:45:04.499007   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:45:04.499031   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:45:04.569376   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:45:04.569405   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:45:04.569420   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:45:04.646614   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:45:04.646651   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:45:07.186491   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:45:07.200510   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:45:07.200575   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:45:07.239519   69333 cri.go:89] found id: ""
	I0927 01:45:07.239542   69333 logs.go:276] 0 containers: []
	W0927 01:45:07.239553   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:45:07.239562   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:45:07.239751   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:45:07.276820   69333 cri.go:89] found id: ""
	I0927 01:45:07.276854   69333 logs.go:276] 0 containers: []
	W0927 01:45:07.276863   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:45:07.276870   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:45:07.276932   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:45:07.312580   69333 cri.go:89] found id: ""
	I0927 01:45:07.312604   69333 logs.go:276] 0 containers: []
	W0927 01:45:07.312613   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:45:07.312619   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:45:07.312676   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:45:05.520930   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:08.020001   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:05.807739   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:07.807914   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:06.042390   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:08.542149   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:10.542438   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:07.350763   69333 cri.go:89] found id: ""
	I0927 01:45:07.350788   69333 logs.go:276] 0 containers: []
	W0927 01:45:07.350799   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:45:07.350806   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:45:07.350861   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:45:07.385347   69333 cri.go:89] found id: ""
	I0927 01:45:07.385376   69333 logs.go:276] 0 containers: []
	W0927 01:45:07.385383   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:45:07.385389   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:45:07.385439   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:45:07.420665   69333 cri.go:89] found id: ""
	I0927 01:45:07.420696   69333 logs.go:276] 0 containers: []
	W0927 01:45:07.420708   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:45:07.420718   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:45:07.420768   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:45:07.453707   69333 cri.go:89] found id: ""
	I0927 01:45:07.453737   69333 logs.go:276] 0 containers: []
	W0927 01:45:07.453746   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:45:07.453752   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:45:07.453806   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:45:07.489467   69333 cri.go:89] found id: ""
	I0927 01:45:07.489497   69333 logs.go:276] 0 containers: []
	W0927 01:45:07.489508   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:45:07.489520   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:45:07.489531   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:45:07.569464   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:45:07.569496   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:45:07.609123   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:45:07.609160   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:45:07.659556   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:45:07.659590   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:45:07.673163   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:45:07.673191   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:45:07.751340   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:45:10.252511   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:45:10.266651   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:45:10.266706   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:45:10.304131   69333 cri.go:89] found id: ""
	I0927 01:45:10.304160   69333 logs.go:276] 0 containers: []
	W0927 01:45:10.304171   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:45:10.304178   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:45:10.304243   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:45:10.339267   69333 cri.go:89] found id: ""
	I0927 01:45:10.339295   69333 logs.go:276] 0 containers: []
	W0927 01:45:10.339321   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:45:10.339329   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:45:10.339397   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:45:10.376268   69333 cri.go:89] found id: ""
	I0927 01:45:10.376298   69333 logs.go:276] 0 containers: []
	W0927 01:45:10.376308   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:45:10.376319   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:45:10.376380   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:45:10.413944   69333 cri.go:89] found id: ""
	I0927 01:45:10.413970   69333 logs.go:276] 0 containers: []
	W0927 01:45:10.413978   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:45:10.413984   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:45:10.414033   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:45:10.449205   69333 cri.go:89] found id: ""
	I0927 01:45:10.449226   69333 logs.go:276] 0 containers: []
	W0927 01:45:10.449234   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:45:10.449240   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:45:10.449289   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:45:10.487927   69333 cri.go:89] found id: ""
	I0927 01:45:10.487947   69333 logs.go:276] 0 containers: []
	W0927 01:45:10.487955   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:45:10.487961   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:45:10.488018   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:45:10.525062   69333 cri.go:89] found id: ""
	I0927 01:45:10.525085   69333 logs.go:276] 0 containers: []
	W0927 01:45:10.525095   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:45:10.525102   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:45:10.525163   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:45:10.560718   69333 cri.go:89] found id: ""
	I0927 01:45:10.560768   69333 logs.go:276] 0 containers: []
	W0927 01:45:10.560779   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:45:10.560790   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:45:10.560803   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:45:10.641755   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:45:10.641781   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:45:10.641796   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:45:10.719775   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:45:10.719807   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:45:10.761952   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:45:10.761978   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:45:10.815296   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:45:10.815330   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:45:10.023849   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:12.520577   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:10.307967   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:12.807872   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:14.808602   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:13.041469   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:15.036533   69234 pod_ready.go:82] duration metric: took 4m0.000873058s for pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace to be "Ready" ...
	E0927 01:45:15.036568   69234 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace to be "Ready" (will not retry!)
	I0927 01:45:15.036588   69234 pod_ready.go:39] duration metric: took 4m6.530278971s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:45:15.036645   69234 kubeadm.go:597] duration metric: took 4m16.375010355s to restartPrimaryControlPlane
	W0927 01:45:15.036713   69234 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0927 01:45:15.036743   69234 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0927 01:45:13.330300   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:45:13.343840   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:45:13.343893   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:45:13.378904   69333 cri.go:89] found id: ""
	I0927 01:45:13.378933   69333 logs.go:276] 0 containers: []
	W0927 01:45:13.378944   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:45:13.378952   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:45:13.379010   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:45:13.417375   69333 cri.go:89] found id: ""
	I0927 01:45:13.417403   69333 logs.go:276] 0 containers: []
	W0927 01:45:13.417415   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:45:13.417422   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:45:13.417482   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:45:13.456265   69333 cri.go:89] found id: ""
	I0927 01:45:13.456291   69333 logs.go:276] 0 containers: []
	W0927 01:45:13.456302   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:45:13.456310   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:45:13.456358   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:45:13.502205   69333 cri.go:89] found id: ""
	I0927 01:45:13.502229   69333 logs.go:276] 0 containers: []
	W0927 01:45:13.502240   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:45:13.502247   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:45:13.502310   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:45:13.543617   69333 cri.go:89] found id: ""
	I0927 01:45:13.543642   69333 logs.go:276] 0 containers: []
	W0927 01:45:13.543652   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:45:13.543660   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:45:13.543723   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:45:13.580268   69333 cri.go:89] found id: ""
	I0927 01:45:13.580295   69333 logs.go:276] 0 containers: []
	W0927 01:45:13.580305   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:45:13.580313   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:45:13.580374   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:45:13.616681   69333 cri.go:89] found id: ""
	I0927 01:45:13.616705   69333 logs.go:276] 0 containers: []
	W0927 01:45:13.616713   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:45:13.616718   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:45:13.616765   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:45:13.653389   69333 cri.go:89] found id: ""
	I0927 01:45:13.653412   69333 logs.go:276] 0 containers: []
	W0927 01:45:13.653420   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:45:13.653430   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:45:13.653442   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:45:13.666511   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:45:13.666534   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:45:13.742282   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:45:13.742300   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:45:13.742311   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:45:13.825800   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:45:13.825836   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:45:13.876345   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:45:13.876376   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:45:16.429245   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:45:16.443286   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:45:16.443366   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:45:16.481601   69333 cri.go:89] found id: ""
	I0927 01:45:16.481626   69333 logs.go:276] 0 containers: []
	W0927 01:45:16.481637   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:45:16.481645   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:45:16.481703   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:45:16.513626   69333 cri.go:89] found id: ""
	I0927 01:45:16.513652   69333 logs.go:276] 0 containers: []
	W0927 01:45:16.513659   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:45:16.513665   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:45:16.513710   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:45:16.552531   69333 cri.go:89] found id: ""
	I0927 01:45:16.552565   69333 logs.go:276] 0 containers: []
	W0927 01:45:16.552574   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:45:16.552580   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:45:16.552636   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:45:16.587252   69333 cri.go:89] found id: ""
	I0927 01:45:16.587282   69333 logs.go:276] 0 containers: []
	W0927 01:45:16.587294   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:45:16.587316   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:45:16.587377   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:45:16.628376   69333 cri.go:89] found id: ""
	I0927 01:45:16.628401   69333 logs.go:276] 0 containers: []
	W0927 01:45:16.628410   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:45:16.628417   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:45:16.628482   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:45:16.669603   69333 cri.go:89] found id: ""
	I0927 01:45:16.669639   69333 logs.go:276] 0 containers: []
	W0927 01:45:16.669651   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:45:16.669658   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:45:16.669731   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:45:16.705581   69333 cri.go:89] found id: ""
	I0927 01:45:16.705607   69333 logs.go:276] 0 containers: []
	W0927 01:45:16.705618   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:45:16.705626   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:45:16.705682   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:45:16.740710   69333 cri.go:89] found id: ""
	I0927 01:45:16.740735   69333 logs.go:276] 0 containers: []
	W0927 01:45:16.740743   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:45:16.740759   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:45:16.740771   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:45:16.791025   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:45:16.791060   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:45:16.805990   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:45:16.806023   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:45:16.878313   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:45:16.878331   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:45:16.878346   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:45:16.966228   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:45:16.966269   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:45:14.521852   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:16.522127   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:17.307853   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:19.308018   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:19.512044   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:45:19.526801   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:45:19.526862   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:45:19.562063   69333 cri.go:89] found id: ""
	I0927 01:45:19.562089   69333 logs.go:276] 0 containers: []
	W0927 01:45:19.562098   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:45:19.562104   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:45:19.562159   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:45:19.598600   69333 cri.go:89] found id: ""
	I0927 01:45:19.598626   69333 logs.go:276] 0 containers: []
	W0927 01:45:19.598634   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:45:19.598642   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:45:19.598712   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:45:19.632544   69333 cri.go:89] found id: ""
	I0927 01:45:19.632564   69333 logs.go:276] 0 containers: []
	W0927 01:45:19.632572   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:45:19.632577   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:45:19.632635   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:45:19.671676   69333 cri.go:89] found id: ""
	I0927 01:45:19.671703   69333 logs.go:276] 0 containers: []
	W0927 01:45:19.671713   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:45:19.671721   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:45:19.671779   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:45:19.710321   69333 cri.go:89] found id: ""
	I0927 01:45:19.710351   69333 logs.go:276] 0 containers: []
	W0927 01:45:19.710362   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:45:19.710370   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:45:19.710438   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:45:19.746252   69333 cri.go:89] found id: ""
	I0927 01:45:19.746277   69333 logs.go:276] 0 containers: []
	W0927 01:45:19.746288   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:45:19.746295   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:45:19.746354   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:45:19.783089   69333 cri.go:89] found id: ""
	I0927 01:45:19.783112   69333 logs.go:276] 0 containers: []
	W0927 01:45:19.783121   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:45:19.783126   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:45:19.783189   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:45:19.821090   69333 cri.go:89] found id: ""
	I0927 01:45:19.821117   69333 logs.go:276] 0 containers: []
	W0927 01:45:19.821126   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:45:19.821134   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:45:19.821145   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:45:19.873539   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:45:19.873575   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:45:19.888446   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:45:19.888471   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:45:19.958009   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:45:19.958034   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:45:19.958050   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:45:20.037552   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:45:20.037587   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:45:19.022216   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:21.520606   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:21.808178   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:23.808273   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:22.579288   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:45:22.592789   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:45:22.592846   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:45:22.628148   69333 cri.go:89] found id: ""
	I0927 01:45:22.628178   69333 logs.go:276] 0 containers: []
	W0927 01:45:22.628186   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:45:22.628193   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:45:22.628240   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:45:22.664162   69333 cri.go:89] found id: ""
	I0927 01:45:22.664186   69333 logs.go:276] 0 containers: []
	W0927 01:45:22.664194   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:45:22.664200   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:45:22.664253   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:45:22.702077   69333 cri.go:89] found id: ""
	I0927 01:45:22.702104   69333 logs.go:276] 0 containers: []
	W0927 01:45:22.702115   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:45:22.702123   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:45:22.702183   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:45:22.739657   69333 cri.go:89] found id: ""
	I0927 01:45:22.739690   69333 logs.go:276] 0 containers: []
	W0927 01:45:22.739700   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:45:22.739708   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:45:22.739773   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:45:22.774109   69333 cri.go:89] found id: ""
	I0927 01:45:22.774137   69333 logs.go:276] 0 containers: []
	W0927 01:45:22.774148   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:45:22.774174   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:45:22.774229   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:45:22.809648   69333 cri.go:89] found id: ""
	I0927 01:45:22.809671   69333 logs.go:276] 0 containers: []
	W0927 01:45:22.809678   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:45:22.809684   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:45:22.809729   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:45:22.842598   69333 cri.go:89] found id: ""
	I0927 01:45:22.842620   69333 logs.go:276] 0 containers: []
	W0927 01:45:22.842627   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:45:22.842632   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:45:22.842677   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:45:22.877336   69333 cri.go:89] found id: ""
	I0927 01:45:22.877364   69333 logs.go:276] 0 containers: []
	W0927 01:45:22.877374   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:45:22.877382   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:45:22.877393   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:45:22.930364   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:45:22.930395   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:45:22.944174   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:45:22.944200   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:45:23.025495   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:45:23.025520   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:45:23.025534   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:45:23.101813   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:45:23.101850   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:45:25.644577   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:45:25.657820   69333 kubeadm.go:597] duration metric: took 4m3.277962916s to restartPrimaryControlPlane
	W0927 01:45:25.657898   69333 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0927 01:45:25.657929   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0927 01:45:26.111439   69333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 01:45:26.128279   69333 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 01:45:26.138354   69333 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 01:45:26.148116   69333 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 01:45:26.148132   69333 kubeadm.go:157] found existing configuration files:
	
	I0927 01:45:26.148170   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 01:45:26.157965   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 01:45:26.158012   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 01:45:26.168349   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 01:45:26.177624   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 01:45:26.177692   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 01:45:26.187584   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 01:45:26.196800   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 01:45:26.196856   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 01:45:26.205894   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 01:45:26.215316   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 01:45:26.215365   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 01:45:26.224989   69333 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0927 01:45:26.299149   69333 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0927 01:45:26.299261   69333 kubeadm.go:310] [preflight] Running pre-flight checks
	I0927 01:45:26.451113   69333 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0927 01:45:26.451282   69333 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0927 01:45:26.451457   69333 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0927 01:45:26.637960   69333 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0927 01:45:26.640682   69333 out.go:235]   - Generating certificates and keys ...
	I0927 01:45:26.640782   69333 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0927 01:45:26.640865   69333 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0927 01:45:26.640972   69333 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0927 01:45:26.641099   69333 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0927 01:45:26.641233   69333 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0927 01:45:26.641317   69333 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0927 01:45:26.641425   69333 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0927 01:45:26.641525   69333 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0927 01:45:26.641633   69333 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0927 01:45:26.641901   69333 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0927 01:45:26.642000   69333 kubeadm.go:310] [certs] Using the existing "sa" key
	I0927 01:45:26.642080   69333 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0927 01:45:26.782585   69333 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0927 01:45:27.008743   69333 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0927 01:45:27.103701   69333 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0927 01:45:27.217999   69333 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0927 01:45:27.238810   69333 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0927 01:45:27.240191   69333 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0927 01:45:27.240240   69333 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0927 01:45:27.375215   69333 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0927 01:45:23.521301   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:26.020002   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:28.021215   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:26.306744   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:28.308577   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:27.376992   69333 out.go:235]   - Booting up control plane ...
	I0927 01:45:27.377123   69333 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0927 01:45:27.386897   69333 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0927 01:45:27.387959   69333 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0927 01:45:27.388954   69333 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0927 01:45:27.392182   69333 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0927 01:45:30.520717   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:33.019981   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:30.808251   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:33.307139   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:35.020640   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:37.520220   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:35.307871   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:37.808604   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:41.262067   69234 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.225299595s)
	I0927 01:45:41.262142   69234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 01:45:41.294256   69234 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 01:45:41.304403   69234 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 01:45:41.314288   69234 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 01:45:41.314310   69234 kubeadm.go:157] found existing configuration files:
	
	I0927 01:45:41.314357   69234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 01:45:41.323280   69234 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 01:45:41.323335   69234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 01:45:41.332637   69234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 01:45:41.341492   69234 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 01:45:41.341552   69234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 01:45:41.352259   69234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 01:45:41.361190   69234 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 01:45:41.361244   69234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 01:45:41.370863   69234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 01:45:41.379674   69234 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 01:45:41.379735   69234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 01:45:41.389169   69234 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0927 01:45:41.434391   69234 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0927 01:45:41.434565   69234 kubeadm.go:310] [preflight] Running pre-flight checks
	I0927 01:45:41.537712   69234 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0927 01:45:41.537813   69234 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0927 01:45:41.537951   69234 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0927 01:45:41.546906   69234 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0927 01:45:41.548799   69234 out.go:235]   - Generating certificates and keys ...
	I0927 01:45:41.548882   69234 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0927 01:45:41.548959   69234 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0927 01:45:41.549049   69234 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0927 01:45:41.549133   69234 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0927 01:45:41.549239   69234 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0927 01:45:41.549328   69234 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0927 01:45:41.549433   69234 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0927 01:45:41.549531   69234 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0927 01:45:41.549619   69234 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0927 01:45:41.549691   69234 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0927 01:45:41.549741   69234 kubeadm.go:310] [certs] Using the existing "sa" key
	I0927 01:45:41.549813   69234 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0927 01:45:41.594579   69234 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0927 01:45:41.703970   69234 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0927 01:45:41.813013   69234 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0927 01:45:41.875564   69234 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0927 01:45:42.025627   69234 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0927 01:45:42.026325   69234 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0927 01:45:42.028784   69234 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0927 01:45:39.521118   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:42.020563   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:40.307764   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:42.307974   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:44.808238   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:42.030464   69234 out.go:235]   - Booting up control plane ...
	I0927 01:45:42.030566   69234 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0927 01:45:42.030674   69234 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0927 01:45:42.031152   69234 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0927 01:45:42.050207   69234 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0927 01:45:42.058709   69234 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0927 01:45:42.058766   69234 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0927 01:45:42.192498   69234 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0927 01:45:42.192628   69234 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0927 01:45:42.694670   69234 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.189114ms
	I0927 01:45:42.694812   69234 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0927 01:45:48.195975   69234 kubeadm.go:310] [api-check] The API server is healthy after 5.501110293s
	I0927 01:45:48.210406   69234 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0927 01:45:48.231678   69234 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0927 01:45:48.257669   69234 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0927 01:45:48.257859   69234 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-245911 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0927 01:45:48.271429   69234 kubeadm.go:310] [bootstrap-token] Using token: bqds0t.3lt1vhl3zjbrkom6
	I0927 01:45:44.021019   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:46.520158   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:48.272667   69234 out.go:235]   - Configuring RBAC rules ...
	I0927 01:45:48.272775   69234 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0927 01:45:48.278773   69234 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0927 01:45:48.290868   69234 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0927 01:45:48.297879   69234 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0927 01:45:48.302011   69234 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0927 01:45:48.306217   69234 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0927 01:45:48.604161   69234 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0927 01:45:49.041505   69234 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0927 01:45:49.604127   69234 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0927 01:45:49.604867   69234 kubeadm.go:310] 
	I0927 01:45:49.604981   69234 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0927 01:45:49.605008   69234 kubeadm.go:310] 
	I0927 01:45:49.605136   69234 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0927 01:45:49.605147   69234 kubeadm.go:310] 
	I0927 01:45:49.605188   69234 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0927 01:45:49.605266   69234 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0927 01:45:49.605363   69234 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0927 01:45:49.605373   69234 kubeadm.go:310] 
	I0927 01:45:49.605446   69234 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0927 01:45:49.605455   69234 kubeadm.go:310] 
	I0927 01:45:49.605524   69234 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0927 01:45:49.605537   69234 kubeadm.go:310] 
	I0927 01:45:49.605612   69234 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0927 01:45:49.605725   69234 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0927 01:45:49.605826   69234 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0927 01:45:49.605836   69234 kubeadm.go:310] 
	I0927 01:45:49.605913   69234 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0927 01:45:49.606010   69234 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0927 01:45:49.606032   69234 kubeadm.go:310] 
	I0927 01:45:49.606130   69234 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token bqds0t.3lt1vhl3zjbrkom6 \
	I0927 01:45:49.606252   69234 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e8f2b64245b2533f2f6907f851e4fd61c8df67374e05cb5be25be73a61920f8e \
	I0927 01:45:49.606276   69234 kubeadm.go:310] 	--control-plane 
	I0927 01:45:49.606282   69234 kubeadm.go:310] 
	I0927 01:45:49.606404   69234 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0927 01:45:49.606421   69234 kubeadm.go:310] 
	I0927 01:45:49.606546   69234 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token bqds0t.3lt1vhl3zjbrkom6 \
	I0927 01:45:49.606692   69234 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e8f2b64245b2533f2f6907f851e4fd61c8df67374e05cb5be25be73a61920f8e 
	I0927 01:45:49.607952   69234 kubeadm.go:310] W0927 01:45:41.410128    2534 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 01:45:49.608322   69234 kubeadm.go:310] W0927 01:45:41.412009    2534 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 01:45:49.608494   69234 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0927 01:45:49.608518   69234 cni.go:84] Creating CNI manager for ""
	I0927 01:45:49.608527   69234 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:45:49.610175   69234 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0927 01:45:47.307006   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:49.307374   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:49.611562   69234 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0927 01:45:49.622683   69234 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0927 01:45:49.642326   69234 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0927 01:45:49.642366   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:45:49.642393   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-245911 minikube.k8s.io/updated_at=2024_09_27T01_45_49_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625 minikube.k8s.io/name=embed-certs-245911 minikube.k8s.io/primary=true
	I0927 01:45:49.677602   69234 ops.go:34] apiserver oom_adj: -16
	I0927 01:45:49.854320   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:45:50.355392   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:45:48.520718   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:50.520908   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:53.020638   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:50.854364   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:45:51.355074   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:45:51.855077   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:45:52.354509   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:45:52.855229   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:45:53.355204   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:45:53.854829   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:45:54.066909   69234 kubeadm.go:1113] duration metric: took 4.424595735s to wait for elevateKubeSystemPrivileges
	I0927 01:45:54.066954   69234 kubeadm.go:394] duration metric: took 4m55.454404762s to StartCluster
	I0927 01:45:54.066978   69234 settings.go:142] acquiring lock: {Name:mk5dca3ab86dd3a71947d9d84c3d32131258c6f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:45:54.067071   69234 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 01:45:54.069732   69234 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/kubeconfig: {Name:mke01ed683bdb96463571316956510763878395f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:45:54.070048   69234 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.158 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 01:45:54.070126   69234 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0927 01:45:54.070235   69234 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-245911"
	I0927 01:45:54.070257   69234 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-245911"
	I0927 01:45:54.070261   69234 addons.go:69] Setting default-storageclass=true in profile "embed-certs-245911"
	I0927 01:45:54.070270   69234 config.go:182] Loaded profile config "embed-certs-245911": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 01:45:54.070270   69234 addons.go:69] Setting metrics-server=true in profile "embed-certs-245911"
	I0927 01:45:54.070286   69234 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-245911"
	I0927 01:45:54.070296   69234 addons.go:234] Setting addon metrics-server=true in "embed-certs-245911"
	W0927 01:45:54.070305   69234 addons.go:243] addon metrics-server should already be in state true
	W0927 01:45:54.070266   69234 addons.go:243] addon storage-provisioner should already be in state true
	I0927 01:45:54.070339   69234 host.go:66] Checking if "embed-certs-245911" exists ...
	I0927 01:45:54.070339   69234 host.go:66] Checking if "embed-certs-245911" exists ...
	I0927 01:45:54.070750   69234 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:45:54.070790   69234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:45:54.070753   69234 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:45:54.070850   69234 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:45:54.070889   69234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:45:54.070936   69234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:45:54.071693   69234 out.go:177] * Verifying Kubernetes components...
	I0927 01:45:54.073034   69234 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:45:54.087559   69234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38159
	I0927 01:45:54.087567   69234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46827
	I0927 01:45:54.088061   69234 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:45:54.088074   69234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37787
	I0927 01:45:54.088183   69234 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:45:54.088412   69234 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:45:54.088551   69234 main.go:141] libmachine: Using API Version  1
	I0927 01:45:54.088573   69234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:45:54.088635   69234 main.go:141] libmachine: Using API Version  1
	I0927 01:45:54.088655   69234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:45:54.088852   69234 main.go:141] libmachine: Using API Version  1
	I0927 01:45:54.088874   69234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:45:54.088929   69234 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:45:54.089023   69234 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:45:54.089131   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetState
	I0927 01:45:54.089193   69234 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:45:54.089585   69234 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:45:54.089610   69234 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:45:54.089627   69234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:45:54.089639   69234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:45:54.092683   69234 addons.go:234] Setting addon default-storageclass=true in "embed-certs-245911"
	W0927 01:45:54.092705   69234 addons.go:243] addon default-storageclass should already be in state true
	I0927 01:45:54.092729   69234 host.go:66] Checking if "embed-certs-245911" exists ...
	I0927 01:45:54.093065   69234 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:45:54.093102   69234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:45:54.106496   69234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40273
	I0927 01:45:54.106952   69234 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:45:54.107486   69234 main.go:141] libmachine: Using API Version  1
	I0927 01:45:54.107513   69234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:45:54.108098   69234 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:45:54.108297   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetState
	I0927 01:45:54.109993   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	I0927 01:45:54.110532   69234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35519
	I0927 01:45:54.111066   69234 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:45:54.111688   69234 main.go:141] libmachine: Using API Version  1
	I0927 01:45:54.111708   69234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:45:54.111909   69234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35983
	I0927 01:45:54.112156   69234 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:45:54.112338   69234 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:45:54.112740   69234 main.go:141] libmachine: Using API Version  1
	I0927 01:45:54.112751   69234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:45:54.112832   69234 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:45:54.112953   69234 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:45:54.112987   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetState
	I0927 01:45:54.113345   69234 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:45:54.113372   69234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:45:54.114353   69234 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 01:45:54.114372   69234 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0927 01:45:54.114392   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:45:54.114596   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	I0927 01:45:54.116175   69234 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0927 01:45:51.806801   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:53.808476   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:54.117315   69234 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0927 01:45:54.117326   69234 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0927 01:45:54.117341   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:45:54.120242   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:45:54.120881   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:45:54.120903   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:45:54.121161   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:45:54.121224   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:45:54.121452   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:45:54.121658   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:45:54.121747   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:45:54.121944   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:45:54.121960   69234 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/embed-certs-245911/id_rsa Username:docker}
	I0927 01:45:54.121677   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:45:54.122386   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:45:54.122518   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:45:54.122695   69234 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/embed-certs-245911/id_rsa Username:docker}
	I0927 01:45:54.135920   69234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37351
	I0927 01:45:54.136247   69234 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:45:54.136682   69234 main.go:141] libmachine: Using API Version  1
	I0927 01:45:54.136696   69234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:45:54.136971   69234 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:45:54.137163   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetState
	I0927 01:45:54.138640   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	I0927 01:45:54.138903   69234 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0927 01:45:54.138919   69234 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0927 01:45:54.138936   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:45:54.141420   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:45:54.141786   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:45:54.141803   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:45:54.141966   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:45:54.142132   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:45:54.142235   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:45:54.142308   69234 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/embed-certs-245911/id_rsa Username:docker}
	I0927 01:45:54.325790   69234 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 01:45:54.375616   69234 node_ready.go:35] waiting up to 6m0s for node "embed-certs-245911" to be "Ready" ...
	I0927 01:45:54.386626   69234 node_ready.go:49] node "embed-certs-245911" has status "Ready":"True"
	I0927 01:45:54.386646   69234 node_ready.go:38] duration metric: took 10.995073ms for node "embed-certs-245911" to be "Ready" ...
	I0927 01:45:54.386654   69234 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:45:54.394605   69234 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-t4mxw" in "kube-system" namespace to be "Ready" ...
	I0927 01:45:54.458245   69234 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 01:45:54.501624   69234 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0927 01:45:54.501655   69234 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0927 01:45:54.508690   69234 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0927 01:45:54.548168   69234 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0927 01:45:54.548194   69234 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0927 01:45:54.615565   69234 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 01:45:54.615591   69234 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0927 01:45:54.655649   69234 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 01:45:55.488749   69234 main.go:141] libmachine: Making call to close driver server
	I0927 01:45:55.488849   69234 main.go:141] libmachine: (embed-certs-245911) Calling .Close
	I0927 01:45:55.488803   69234 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.030519069s)
	I0927 01:45:55.488934   69234 main.go:141] libmachine: Making call to close driver server
	I0927 01:45:55.488942   69234 main.go:141] libmachine: (embed-certs-245911) Calling .Close
	I0927 01:45:55.489266   69234 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:45:55.489282   69234 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:45:55.489290   69234 main.go:141] libmachine: Making call to close driver server
	I0927 01:45:55.489298   69234 main.go:141] libmachine: (embed-certs-245911) Calling .Close
	I0927 01:45:55.489377   69234 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:45:55.489393   69234 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:45:55.489401   69234 main.go:141] libmachine: Making call to close driver server
	I0927 01:45:55.489409   69234 main.go:141] libmachine: (embed-certs-245911) Calling .Close
	I0927 01:45:55.489511   69234 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:45:55.489528   69234 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:45:55.489540   69234 main.go:141] libmachine: (embed-certs-245911) DBG | Closing plugin on server side
	I0927 01:45:55.491047   69234 main.go:141] libmachine: (embed-certs-245911) DBG | Closing plugin on server side
	I0927 01:45:55.491082   69234 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:45:55.491093   69234 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:45:55.535220   69234 main.go:141] libmachine: Making call to close driver server
	I0927 01:45:55.535240   69234 main.go:141] libmachine: (embed-certs-245911) Calling .Close
	I0927 01:45:55.535604   69234 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:45:55.535625   69234 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:45:55.627642   69234 main.go:141] libmachine: Making call to close driver server
	I0927 01:45:55.627663   69234 main.go:141] libmachine: (embed-certs-245911) Calling .Close
	I0927 01:45:55.628020   69234 main.go:141] libmachine: (embed-certs-245911) DBG | Closing plugin on server side
	I0927 01:45:55.628025   69234 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:45:55.628047   69234 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:45:55.628055   69234 main.go:141] libmachine: Making call to close driver server
	I0927 01:45:55.628062   69234 main.go:141] libmachine: (embed-certs-245911) Calling .Close
	I0927 01:45:55.628294   69234 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:45:55.628311   69234 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:45:55.628322   69234 addons.go:475] Verifying addon metrics-server=true in "embed-certs-245911"
	I0927 01:45:55.629802   69234 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0927 01:45:55.022054   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:57.520749   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:56.307903   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:58.807972   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:55.631245   69234 addons.go:510] duration metric: took 1.561128577s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
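
	[Editor's note] The addon lines above follow one pattern: copy a manifest onto the VM, then run kubectl apply over SSH with the node's kubeconfig. Below is a minimal, illustrative Go sketch of that pattern, not minikube's ssh_runner/sshutil implementation; the host 192.168.39.158:22, user "docker", key path, and apply command are simply the values reported in the log and are assumed to still be valid.

	    package main

	    import (
	    	"fmt"
	    	"os"

	    	"golang.org/x/crypto/ssh"
	    )

	    func main() {
	    	// Key path and address as reported by sshutil.go:53 above.
	    	key, err := os.ReadFile("/home/jenkins/minikube-integration/19711-14935/.minikube/machines/embed-certs-245911/id_rsa")
	    	if err != nil {
	    		panic(err)
	    	}
	    	signer, err := ssh.ParsePrivateKey(key)
	    	if err != nil {
	    		panic(err)
	    	}
	    	cfg := &ssh.ClientConfig{
	    		User:            "docker",
	    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
	    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	    	}
	    	client, err := ssh.Dial("tcp", "192.168.39.158:22", cfg)
	    	if err != nil {
	    		panic(err)
	    	}
	    	defer client.Close()

	    	sess, err := client.NewSession()
	    	if err != nil {
	    		panic(err)
	    	}
	    	defer sess.Close()

	    	// Same style of apply command the log shows for the metrics-server manifests.
	    	out, err := sess.CombinedOutput(`sudo KUBECONFIG=/var/lib/minikube/kubeconfig ` +
	    		`/var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-server-deployment.yaml`)
	    	fmt.Println(string(out), err)
	    }
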
	I0927 01:45:56.401813   69234 pod_ready.go:103] pod "coredns-7c65d6cfc9-t4mxw" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:58.900688   69234 pod_ready.go:103] pod "coredns-7c65d6cfc9-t4mxw" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:59.521353   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:00.014813   69534 pod_ready.go:82] duration metric: took 4m0.000584515s for pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace to be "Ready" ...
	E0927 01:46:00.014858   69534 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace to be "Ready" (will not retry!)
	I0927 01:46:00.014878   69534 pod_ready.go:39] duration metric: took 4m13.043107791s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:46:00.014903   69534 kubeadm.go:597] duration metric: took 4m20.409702758s to restartPrimaryControlPlane
	W0927 01:46:00.014956   69534 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0927 01:46:00.014980   69534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0927 01:46:00.808408   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:02.808672   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:00.901714   69234 pod_ready.go:103] pod "coredns-7c65d6cfc9-t4mxw" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:02.902242   69234 pod_ready.go:103] pod "coredns-7c65d6cfc9-t4mxw" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:03.401910   69234 pod_ready.go:93] pod "coredns-7c65d6cfc9-t4mxw" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:03.401936   69234 pod_ready.go:82] duration metric: took 9.007296678s for pod "coredns-7c65d6cfc9-t4mxw" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:03.401948   69234 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-zp5f2" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:03.908874   69234 pod_ready.go:93] pod "coredns-7c65d6cfc9-zp5f2" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:03.908896   69234 pod_ready.go:82] duration metric: took 506.941437ms for pod "coredns-7c65d6cfc9-zp5f2" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:03.908918   69234 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:03.914117   69234 pod_ready.go:93] pod "etcd-embed-certs-245911" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:03.914135   69234 pod_ready.go:82] duration metric: took 5.210078ms for pod "etcd-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:03.914142   69234 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:03.918778   69234 pod_ready.go:93] pod "kube-apiserver-embed-certs-245911" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:03.918801   69234 pod_ready.go:82] duration metric: took 4.651828ms for pod "kube-apiserver-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:03.918812   69234 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:03.923979   69234 pod_ready.go:93] pod "kube-controller-manager-embed-certs-245911" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:03.923996   69234 pod_ready.go:82] duration metric: took 5.176348ms for pod "kube-controller-manager-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:03.924004   69234 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5l299" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:04.199586   69234 pod_ready.go:93] pod "kube-proxy-5l299" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:04.199612   69234 pod_ready.go:82] duration metric: took 275.601068ms for pod "kube-proxy-5l299" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:04.199621   69234 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:04.598852   69234 pod_ready.go:93] pod "kube-scheduler-embed-certs-245911" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:04.598880   69234 pod_ready.go:82] duration metric: took 399.251298ms for pod "kube-scheduler-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:04.598890   69234 pod_ready.go:39] duration metric: took 10.212226661s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
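
	[Editor's note] The pod_ready lines above repeatedly check whether a named kube-system pod reports the Ready condition. A minimal client-go sketch of that per-pod check is shown below; it is illustrative only, assumes a kubeconfig at the default home path rather than the node-local one, and reuses the coredns pod name from the log purely as an example.

	    package main

	    import (
	    	"context"
	    	"fmt"

	    	corev1 "k8s.io/api/core/v1"
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/client-go/kubernetes"
	    	"k8s.io/client-go/tools/clientcmd"
	    )

	    // podIsReady returns true if the pod's Ready condition is True.
	    func podIsReady(pod *corev1.Pod) bool {
	    	for _, c := range pod.Status.Conditions {
	    		if c.Type == corev1.PodReady {
	    			return c.Status == corev1.ConditionTrue
	    		}
	    	}
	    	return false
	    }

	    func main() {
	    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	    	if err != nil {
	    		panic(err)
	    	}
	    	cs, err := kubernetes.NewForConfig(cfg)
	    	if err != nil {
	    		panic(err)
	    	}
	    	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7c65d6cfc9-t4mxw", metav1.GetOptions{})
	    	if err != nil {
	    		panic(err)
	    	}
	    	fmt.Printf("pod %s Ready=%v\n", pod.Name, podIsReady(pod))
	    }
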
	I0927 01:46:04.598905   69234 api_server.go:52] waiting for apiserver process to appear ...
	I0927 01:46:04.598962   69234 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:46:04.615194   69234 api_server.go:72] duration metric: took 10.545103977s to wait for apiserver process to appear ...
	I0927 01:46:04.615225   69234 api_server.go:88] waiting for apiserver healthz status ...
	I0927 01:46:04.615248   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:46:04.621164   69234 api_server.go:279] https://192.168.39.158:8443/healthz returned 200:
	ok
	I0927 01:46:04.622001   69234 api_server.go:141] control plane version: v1.31.1
	I0927 01:46:04.622022   69234 api_server.go:131] duration metric: took 6.789717ms to wait for apiserver health ...
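
	[Editor's note] The healthz wait above polls https://192.168.39.158:8443/healthz until it returns 200. The standard-library Go sketch below reproduces that poll for illustration only; skipping TLS verification is an assumption made here for brevity (minikube itself trusts the cluster CA), and the endpoint and timeout are taken from or loosely modeled on the log.

	    package main

	    import (
	    	"crypto/tls"
	    	"fmt"
	    	"io"
	    	"net/http"
	    	"time"
	    )

	    func main() {
	    	client := &http.Client{
	    		Timeout:   5 * time.Second,
	    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	    	}
	    	deadline := time.Now().Add(2 * time.Minute)
	    	for time.Now().Before(deadline) {
	    		resp, err := client.Get("https://192.168.39.158:8443/healthz")
	    		if err == nil {
	    			body, _ := io.ReadAll(resp.Body)
	    			resp.Body.Close()
	    			if resp.StatusCode == http.StatusOK {
	    				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	    				return
	    			}
	    		}
	    		time.Sleep(2 * time.Second)
	    	}
	    	fmt.Println("apiserver never became healthy")
	    }
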
	I0927 01:46:04.622032   69234 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 01:46:04.802641   69234 system_pods.go:59] 9 kube-system pods found
	I0927 01:46:04.802674   69234 system_pods.go:61] "coredns-7c65d6cfc9-t4mxw" [b3f9faa4-be80-40bf-9080-363fcbf3f084] Running
	I0927 01:46:04.802681   69234 system_pods.go:61] "coredns-7c65d6cfc9-zp5f2" [0829b4a4-1686-4f22-8368-65e3897604b0] Running
	I0927 01:46:04.802687   69234 system_pods.go:61] "etcd-embed-certs-245911" [8b1eb68b-4d88-4af3-a5df-3a6490d9d376] Running
	I0927 01:46:04.802693   69234 system_pods.go:61] "kube-apiserver-embed-certs-245911" [05ddc1b7-f7a9-4201-8d2e-2eb57d4e6731] Running
	I0927 01:46:04.802699   69234 system_pods.go:61] "kube-controller-manager-embed-certs-245911" [71c7cdfd-5e67-4876-9c00-31fff46c2b37] Running
	I0927 01:46:04.802703   69234 system_pods.go:61] "kube-proxy-5l299" [768ae3f5-2ebd-4db7-aa36-81c4f033d685] Running
	I0927 01:46:04.802708   69234 system_pods.go:61] "kube-scheduler-embed-certs-245911" [4111a186-de42-4004-bcdc-3e445142fca0] Running
	I0927 01:46:04.802717   69234 system_pods.go:61] "metrics-server-6867b74b74-k28wz" [1d369542-c088-4099-aa6f-9d3158f78f25] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 01:46:04.802722   69234 system_pods.go:61] "storage-provisioner" [0c48d125-370c-44a1-9ede-536881b40d57] Running
	I0927 01:46:04.802735   69234 system_pods.go:74] duration metric: took 180.694209ms to wait for pod list to return data ...
	I0927 01:46:04.802747   69234 default_sa.go:34] waiting for default service account to be created ...
	I0927 01:46:04.999578   69234 default_sa.go:45] found service account: "default"
	I0927 01:46:04.999603   69234 default_sa.go:55] duration metric: took 196.845725ms for default service account to be created ...
	I0927 01:46:04.999612   69234 system_pods.go:116] waiting for k8s-apps to be running ...
	I0927 01:46:05.201201   69234 system_pods.go:86] 9 kube-system pods found
	I0927 01:46:05.201228   69234 system_pods.go:89] "coredns-7c65d6cfc9-t4mxw" [b3f9faa4-be80-40bf-9080-363fcbf3f084] Running
	I0927 01:46:05.201233   69234 system_pods.go:89] "coredns-7c65d6cfc9-zp5f2" [0829b4a4-1686-4f22-8368-65e3897604b0] Running
	I0927 01:46:05.201237   69234 system_pods.go:89] "etcd-embed-certs-245911" [8b1eb68b-4d88-4af3-a5df-3a6490d9d376] Running
	I0927 01:46:05.201241   69234 system_pods.go:89] "kube-apiserver-embed-certs-245911" [05ddc1b7-f7a9-4201-8d2e-2eb57d4e6731] Running
	I0927 01:46:05.201244   69234 system_pods.go:89] "kube-controller-manager-embed-certs-245911" [71c7cdfd-5e67-4876-9c00-31fff46c2b37] Running
	I0927 01:46:05.201248   69234 system_pods.go:89] "kube-proxy-5l299" [768ae3f5-2ebd-4db7-aa36-81c4f033d685] Running
	I0927 01:46:05.201251   69234 system_pods.go:89] "kube-scheduler-embed-certs-245911" [4111a186-de42-4004-bcdc-3e445142fca0] Running
	I0927 01:46:05.201256   69234 system_pods.go:89] "metrics-server-6867b74b74-k28wz" [1d369542-c088-4099-aa6f-9d3158f78f25] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 01:46:05.201260   69234 system_pods.go:89] "storage-provisioner" [0c48d125-370c-44a1-9ede-536881b40d57] Running
	I0927 01:46:05.201268   69234 system_pods.go:126] duration metric: took 201.651734ms to wait for k8s-apps to be running ...
	I0927 01:46:05.201275   69234 system_svc.go:44] waiting for kubelet service to be running ....
	I0927 01:46:05.201315   69234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 01:46:05.216216   69234 system_svc.go:56] duration metric: took 14.930697ms WaitForService to wait for kubelet
	I0927 01:46:05.216248   69234 kubeadm.go:582] duration metric: took 11.146166369s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 01:46:05.216271   69234 node_conditions.go:102] verifying NodePressure condition ...
	I0927 01:46:05.400667   69234 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 01:46:05.400695   69234 node_conditions.go:123] node cpu capacity is 2
	I0927 01:46:05.400708   69234 node_conditions.go:105] duration metric: took 184.432904ms to run NodePressure ...
	I0927 01:46:05.400719   69234 start.go:241] waiting for startup goroutines ...
	I0927 01:46:05.400729   69234 start.go:246] waiting for cluster config update ...
	I0927 01:46:05.400743   69234 start.go:255] writing updated cluster config ...
	I0927 01:46:05.401134   69234 ssh_runner.go:195] Run: rm -f paused
	I0927 01:46:05.452606   69234 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0927 01:46:05.454631   69234 out.go:177] * Done! kubectl is now configured to use "embed-certs-245911" cluster and "default" namespace by default
	I0927 01:46:05.307371   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:07.807981   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:07.393548   69333 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0927 01:46:07.394304   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:46:07.394505   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:46:10.307311   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:12.308085   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:14.308664   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:12.395176   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:46:12.395434   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:46:16.807116   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:18.807652   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:21.307348   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:23.807597   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:26.304067   69534 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.289064717s)
	I0927 01:46:26.304150   69534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 01:46:26.341383   69534 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 01:46:26.365985   69534 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 01:46:26.382056   69534 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 01:46:26.382082   69534 kubeadm.go:157] found existing configuration files:
	
	I0927 01:46:26.382133   69534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0927 01:46:26.405820   69534 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 01:46:26.405881   69534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 01:46:26.416355   69534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0927 01:46:26.426710   69534 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 01:46:26.426759   69534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 01:46:26.438110   69534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0927 01:46:26.448631   69534 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 01:46:26.448691   69534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 01:46:26.458453   69534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0927 01:46:26.467677   69534 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 01:46:26.467724   69534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
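
	[Editor's note] The stale-config cleanup above is a simple loop: grep each kubeconfig under /etc/kubernetes for the expected control-plane endpoint, and remove the file if the endpoint (or the file itself) is missing. The Go sketch below mirrors that loop for illustration; it runs the same grep/rm commands locally via os/exec rather than over SSH as the log does, and the endpoint string is the one reported by kubeadm.go:163 above.

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    )

	    func main() {
	    	endpoint := "https://control-plane.minikube.internal:8444"
	    	files := []string{
	    		"/etc/kubernetes/admin.conf",
	    		"/etc/kubernetes/kubelet.conf",
	    		"/etc/kubernetes/controller-manager.conf",
	    		"/etc/kubernetes/scheduler.conf",
	    	}
	    	for _, f := range files {
	    		// grep exits non-zero when the endpoint (or the file) is missing.
	    		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
	    			fmt.Printf("%s does not reference %s, removing\n", f, endpoint)
	    			_ = exec.Command("sudo", "rm", "-f", f).Run()
	    		}
	    	}
	    }
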
	I0927 01:46:26.478333   69534 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0927 01:46:26.528377   69534 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0927 01:46:26.528432   69534 kubeadm.go:310] [preflight] Running pre-flight checks
	I0927 01:46:26.653799   69534 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0927 01:46:26.653904   69534 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0927 01:46:26.654029   69534 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0927 01:46:26.666791   69534 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0927 01:46:22.395858   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:46:22.396073   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:46:26.668660   69534 out.go:235]   - Generating certificates and keys ...
	I0927 01:46:26.668739   69534 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0927 01:46:26.668803   69534 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0927 01:46:26.668918   69534 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0927 01:46:26.669012   69534 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0927 01:46:26.669103   69534 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0927 01:46:26.669178   69534 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0927 01:46:26.669308   69534 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0927 01:46:26.669628   69534 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0927 01:46:26.669868   69534 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0927 01:46:26.670086   69534 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0927 01:46:26.670284   69534 kubeadm.go:310] [certs] Using the existing "sa" key
	I0927 01:46:26.670395   69534 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0927 01:46:26.885345   69534 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0927 01:46:27.061416   69534 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0927 01:46:27.347409   69534 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0927 01:46:27.477340   69534 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0927 01:46:27.607326   69534 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0927 01:46:27.607882   69534 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0927 01:46:27.612459   69534 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0927 01:46:27.614167   69534 out.go:235]   - Booting up control plane ...
	I0927 01:46:27.614285   69534 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0927 01:46:27.614388   69534 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0927 01:46:27.614482   69534 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0927 01:46:27.635734   69534 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0927 01:46:27.642550   69534 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0927 01:46:27.642634   69534 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0927 01:46:27.778616   69534 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0927 01:46:27.778763   69534 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0927 01:46:28.280057   69534 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.328597ms
	I0927 01:46:28.280185   69534 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0927 01:46:25.808311   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:28.307033   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:33.781107   69534 kubeadm.go:310] [api-check] The API server is healthy after 5.501552407s
	I0927 01:46:33.796672   69534 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0927 01:46:33.809900   69534 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0927 01:46:33.845968   69534 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0927 01:46:33.846194   69534 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-368295 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0927 01:46:33.862294   69534 kubeadm.go:310] [bootstrap-token] Using token: qmzafx.lhyo0l65zryygr2x
	I0927 01:46:30.308436   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:32.809032   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:32.809057   68676 pod_ready.go:82] duration metric: took 4m0.007962887s for pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace to be "Ready" ...
	E0927 01:46:32.809066   68676 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0927 01:46:32.809075   68676 pod_ready.go:39] duration metric: took 4m5.043455674s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:46:32.809088   68676 api_server.go:52] waiting for apiserver process to appear ...
	I0927 01:46:32.809115   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:46:32.809175   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:46:32.871610   68676 cri.go:89] found id: "d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef"
	I0927 01:46:32.871629   68676 cri.go:89] found id: ""
	I0927 01:46:32.871636   68676 logs.go:276] 1 containers: [d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef]
	I0927 01:46:32.871682   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:32.878223   68676 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:46:32.878296   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:46:32.925139   68676 cri.go:89] found id: "703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0"
	I0927 01:46:32.925173   68676 cri.go:89] found id: ""
	I0927 01:46:32.925182   68676 logs.go:276] 1 containers: [703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0]
	I0927 01:46:32.925238   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:32.929961   68676 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:46:32.930023   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:46:32.969777   68676 cri.go:89] found id: "5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0"
	I0927 01:46:32.969799   68676 cri.go:89] found id: ""
	I0927 01:46:32.969807   68676 logs.go:276] 1 containers: [5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0]
	I0927 01:46:32.969854   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:32.979003   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:46:32.979088   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:46:33.029458   68676 cri.go:89] found id: "22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05"
	I0927 01:46:33.029532   68676 cri.go:89] found id: ""
	I0927 01:46:33.029546   68676 logs.go:276] 1 containers: [22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05]
	I0927 01:46:33.029609   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:33.036703   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:46:33.036777   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:46:33.085041   68676 cri.go:89] found id: "d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f"
	I0927 01:46:33.085058   68676 cri.go:89] found id: ""
	I0927 01:46:33.085065   68676 logs.go:276] 1 containers: [d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f]
	I0927 01:46:33.085125   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:33.090305   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:46:33.090372   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:46:33.136837   68676 cri.go:89] found id: "56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647"
	I0927 01:46:33.136857   68676 cri.go:89] found id: ""
	I0927 01:46:33.136865   68676 logs.go:276] 1 containers: [56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647]
	I0927 01:46:33.136913   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:33.141483   68676 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:46:33.141543   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:46:33.182913   68676 cri.go:89] found id: ""
	I0927 01:46:33.182939   68676 logs.go:276] 0 containers: []
	W0927 01:46:33.182950   68676 logs.go:278] No container was found matching "kindnet"
	I0927 01:46:33.182956   68676 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0927 01:46:33.183002   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0927 01:46:33.237031   68676 cri.go:89] found id: "8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f"
	I0927 01:46:33.237055   68676 cri.go:89] found id: "074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c"
	I0927 01:46:33.237061   68676 cri.go:89] found id: ""
	I0927 01:46:33.237070   68676 logs.go:276] 2 containers: [8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f 074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c]
	I0927 01:46:33.237121   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:33.241969   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:33.246733   68676 logs.go:123] Gathering logs for kube-apiserver [d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef] ...
	I0927 01:46:33.246760   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef"
	I0927 01:46:33.294096   68676 logs.go:123] Gathering logs for kube-controller-manager [56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647] ...
	I0927 01:46:33.294128   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647"
	I0927 01:46:33.357981   68676 logs.go:123] Gathering logs for storage-provisioner [074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c] ...
	I0927 01:46:33.358029   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c"
	I0927 01:46:33.397465   68676 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:46:33.397500   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:46:33.922831   68676 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:46:33.922869   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 01:46:34.067117   68676 logs.go:123] Gathering logs for dmesg ...
	I0927 01:46:34.067152   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:46:34.082191   68676 logs.go:123] Gathering logs for etcd [703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0] ...
	I0927 01:46:34.082218   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0"
	I0927 01:46:34.126416   68676 logs.go:123] Gathering logs for coredns [5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0] ...
	I0927 01:46:34.126454   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0"
	I0927 01:46:34.166714   68676 logs.go:123] Gathering logs for kube-scheduler [22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05] ...
	I0927 01:46:34.166744   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05"
	I0927 01:46:34.206601   68676 logs.go:123] Gathering logs for kube-proxy [d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f] ...
	I0927 01:46:34.206642   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f"
	I0927 01:46:34.254352   68676 logs.go:123] Gathering logs for storage-provisioner [8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f] ...
	I0927 01:46:34.254383   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f"
	I0927 01:46:34.293318   68676 logs.go:123] Gathering logs for container status ...
	I0927 01:46:34.293347   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:46:34.340365   68676 logs.go:123] Gathering logs for kubelet ...
	I0927 01:46:34.340398   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
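
	[Editor's note] The log-gathering pass above uses crictl twice per component: once to list matching container IDs (crictl ps -a --quiet --name=...), and once to tail each container's log (crictl logs --tail 400 <id>). The Go sketch below strings those same crictl calls together with os/exec, purely as an on-node illustration of the flow; it is not minikube's logs.go code and assumes crictl and sudo are available on PATH.

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    	"strings"
	    )

	    func main() {
	    	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	    		"kube-proxy", "kube-controller-manager", "storage-provisioner"} {
	    		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	    		if err != nil {
	    			fmt.Println("listing", name, "failed:", err)
	    			continue
	    		}
	    		for _, id := range strings.Fields(string(out)) {
	    			logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
	    			fmt.Printf("=== %s (%s) ===\n%s\n", name, id, logs)
	    		}
	    	}
	    }
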
	I0927 01:46:33.863782   69534 out.go:235]   - Configuring RBAC rules ...
	I0927 01:46:33.863922   69534 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0927 01:46:33.871841   69534 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0927 01:46:33.880047   69534 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0927 01:46:33.884688   69534 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0927 01:46:33.892057   69534 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0927 01:46:33.895787   69534 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0927 01:46:34.190553   69534 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0927 01:46:34.619922   69534 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0927 01:46:35.188452   69534 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0927 01:46:35.189552   69534 kubeadm.go:310] 
	I0927 01:46:35.189661   69534 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0927 01:46:35.189683   69534 kubeadm.go:310] 
	I0927 01:46:35.189791   69534 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0927 01:46:35.189806   69534 kubeadm.go:310] 
	I0927 01:46:35.189845   69534 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0927 01:46:35.189925   69534 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0927 01:46:35.190002   69534 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0927 01:46:35.190016   69534 kubeadm.go:310] 
	I0927 01:46:35.190095   69534 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0927 01:46:35.190104   69534 kubeadm.go:310] 
	I0927 01:46:35.190181   69534 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0927 01:46:35.190193   69534 kubeadm.go:310] 
	I0927 01:46:35.190264   69534 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0927 01:46:35.190387   69534 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0927 01:46:35.190484   69534 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0927 01:46:35.190498   69534 kubeadm.go:310] 
	I0927 01:46:35.190593   69534 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0927 01:46:35.190681   69534 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0927 01:46:35.190691   69534 kubeadm.go:310] 
	I0927 01:46:35.190793   69534 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token qmzafx.lhyo0l65zryygr2x \
	I0927 01:46:35.190948   69534 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e8f2b64245b2533f2f6907f851e4fd61c8df67374e05cb5be25be73a61920f8e \
	I0927 01:46:35.191002   69534 kubeadm.go:310] 	--control-plane 
	I0927 01:46:35.191021   69534 kubeadm.go:310] 
	I0927 01:46:35.191134   69534 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0927 01:46:35.191155   69534 kubeadm.go:310] 
	I0927 01:46:35.191281   69534 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token qmzafx.lhyo0l65zryygr2x \
	I0927 01:46:35.191427   69534 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e8f2b64245b2533f2f6907f851e4fd61c8df67374e05cb5be25be73a61920f8e 
	I0927 01:46:35.192564   69534 kubeadm.go:310] W0927 01:46:26.480521    2541 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 01:46:35.192905   69534 kubeadm.go:310] W0927 01:46:26.481198    2541 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 01:46:35.193078   69534 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0927 01:46:35.193093   69534 cni.go:84] Creating CNI manager for ""
	I0927 01:46:35.193102   69534 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:46:35.194656   69534 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0927 01:46:35.195835   69534 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0927 01:46:35.207162   69534 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0927 01:46:35.225999   69534 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0927 01:46:35.226096   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-368295 minikube.k8s.io/updated_at=2024_09_27T01_46_35_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625 minikube.k8s.io/name=default-k8s-diff-port-368295 minikube.k8s.io/primary=true
	I0927 01:46:35.226096   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:46:35.258203   69534 ops.go:34] apiserver oom_adj: -16
	I0927 01:46:35.425367   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:46:35.926435   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:46:36.425611   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:46:36.925505   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:46:37.426329   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:46:37.926184   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:46:38.425745   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:46:38.925572   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:46:39.425831   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:46:39.508783   69534 kubeadm.go:1113] duration metric: took 4.282764601s to wait for elevateKubeSystemPrivileges
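
	[Editor's note] The elevateKubeSystemPrivileges wait above is driven by repeated "kubectl get sa default" calls until the default service account exists. Below is an equivalent client-go sketch, illustrative only; the kubeconfig path is the node-local one from the log and would need adjusting if run elsewhere, and the two-minute deadline is an assumption, not minikube's actual timeout.

	    package main

	    import (
	    	"context"
	    	"fmt"
	    	"time"

	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/client-go/kubernetes"
	    	"k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	    	if err != nil {
	    		panic(err)
	    	}
	    	cs, err := kubernetes.NewForConfig(cfg)
	    	if err != nil {
	    		panic(err)
	    	}
	    	deadline := time.Now().Add(2 * time.Minute)
	    	for time.Now().Before(deadline) {
	    		// The default service account appears once kube-controller-manager has created it.
	    		if _, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{}); err == nil {
	    			fmt.Println("default service account is present")
	    			return
	    		}
	    		time.Sleep(500 * time.Millisecond)
	    	}
	    	fmt.Println("timed out waiting for the default service account")
	    }
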
	I0927 01:46:39.508817   69534 kubeadm.go:394] duration metric: took 4m59.95903234s to StartCluster
	I0927 01:46:39.508838   69534 settings.go:142] acquiring lock: {Name:mk5dca3ab86dd3a71947d9d84c3d32131258c6f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:46:39.508930   69534 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 01:46:39.510771   69534 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/kubeconfig: {Name:mke01ed683bdb96463571316956510763878395f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:46:39.511005   69534 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.83 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 01:46:39.511071   69534 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0927 01:46:39.511194   69534 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-368295"
	I0927 01:46:39.511214   69534 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-368295"
	I0927 01:46:39.511230   69534 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-368295"
	I0927 01:46:39.511261   69534 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-368295"
	W0927 01:46:39.511276   69534 addons.go:243] addon metrics-server should already be in state true
	I0927 01:46:39.511325   69534 host.go:66] Checking if "default-k8s-diff-port-368295" exists ...
	I0927 01:46:39.511243   69534 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-368295"
	I0927 01:46:39.511225   69534 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-368295"
	W0927 01:46:39.511515   69534 addons.go:243] addon storage-provisioner should already be in state true
	I0927 01:46:39.511538   69534 host.go:66] Checking if "default-k8s-diff-port-368295" exists ...
	I0927 01:46:39.511223   69534 config.go:182] Loaded profile config "default-k8s-diff-port-368295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 01:46:39.511772   69534 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:46:39.511818   69534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:46:39.511844   69534 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:46:39.511772   69534 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:46:39.511877   69534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:46:39.511905   69534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:46:39.513051   69534 out.go:177] * Verifying Kubernetes components...
	I0927 01:46:39.514530   69534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:46:39.528031   69534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32777
	I0927 01:46:39.528033   69534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43693
	I0927 01:46:39.528446   69534 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:46:39.528603   69534 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:46:39.528997   69534 main.go:141] libmachine: Using API Version  1
	I0927 01:46:39.529022   69534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:46:39.529085   69534 main.go:141] libmachine: Using API Version  1
	I0927 01:46:39.529101   69534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:46:39.529210   69534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37121
	I0927 01:46:39.529421   69534 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:46:39.529721   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetState
	I0927 01:46:39.529743   69534 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:46:39.529724   69534 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:46:39.530304   69534 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:46:39.530358   69534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:46:39.530308   69534 main.go:141] libmachine: Using API Version  1
	I0927 01:46:39.530423   69534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:46:39.530762   69534 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:46:39.531337   69534 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:46:39.531389   69534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:46:39.533286   69534 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-368295"
	W0927 01:46:39.533306   69534 addons.go:243] addon default-storageclass should already be in state true
	I0927 01:46:39.533333   69534 host.go:66] Checking if "default-k8s-diff-port-368295" exists ...
	I0927 01:46:39.533656   69534 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:46:39.533692   69534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:46:39.546657   69534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44507
	I0927 01:46:39.546881   69534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42459
	I0927 01:46:39.547298   69534 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:46:39.547327   69534 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:46:39.547842   69534 main.go:141] libmachine: Using API Version  1
	I0927 01:46:39.547860   69534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:46:39.547860   69534 main.go:141] libmachine: Using API Version  1
	I0927 01:46:39.547876   69534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:46:39.548220   69534 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:46:39.548239   69534 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:46:39.548435   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetState
	I0927 01:46:39.548481   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetState
	I0927 01:46:39.550160   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:46:39.550384   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:46:39.550445   69534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41657
	I0927 01:46:39.550744   69534 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:46:39.551173   69534 main.go:141] libmachine: Using API Version  1
	I0927 01:46:39.551195   69534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:46:39.551525   69534 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:46:39.552620   69534 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:46:39.552652   69534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:46:39.552838   69534 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:46:39.552916   69534 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0927 01:46:36.914500   68676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:46:36.932340   68676 api_server.go:72] duration metric: took 4m14.883408931s to wait for apiserver process to appear ...
	I0927 01:46:36.932368   68676 api_server.go:88] waiting for apiserver healthz status ...
	I0927 01:46:36.932407   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:46:36.932465   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:46:36.967757   68676 cri.go:89] found id: "d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef"
	I0927 01:46:36.967780   68676 cri.go:89] found id: ""
	I0927 01:46:36.967787   68676 logs.go:276] 1 containers: [d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef]
	I0927 01:46:36.967832   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:36.972025   68676 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:46:36.972105   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:46:37.018403   68676 cri.go:89] found id: "703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0"
	I0927 01:46:37.018431   68676 cri.go:89] found id: ""
	I0927 01:46:37.018448   68676 logs.go:276] 1 containers: [703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0]
	I0927 01:46:37.018515   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:37.022868   68676 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:46:37.022925   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:46:37.062443   68676 cri.go:89] found id: "5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0"
	I0927 01:46:37.062466   68676 cri.go:89] found id: ""
	I0927 01:46:37.062474   68676 logs.go:276] 1 containers: [5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0]
	I0927 01:46:37.062534   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:37.066617   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:46:37.066674   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:46:37.101462   68676 cri.go:89] found id: "22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05"
	I0927 01:46:37.101489   68676 cri.go:89] found id: ""
	I0927 01:46:37.101500   68676 logs.go:276] 1 containers: [22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05]
	I0927 01:46:37.101557   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:37.105564   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:46:37.105620   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:46:37.143692   68676 cri.go:89] found id: "d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f"
	I0927 01:46:37.143719   68676 cri.go:89] found id: ""
	I0927 01:46:37.143729   68676 logs.go:276] 1 containers: [d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f]
	I0927 01:46:37.143775   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:37.148405   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:46:37.148484   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:46:37.184914   68676 cri.go:89] found id: "56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647"
	I0927 01:46:37.184943   68676 cri.go:89] found id: ""
	I0927 01:46:37.184954   68676 logs.go:276] 1 containers: [56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647]
	I0927 01:46:37.185013   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:37.189486   68676 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:46:37.189553   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:46:37.235389   68676 cri.go:89] found id: ""
	I0927 01:46:37.235416   68676 logs.go:276] 0 containers: []
	W0927 01:46:37.235424   68676 logs.go:278] No container was found matching "kindnet"
	I0927 01:46:37.235429   68676 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0927 01:46:37.235480   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0927 01:46:37.276239   68676 cri.go:89] found id: "8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f"
	I0927 01:46:37.276266   68676 cri.go:89] found id: "074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c"
	I0927 01:46:37.276272   68676 cri.go:89] found id: ""
	I0927 01:46:37.276282   68676 logs.go:276] 2 containers: [8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f 074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c]
	I0927 01:46:37.276338   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:37.280381   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:37.284423   68676 logs.go:123] Gathering logs for coredns [5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0] ...
	I0927 01:46:37.284440   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0"
	I0927 01:46:37.319790   68676 logs.go:123] Gathering logs for kube-scheduler [22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05] ...
	I0927 01:46:37.319816   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05"
	I0927 01:46:37.358818   68676 logs.go:123] Gathering logs for kube-proxy [d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f] ...
	I0927 01:46:37.358843   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f"
	I0927 01:46:37.398137   68676 logs.go:123] Gathering logs for kube-controller-manager [56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647] ...
	I0927 01:46:37.398168   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647"
	I0927 01:46:37.458672   68676 logs.go:123] Gathering logs for dmesg ...
	I0927 01:46:37.458720   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:46:37.476148   68676 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:46:37.476184   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 01:46:37.604190   68676 logs.go:123] Gathering logs for kube-apiserver [d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef] ...
	I0927 01:46:37.604223   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef"
	I0927 01:46:37.652633   68676 logs.go:123] Gathering logs for etcd [703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0] ...
	I0927 01:46:37.652671   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0"
	I0927 01:46:37.701240   68676 logs.go:123] Gathering logs for storage-provisioner [8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f] ...
	I0927 01:46:37.701273   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f"
	I0927 01:46:37.739555   68676 logs.go:123] Gathering logs for storage-provisioner [074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c] ...
	I0927 01:46:37.739583   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c"
	I0927 01:46:37.781721   68676 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:46:37.781750   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:46:38.209361   68676 logs.go:123] Gathering logs for container status ...
	I0927 01:46:38.209399   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:46:38.261628   68676 logs.go:123] Gathering logs for kubelet ...
	I0927 01:46:38.261658   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
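The log-gathering pass above is the same set of commands minikube executes over SSH. For hand debugging of a run like this, the equivalent can be reproduced on the node (a sketch; the profile name no-preload-521072 is taken from the pod names and the final "Done!" line of this run, and <container-id> stands for one of the IDs listed above):
	minikube ssh -p no-preload-521072            # open a shell on the node for this profile
	sudo journalctl -u kubelet -n 400            # kubelet unit logs, as gathered above
	sudo journalctl -u crio -n 400               # CRI-O unit logs
	sudo crictl ps -a                            # container status
	sudo crictl logs --tail 400 <container-id>   # per-container logs, e.g. the kube-apiserver ID above
	# or, from the host, collect the same material in one shot:
	minikube logs -p no-preload-521072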
	I0927 01:46:39.554328   69534 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 01:46:39.554342   69534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0927 01:46:39.554362   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:46:39.554446   69534 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0927 01:46:39.554456   69534 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0927 01:46:39.554469   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:46:39.557886   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:46:39.557982   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:46:39.558093   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:46:39.558121   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:46:39.558269   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:46:39.558350   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:46:39.558369   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:46:39.558466   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:46:39.558620   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:46:39.558690   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:46:39.558740   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:46:39.558797   69534 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/default-k8s-diff-port-368295/id_rsa Username:docker}
	I0927 01:46:39.559026   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:46:39.559136   69534 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/default-k8s-diff-port-368295/id_rsa Username:docker}
	I0927 01:46:39.569570   69534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33177
	I0927 01:46:39.569981   69534 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:46:39.570364   69534 main.go:141] libmachine: Using API Version  1
	I0927 01:46:39.570383   69534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:46:39.570746   69534 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:46:39.570890   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetState
	I0927 01:46:39.572537   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:46:39.572779   69534 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0927 01:46:39.572795   69534 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0927 01:46:39.572815   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:46:39.575104   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:46:39.575384   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:46:39.575435   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:46:39.575595   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:46:39.575751   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:46:39.575844   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:46:39.575960   69534 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/default-k8s-diff-port-368295/id_rsa Username:docker}
	I0927 01:46:39.784965   69534 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 01:46:39.820986   69534 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-368295" to be "Ready" ...
	I0927 01:46:39.829323   69534 node_ready.go:49] node "default-k8s-diff-port-368295" has status "Ready":"True"
	I0927 01:46:39.829346   69534 node_ready.go:38] duration metric: took 8.333848ms for node "default-k8s-diff-port-368295" to be "Ready" ...
	I0927 01:46:39.829358   69534 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:46:39.836143   69534 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-4d7pk" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:39.940697   69534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0927 01:46:39.955239   69534 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0927 01:46:39.955264   69534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0927 01:46:40.076199   69534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 01:46:40.080720   69534 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0927 01:46:40.080746   69534 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0927 01:46:40.182698   69534 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 01:46:40.182720   69534 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0927 01:46:40.219231   69534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 01:46:40.431480   69534 main.go:141] libmachine: Making call to close driver server
	I0927 01:46:40.431505   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .Close
	I0927 01:46:40.431859   69534 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:46:40.431875   69534 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:46:40.431875   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | Closing plugin on server side
	I0927 01:46:40.431889   69534 main.go:141] libmachine: Making call to close driver server
	I0927 01:46:40.431898   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .Close
	I0927 01:46:40.432126   69534 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:46:40.432146   69534 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:46:40.432189   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | Closing plugin on server side
	I0927 01:46:40.442440   69534 main.go:141] libmachine: Making call to close driver server
	I0927 01:46:40.442468   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .Close
	I0927 01:46:40.442761   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | Closing plugin on server side
	I0927 01:46:40.442785   69534 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:46:40.442815   69534 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:46:41.044597   69534 main.go:141] libmachine: Making call to close driver server
	I0927 01:46:41.044627   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .Close
	I0927 01:46:41.044964   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | Closing plugin on server side
	I0927 01:46:41.045013   69534 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:46:41.045021   69534 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:46:41.045033   69534 main.go:141] libmachine: Making call to close driver server
	I0927 01:46:41.045041   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .Close
	I0927 01:46:41.045254   69534 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:46:41.045267   69534 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:46:41.427791   69534 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.208520131s)
	I0927 01:46:41.427843   69534 main.go:141] libmachine: Making call to close driver server
	I0927 01:46:41.427859   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .Close
	I0927 01:46:41.428175   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | Closing plugin on server side
	I0927 01:46:41.428184   69534 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:46:41.428196   69534 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:46:41.428205   69534 main.go:141] libmachine: Making call to close driver server
	I0927 01:46:41.428213   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .Close
	I0927 01:46:41.428477   69534 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:46:41.428490   69534 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:46:41.428500   69534 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-368295"
	I0927 01:46:41.430399   69534 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0927 01:46:41.431795   69534 addons.go:510] duration metric: took 1.920729429s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0927 01:46:41.844911   69534 pod_ready.go:103] pod "coredns-7c65d6cfc9-4d7pk" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:40.832698   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:46:40.838244   68676 api_server.go:279] https://192.168.50.246:8443/healthz returned 200:
	ok
	I0927 01:46:40.839252   68676 api_server.go:141] control plane version: v1.31.1
	I0927 01:46:40.839270   68676 api_server.go:131] duration metric: took 3.906895557s to wait for apiserver health ...
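The healthz probe logged here is a plain HTTPS GET against the apiserver; it can be reproduced by hand from the node or the host (a sketch: -k skips verification of the cluster CA, and on a default-configured apiserver the healthz path is readable without credentials):
	curl -k https://192.168.50.246:8443/healthz
	# expected response while the control plane is healthy:
	ok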
	I0927 01:46:40.839277   68676 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 01:46:40.839312   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:46:40.839373   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:46:40.879726   68676 cri.go:89] found id: "d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef"
	I0927 01:46:40.879753   68676 cri.go:89] found id: ""
	I0927 01:46:40.879763   68676 logs.go:276] 1 containers: [d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef]
	I0927 01:46:40.879822   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:40.884233   68676 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:46:40.884301   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:46:40.936189   68676 cri.go:89] found id: "703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0"
	I0927 01:46:40.936216   68676 cri.go:89] found id: ""
	I0927 01:46:40.936226   68676 logs.go:276] 1 containers: [703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0]
	I0927 01:46:40.936289   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:40.940805   68676 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:46:40.940885   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:46:40.978662   68676 cri.go:89] found id: "5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0"
	I0927 01:46:40.978683   68676 cri.go:89] found id: ""
	I0927 01:46:40.978693   68676 logs.go:276] 1 containers: [5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0]
	I0927 01:46:40.978757   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:40.983357   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:46:40.983428   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:46:41.027134   68676 cri.go:89] found id: "22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05"
	I0927 01:46:41.027160   68676 cri.go:89] found id: ""
	I0927 01:46:41.027170   68676 logs.go:276] 1 containers: [22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05]
	I0927 01:46:41.027229   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:41.031909   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:46:41.031986   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:46:41.077539   68676 cri.go:89] found id: "d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f"
	I0927 01:46:41.077568   68676 cri.go:89] found id: ""
	I0927 01:46:41.077577   68676 logs.go:276] 1 containers: [d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f]
	I0927 01:46:41.077638   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:41.082237   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:46:41.082314   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:46:41.122413   68676 cri.go:89] found id: "56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647"
	I0927 01:46:41.122437   68676 cri.go:89] found id: ""
	I0927 01:46:41.122446   68676 logs.go:276] 1 containers: [56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647]
	I0927 01:46:41.122501   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:41.127807   68676 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:46:41.127872   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:46:41.174287   68676 cri.go:89] found id: ""
	I0927 01:46:41.174320   68676 logs.go:276] 0 containers: []
	W0927 01:46:41.174331   68676 logs.go:278] No container was found matching "kindnet"
	I0927 01:46:41.174339   68676 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0927 01:46:41.174397   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0927 01:46:41.213192   68676 cri.go:89] found id: "8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f"
	I0927 01:46:41.213219   68676 cri.go:89] found id: "074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c"
	I0927 01:46:41.213225   68676 cri.go:89] found id: ""
	I0927 01:46:41.213234   68676 logs.go:276] 2 containers: [8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f 074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c]
	I0927 01:46:41.213298   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:41.218168   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:41.227165   68676 logs.go:123] Gathering logs for storage-provisioner [8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f] ...
	I0927 01:46:41.227194   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f"
	I0927 01:46:41.269538   68676 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:46:41.269571   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:46:41.691900   68676 logs.go:123] Gathering logs for dmesg ...
	I0927 01:46:41.691943   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:46:41.709639   68676 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:46:41.709682   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 01:46:41.829334   68676 logs.go:123] Gathering logs for etcd [703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0] ...
	I0927 01:46:41.829366   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0"
	I0927 01:46:41.886517   68676 logs.go:123] Gathering logs for kube-scheduler [22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05] ...
	I0927 01:46:41.886552   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05"
	I0927 01:46:41.933012   68676 logs.go:123] Gathering logs for kube-proxy [d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f] ...
	I0927 01:46:41.933035   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f"
	I0927 01:46:41.973881   68676 logs.go:123] Gathering logs for kube-controller-manager [56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647] ...
	I0927 01:46:41.973921   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647"
	I0927 01:46:42.032592   68676 logs.go:123] Gathering logs for container status ...
	I0927 01:46:42.032628   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:46:42.087817   68676 logs.go:123] Gathering logs for kubelet ...
	I0927 01:46:42.087856   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:46:42.162770   68676 logs.go:123] Gathering logs for kube-apiserver [d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef] ...
	I0927 01:46:42.162808   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef"
	I0927 01:46:42.213367   68676 logs.go:123] Gathering logs for coredns [5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0] ...
	I0927 01:46:42.213399   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0"
	I0927 01:46:42.254937   68676 logs.go:123] Gathering logs for storage-provisioner [074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c] ...
	I0927 01:46:42.254963   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c"
	I0927 01:46:44.804112   68676 system_pods.go:59] 8 kube-system pods found
	I0927 01:46:44.804146   68676 system_pods.go:61] "coredns-7c65d6cfc9-7q54t" [f320e945-a1d6-4109-a0cc-5bd4e3c1bfba] Running
	I0927 01:46:44.804153   68676 system_pods.go:61] "etcd-no-preload-521072" [6c63ce89-47bf-4d67-b5db-273a046c4b51] Running
	I0927 01:46:44.804158   68676 system_pods.go:61] "kube-apiserver-no-preload-521072" [e4804d4b-0532-46c7-8579-a829a6c5254c] Running
	I0927 01:46:44.804162   68676 system_pods.go:61] "kube-controller-manager-no-preload-521072" [5029e53b-ae24-41fb-aa58-14faf0440adb] Running
	I0927 01:46:44.804167   68676 system_pods.go:61] "kube-proxy-wkcb8" [ea79339c-b2f0-4cb8-ab57-4f13f689f504] Running
	I0927 01:46:44.804171   68676 system_pods.go:61] "kube-scheduler-no-preload-521072" [b70fd9f0-c131-4c13-b53f-46c650a5dcf8] Running
	I0927 01:46:44.804180   68676 system_pods.go:61] "metrics-server-6867b74b74-cc9pp" [a840ca52-d2b8-47a5-b379-30504658e0d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 01:46:44.804186   68676 system_pods.go:61] "storage-provisioner" [b4595dc3-c439-4615-95b7-2009476c779c] Running
	I0927 01:46:44.804196   68676 system_pods.go:74] duration metric: took 3.964911623s to wait for pod list to return data ...
	I0927 01:46:44.804208   68676 default_sa.go:34] waiting for default service account to be created ...
	I0927 01:46:44.807883   68676 default_sa.go:45] found service account: "default"
	I0927 01:46:44.807907   68676 default_sa.go:55] duration metric: took 3.689984ms for default service account to be created ...
	I0927 01:46:44.807917   68676 system_pods.go:116] waiting for k8s-apps to be running ...
	I0927 01:46:44.812135   68676 system_pods.go:86] 8 kube-system pods found
	I0927 01:46:44.812161   68676 system_pods.go:89] "coredns-7c65d6cfc9-7q54t" [f320e945-a1d6-4109-a0cc-5bd4e3c1bfba] Running
	I0927 01:46:44.812167   68676 system_pods.go:89] "etcd-no-preload-521072" [6c63ce89-47bf-4d67-b5db-273a046c4b51] Running
	I0927 01:46:44.812174   68676 system_pods.go:89] "kube-apiserver-no-preload-521072" [e4804d4b-0532-46c7-8579-a829a6c5254c] Running
	I0927 01:46:44.812178   68676 system_pods.go:89] "kube-controller-manager-no-preload-521072" [5029e53b-ae24-41fb-aa58-14faf0440adb] Running
	I0927 01:46:44.812185   68676 system_pods.go:89] "kube-proxy-wkcb8" [ea79339c-b2f0-4cb8-ab57-4f13f689f504] Running
	I0927 01:46:44.812190   68676 system_pods.go:89] "kube-scheduler-no-preload-521072" [b70fd9f0-c131-4c13-b53f-46c650a5dcf8] Running
	I0927 01:46:44.812200   68676 system_pods.go:89] "metrics-server-6867b74b74-cc9pp" [a840ca52-d2b8-47a5-b379-30504658e0d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 01:46:44.812209   68676 system_pods.go:89] "storage-provisioner" [b4595dc3-c439-4615-95b7-2009476c779c] Running
	I0927 01:46:44.812222   68676 system_pods.go:126] duration metric: took 4.297317ms to wait for k8s-apps to be running ...
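Note that metrics-server-6867b74b74-cc9pp is still Pending with ContainersNotReady in both pod listings above. A quick way to see why is to describe the pod and pull its container logs (a sketch using the pod name from this run; kubectl is already pointed at this cluster per the "Done!" line below, and logs may be empty if the container never started):
	kubectl -n kube-system describe pod metrics-server-6867b74b74-cc9pp
	kubectl -n kube-system logs metrics-server-6867b74b74-cc9pp --tail=50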
	I0927 01:46:44.812234   68676 system_svc.go:44] waiting for kubelet service to be running ....
	I0927 01:46:44.812282   68676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 01:46:44.827911   68676 system_svc.go:56] duration metric: took 15.668154ms WaitForService to wait for kubelet
	I0927 01:46:44.827946   68676 kubeadm.go:582] duration metric: took 4m22.779012486s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 01:46:44.827964   68676 node_conditions.go:102] verifying NodePressure condition ...
	I0927 01:46:44.830688   68676 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 01:46:44.830707   68676 node_conditions.go:123] node cpu capacity is 2
	I0927 01:46:44.830716   68676 node_conditions.go:105] duration metric: took 2.747178ms to run NodePressure ...
	I0927 01:46:44.830725   68676 start.go:241] waiting for startup goroutines ...
	I0927 01:46:44.830732   68676 start.go:246] waiting for cluster config update ...
	I0927 01:46:44.830742   68676 start.go:255] writing updated cluster config ...
	I0927 01:46:44.830990   68676 ssh_runner.go:195] Run: rm -f paused
	I0927 01:46:44.881491   68676 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0927 01:46:44.884307   68676 out.go:177] * Done! kubectl is now configured to use "no-preload-521072" cluster and "default" namespace by default
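Once the "Done!" message appears, the configured context can be verified from the host with standard kubectl commands (nothing profile-specific is assumed beyond the cluster name printed above):
	kubectl config current-context   # expected to show the no-preload-521072 context
	kubectl get pods -A              # should list the kube-system pods enumerated above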
	I0927 01:46:42.397038   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:46:42.397331   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:46:43.845539   69534 pod_ready.go:103] pod "coredns-7c65d6cfc9-4d7pk" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:46.343584   69534 pod_ready.go:103] pod "coredns-7c65d6cfc9-4d7pk" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:48.842505   69534 pod_ready.go:93] pod "coredns-7c65d6cfc9-4d7pk" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:48.842527   69534 pod_ready.go:82] duration metric: took 9.006354643s for pod "coredns-7c65d6cfc9-4d7pk" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:48.842537   69534 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qkbzv" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:48.846753   69534 pod_ready.go:93] pod "coredns-7c65d6cfc9-qkbzv" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:48.846771   69534 pod_ready.go:82] duration metric: took 4.228349ms for pod "coredns-7c65d6cfc9-qkbzv" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:48.846780   69534 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:48.851234   69534 pod_ready.go:93] pod "etcd-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:48.851256   69534 pod_ready.go:82] duration metric: took 4.468727ms for pod "etcd-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:48.851265   69534 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:48.855648   69534 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:48.855669   69534 pod_ready.go:82] duration metric: took 4.398439ms for pod "kube-apiserver-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:48.855678   69534 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:48.860882   69534 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:48.860898   69534 pod_ready.go:82] duration metric: took 5.214278ms for pod "kube-controller-manager-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:48.860906   69534 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kqjdq" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:49.241149   69534 pod_ready.go:93] pod "kube-proxy-kqjdq" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:49.241180   69534 pod_ready.go:82] duration metric: took 380.266777ms for pod "kube-proxy-kqjdq" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:49.241192   69534 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:49.642403   69534 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:49.642437   69534 pod_ready.go:82] duration metric: took 401.235952ms for pod "kube-scheduler-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:49.642448   69534 pod_ready.go:39] duration metric: took 9.813073515s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
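The "extra waiting" pass above polls pods by label until they report Ready. The same check can be expressed with kubectl wait (a sketch; the label selector is one of those listed in the wait line, the others follow the same pattern, and --context assumes the profile name doubles as the kubectl context name):
	kubectl --context default-k8s-diff-port-368295 -n kube-system \
	  wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m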
	I0927 01:46:49.642465   69534 api_server.go:52] waiting for apiserver process to appear ...
	I0927 01:46:49.642518   69534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:46:49.658847   69534 api_server.go:72] duration metric: took 10.147811957s to wait for apiserver process to appear ...
	I0927 01:46:49.658877   69534 api_server.go:88] waiting for apiserver healthz status ...
	I0927 01:46:49.658898   69534 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8444/healthz ...
	I0927 01:46:49.665899   69534 api_server.go:279] https://192.168.61.83:8444/healthz returned 200:
	ok
	I0927 01:46:49.666844   69534 api_server.go:141] control plane version: v1.31.1
	I0927 01:46:49.666867   69534 api_server.go:131] duration metric: took 7.982491ms to wait for apiserver health ...
	I0927 01:46:49.666876   69534 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 01:46:49.843377   69534 system_pods.go:59] 9 kube-system pods found
	I0927 01:46:49.843402   69534 system_pods.go:61] "coredns-7c65d6cfc9-4d7pk" [c84ab26c-2e13-437c-b059-43c8ca1d90c8] Running
	I0927 01:46:49.843408   69534 system_pods.go:61] "coredns-7c65d6cfc9-qkbzv" [e2725448-3f80-45d8-8bd8-49dcf8878f7e] Running
	I0927 01:46:49.843413   69534 system_pods.go:61] "etcd-default-k8s-diff-port-368295" [cf24c93c-bcff-4ffc-b7b2-8e69c070cf92] Running
	I0927 01:46:49.843417   69534 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-368295" [7cb4e15c-d20c-4f93-bf12-d2407edcc877] Running
	I0927 01:46:49.843420   69534 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-368295" [52bc69db-f7b9-40a2-9930-1b3bd321fecf] Running
	I0927 01:46:49.843425   69534 system_pods.go:61] "kube-proxy-kqjdq" [91b96945-0ffe-404f-a0d5-f8729d4248ce] Running
	I0927 01:46:49.843429   69534 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-368295" [bc16cdb1-6e5c-4d19-ab43-cd378a65184d] Running
	I0927 01:46:49.843437   69534 system_pods.go:61] "metrics-server-6867b74b74-d85zg" [579ae063-049c-423c-8f91-91fb4b32e4c3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 01:46:49.843443   69534 system_pods.go:61] "storage-provisioner" [aaa7a054-2eee-45ee-a9bc-c305e53e1273] Running
	I0927 01:46:49.843454   69534 system_pods.go:74] duration metric: took 176.572041ms to wait for pod list to return data ...
	I0927 01:46:49.843466   69534 default_sa.go:34] waiting for default service account to be created ...
	I0927 01:46:50.041031   69534 default_sa.go:45] found service account: "default"
	I0927 01:46:50.041053   69534 default_sa.go:55] duration metric: took 197.577565ms for default service account to be created ...
	I0927 01:46:50.041062   69534 system_pods.go:116] waiting for k8s-apps to be running ...
	I0927 01:46:50.243807   69534 system_pods.go:86] 9 kube-system pods found
	I0927 01:46:50.243834   69534 system_pods.go:89] "coredns-7c65d6cfc9-4d7pk" [c84ab26c-2e13-437c-b059-43c8ca1d90c8] Running
	I0927 01:46:50.243840   69534 system_pods.go:89] "coredns-7c65d6cfc9-qkbzv" [e2725448-3f80-45d8-8bd8-49dcf8878f7e] Running
	I0927 01:46:50.243845   69534 system_pods.go:89] "etcd-default-k8s-diff-port-368295" [cf24c93c-bcff-4ffc-b7b2-8e69c070cf92] Running
	I0927 01:46:50.243849   69534 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-368295" [7cb4e15c-d20c-4f93-bf12-d2407edcc877] Running
	I0927 01:46:50.243853   69534 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-368295" [52bc69db-f7b9-40a2-9930-1b3bd321fecf] Running
	I0927 01:46:50.243856   69534 system_pods.go:89] "kube-proxy-kqjdq" [91b96945-0ffe-404f-a0d5-f8729d4248ce] Running
	I0927 01:46:50.243860   69534 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-368295" [bc16cdb1-6e5c-4d19-ab43-cd378a65184d] Running
	I0927 01:46:50.243866   69534 system_pods.go:89] "metrics-server-6867b74b74-d85zg" [579ae063-049c-423c-8f91-91fb4b32e4c3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 01:46:50.243869   69534 system_pods.go:89] "storage-provisioner" [aaa7a054-2eee-45ee-a9bc-c305e53e1273] Running
	I0927 01:46:50.243879   69534 system_pods.go:126] duration metric: took 202.812704ms to wait for k8s-apps to be running ...
	I0927 01:46:50.243888   69534 system_svc.go:44] waiting for kubelet service to be running ....
	I0927 01:46:50.243931   69534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 01:46:50.260175   69534 system_svc.go:56] duration metric: took 16.279433ms WaitForService to wait for kubelet
	I0927 01:46:50.260203   69534 kubeadm.go:582] duration metric: took 10.749173466s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 01:46:50.260220   69534 node_conditions.go:102] verifying NodePressure condition ...
	I0927 01:46:50.441020   69534 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 01:46:50.441044   69534 node_conditions.go:123] node cpu capacity is 2
	I0927 01:46:50.441052   69534 node_conditions.go:105] duration metric: took 180.827321ms to run NodePressure ...
	I0927 01:46:50.441062   69534 start.go:241] waiting for startup goroutines ...
	I0927 01:46:50.441081   69534 start.go:246] waiting for cluster config update ...
	I0927 01:46:50.441091   69534 start.go:255] writing updated cluster config ...
	I0927 01:46:50.441338   69534 ssh_runner.go:195] Run: rm -f paused
	I0927 01:46:50.492229   69534 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0927 01:46:50.494198   69534 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-368295" cluster and "default" namespace by default
	I0927 01:47:22.398756   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:47:22.399035   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:47:22.399051   69333 kubeadm.go:310] 
	I0927 01:47:22.399125   69333 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0927 01:47:22.399167   69333 kubeadm.go:310] 		timed out waiting for the condition
	I0927 01:47:22.399176   69333 kubeadm.go:310] 
	I0927 01:47:22.399242   69333 kubeadm.go:310] 	This error is likely caused by:
	I0927 01:47:22.399326   69333 kubeadm.go:310] 		- The kubelet is not running
	I0927 01:47:22.399452   69333 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0927 01:47:22.399464   69333 kubeadm.go:310] 
	I0927 01:47:22.399627   69333 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0927 01:47:22.399702   69333 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0927 01:47:22.399750   69333 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0927 01:47:22.399763   69333 kubeadm.go:310] 
	I0927 01:47:22.399908   69333 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0927 01:47:22.400001   69333 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0927 01:47:22.400014   69333 kubeadm.go:310] 
	I0927 01:47:22.400109   69333 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0927 01:47:22.400218   69333 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0927 01:47:22.400331   69333 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0927 01:47:22.400406   69333 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0927 01:47:22.400414   69333 kubeadm.go:310] 
	I0927 01:47:22.401157   69333 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0927 01:47:22.401273   69333 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0927 01:47:22.401342   69333 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0927 01:47:22.401458   69333 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0927 01:47:22.401498   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0927 01:47:22.863316   69333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 01:47:22.878664   69333 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 01:47:22.889118   69333 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 01:47:22.889135   69333 kubeadm.go:157] found existing configuration files:
	
	I0927 01:47:22.889173   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 01:47:22.898966   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 01:47:22.899035   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 01:47:22.911280   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 01:47:22.920628   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 01:47:22.920677   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 01:47:22.929860   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 01:47:22.938794   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 01:47:22.938839   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 01:47:22.947982   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 01:47:22.956785   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 01:47:22.956837   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
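The grep/rm sequence above is minikube's stale-config check: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint. Condensed into a shell loop, it amounts to the following (a sketch to be run on the node; paths and endpoint are taken from the log lines above):
	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
	    || sudo rm -f "/etc/kubernetes/${f}.conf"
	done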
	I0927 01:47:22.966186   69333 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0927 01:47:23.039915   69333 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0927 01:47:23.040017   69333 kubeadm.go:310] [preflight] Running pre-flight checks
	I0927 01:47:23.189097   69333 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0927 01:47:23.189274   69333 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0927 01:47:23.189395   69333 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0927 01:47:23.400731   69333 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0927 01:47:23.402659   69333 out.go:235]   - Generating certificates and keys ...
	I0927 01:47:23.402776   69333 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0927 01:47:23.402855   69333 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0927 01:47:23.402959   69333 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0927 01:47:23.403040   69333 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0927 01:47:23.403162   69333 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0927 01:47:23.403349   69333 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0927 01:47:23.403935   69333 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0927 01:47:23.404260   69333 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0927 01:47:23.404563   69333 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0927 01:47:23.404896   69333 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0927 01:47:23.405050   69333 kubeadm.go:310] [certs] Using the existing "sa" key
	I0927 01:47:23.405121   69333 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0927 01:47:23.466908   69333 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0927 01:47:23.717009   69333 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0927 01:47:23.766225   69333 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0927 01:47:23.961488   69333 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0927 01:47:23.987846   69333 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0927 01:47:23.988724   69333 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0927 01:47:23.988790   69333 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0927 01:47:24.130550   69333 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0927 01:47:24.132276   69333 out.go:235]   - Booting up control plane ...
	I0927 01:47:24.132386   69333 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0927 01:47:24.146415   69333 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0927 01:47:24.147664   69333 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0927 01:47:24.148443   69333 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0927 01:47:24.151623   69333 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0927 01:48:04.153587   69333 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0927 01:48:04.153934   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:48:04.154129   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:48:09.154634   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:48:09.154883   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:48:19.155638   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:48:19.155844   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:48:39.156224   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:48:39.156429   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:49:19.155507   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:49:19.155754   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:49:19.155779   69333 kubeadm.go:310] 
	I0927 01:49:19.155872   69333 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0927 01:49:19.155947   69333 kubeadm.go:310] 		timed out waiting for the condition
	I0927 01:49:19.155958   69333 kubeadm.go:310] 
	I0927 01:49:19.156026   69333 kubeadm.go:310] 	This error is likely caused by:
	I0927 01:49:19.156077   69333 kubeadm.go:310] 		- The kubelet is not running
	I0927 01:49:19.156230   69333 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0927 01:49:19.156242   69333 kubeadm.go:310] 
	I0927 01:49:19.156379   69333 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0927 01:49:19.156434   69333 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0927 01:49:19.156486   69333 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0927 01:49:19.156506   69333 kubeadm.go:310] 
	I0927 01:49:19.156628   69333 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0927 01:49:19.156756   69333 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0927 01:49:19.156775   69333 kubeadm.go:310] 
	I0927 01:49:19.156925   69333 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0927 01:49:19.157022   69333 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0927 01:49:19.157112   69333 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0927 01:49:19.157191   69333 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0927 01:49:19.157202   69333 kubeadm.go:310] 
	I0927 01:49:19.158023   69333 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0927 01:49:19.158149   69333 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0927 01:49:19.158277   69333 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0927 01:49:19.158357   69333 kubeadm.go:394] duration metric: took 7m56.829434682s to StartCluster
	I0927 01:49:19.158404   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:49:19.158477   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:49:19.200705   69333 cri.go:89] found id: ""
	I0927 01:49:19.200729   69333 logs.go:276] 0 containers: []
	W0927 01:49:19.200736   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:49:19.200742   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:49:19.200791   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:49:19.240252   69333 cri.go:89] found id: ""
	I0927 01:49:19.240274   69333 logs.go:276] 0 containers: []
	W0927 01:49:19.240285   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:49:19.240292   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:49:19.240352   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:49:19.275802   69333 cri.go:89] found id: ""
	I0927 01:49:19.275826   69333 logs.go:276] 0 containers: []
	W0927 01:49:19.275834   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:49:19.275840   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:49:19.275894   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:49:19.309317   69333 cri.go:89] found id: ""
	I0927 01:49:19.309342   69333 logs.go:276] 0 containers: []
	W0927 01:49:19.309350   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:49:19.309357   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:49:19.309414   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:49:19.344778   69333 cri.go:89] found id: ""
	I0927 01:49:19.344806   69333 logs.go:276] 0 containers: []
	W0927 01:49:19.344817   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:49:19.344823   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:49:19.344882   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:49:19.379394   69333 cri.go:89] found id: ""
	I0927 01:49:19.379426   69333 logs.go:276] 0 containers: []
	W0927 01:49:19.379438   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:49:19.379445   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:49:19.379502   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:49:19.415349   69333 cri.go:89] found id: ""
	I0927 01:49:19.415376   69333 logs.go:276] 0 containers: []
	W0927 01:49:19.415384   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:49:19.415390   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:49:19.415438   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:49:19.453357   69333 cri.go:89] found id: ""
	I0927 01:49:19.453381   69333 logs.go:276] 0 containers: []
	W0927 01:49:19.453389   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:49:19.453397   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:49:19.453409   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:49:19.530384   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:49:19.530405   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:49:19.530423   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:49:19.643418   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:49:19.643453   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:49:19.688825   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:49:19.688861   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:49:19.745945   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:49:19.745983   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0927 01:49:19.762685   69333 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0927 01:49:19.762739   69333 out.go:270] * 
	W0927 01:49:19.762791   69333 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0927 01:49:19.762804   69333 out.go:270] * 
	W0927 01:49:19.763605   69333 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0927 01:49:19.767393   69333 out.go:201] 
	W0927 01:49:19.768622   69333 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0927 01:49:19.768671   69333 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0927 01:49:19.768690   69333 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0927 01:49:19.771036   69333 out.go:201] 
	
	
	==> CRI-O <==
	Sep 27 01:49:21 old-k8s-version-612261 crio[628]: time="2024-09-27 01:49:21.666625387Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727401761666603878,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bc14724f-9448-4974-9463-f29be1af345d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 01:49:21 old-k8s-version-612261 crio[628]: time="2024-09-27 01:49:21.667158370Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d502cdb5-ec99-4b4d-8447-69598f91db6a name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:49:21 old-k8s-version-612261 crio[628]: time="2024-09-27 01:49:21.667227352Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d502cdb5-ec99-4b4d-8447-69598f91db6a name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:49:21 old-k8s-version-612261 crio[628]: time="2024-09-27 01:49:21.667261567Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d502cdb5-ec99-4b4d-8447-69598f91db6a name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:49:21 old-k8s-version-612261 crio[628]: time="2024-09-27 01:49:21.700384126Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=372046f8-2da2-429c-bfa6-10e85ab1883f name=/runtime.v1.RuntimeService/Version
	Sep 27 01:49:21 old-k8s-version-612261 crio[628]: time="2024-09-27 01:49:21.700490233Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=372046f8-2da2-429c-bfa6-10e85ab1883f name=/runtime.v1.RuntimeService/Version
	Sep 27 01:49:21 old-k8s-version-612261 crio[628]: time="2024-09-27 01:49:21.702002177Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=766210c9-0bb6-4356-b024-296a0545f475 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 01:49:21 old-k8s-version-612261 crio[628]: time="2024-09-27 01:49:21.702416543Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727401761702390872,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=766210c9-0bb6-4356-b024-296a0545f475 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 01:49:21 old-k8s-version-612261 crio[628]: time="2024-09-27 01:49:21.702872273Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3e475976-45a1-4e86-a2d3-288009701d1f name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:49:21 old-k8s-version-612261 crio[628]: time="2024-09-27 01:49:21.702961745Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3e475976-45a1-4e86-a2d3-288009701d1f name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:49:21 old-k8s-version-612261 crio[628]: time="2024-09-27 01:49:21.703008750Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=3e475976-45a1-4e86-a2d3-288009701d1f name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:49:21 old-k8s-version-612261 crio[628]: time="2024-09-27 01:49:21.734662599Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=641f8321-321a-4354-af4b-d206fb4e8cac name=/runtime.v1.RuntimeService/Version
	Sep 27 01:49:21 old-k8s-version-612261 crio[628]: time="2024-09-27 01:49:21.734751506Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=641f8321-321a-4354-af4b-d206fb4e8cac name=/runtime.v1.RuntimeService/Version
	Sep 27 01:49:21 old-k8s-version-612261 crio[628]: time="2024-09-27 01:49:21.736034304Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dbc621a7-0b6c-4932-96b4-f99c4e0268c5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 01:49:21 old-k8s-version-612261 crio[628]: time="2024-09-27 01:49:21.736403135Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727401761736384424,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dbc621a7-0b6c-4932-96b4-f99c4e0268c5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 01:49:21 old-k8s-version-612261 crio[628]: time="2024-09-27 01:49:21.736944916Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=03efc4d2-01f7-4215-ae74-d23311c7ff2b name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:49:21 old-k8s-version-612261 crio[628]: time="2024-09-27 01:49:21.737010979Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=03efc4d2-01f7-4215-ae74-d23311c7ff2b name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:49:21 old-k8s-version-612261 crio[628]: time="2024-09-27 01:49:21.737042845Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=03efc4d2-01f7-4215-ae74-d23311c7ff2b name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:49:21 old-k8s-version-612261 crio[628]: time="2024-09-27 01:49:21.768243298Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9bcb9804-1527-42ae-bb6a-504f286f42a0 name=/runtime.v1.RuntimeService/Version
	Sep 27 01:49:21 old-k8s-version-612261 crio[628]: time="2024-09-27 01:49:21.768353341Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9bcb9804-1527-42ae-bb6a-504f286f42a0 name=/runtime.v1.RuntimeService/Version
	Sep 27 01:49:21 old-k8s-version-612261 crio[628]: time="2024-09-27 01:49:21.769853626Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=964c66c2-c6d4-4fc0-864e-0017c7efaf8b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 01:49:21 old-k8s-version-612261 crio[628]: time="2024-09-27 01:49:21.770230996Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727401761770209654,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=964c66c2-c6d4-4fc0-864e-0017c7efaf8b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 01:49:21 old-k8s-version-612261 crio[628]: time="2024-09-27 01:49:21.770717732Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ed91ffd0-3a32-4236-9355-c3dd0f455321 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:49:21 old-k8s-version-612261 crio[628]: time="2024-09-27 01:49:21.770834773Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ed91ffd0-3a32-4236-9355-c3dd0f455321 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:49:21 old-k8s-version-612261 crio[628]: time="2024-09-27 01:49:21.770870968Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ed91ffd0-3a32-4236-9355-c3dd0f455321 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Sep27 01:40] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051380] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040023] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Sep27 01:41] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.490738] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.597277] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000013] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.637888] systemd-fstab-generator[555]: Ignoring "noauto" option for root device
	[  +0.070410] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.081325] systemd-fstab-generator[568]: Ignoring "noauto" option for root device
	[  +0.210782] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.144654] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.262711] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +6.839165] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.064025] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.828367] systemd-fstab-generator[1004]: Ignoring "noauto" option for root device
	[ +11.175171] kauditd_printk_skb: 46 callbacks suppressed
	[Sep27 01:45] systemd-fstab-generator[5075]: Ignoring "noauto" option for root device
	[Sep27 01:47] systemd-fstab-generator[5347]: Ignoring "noauto" option for root device
	[  +0.069319] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 01:49:21 up 8 min,  0 users,  load average: 0.23, 0.16, 0.10
	Linux old-k8s-version-612261 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Sep 27 01:49:19 old-k8s-version-612261 kubelet[5523]: internal/singleflight.(*Group).doCall(0x70c5750, 0xc000bbad20, 0xc000b93980, 0x23, 0xc000423780)
	Sep 27 01:49:19 old-k8s-version-612261 kubelet[5523]:         /usr/local/go/src/internal/singleflight/singleflight.go:95 +0x2e
	Sep 27 01:49:19 old-k8s-version-612261 kubelet[5523]: created by internal/singleflight.(*Group).DoChan
	Sep 27 01:49:19 old-k8s-version-612261 kubelet[5523]:         /usr/local/go/src/internal/singleflight/singleflight.go:88 +0x2cc
	Sep 27 01:49:19 old-k8s-version-612261 kubelet[5523]: goroutine 163 [runnable]:
	Sep 27 01:49:19 old-k8s-version-612261 kubelet[5523]: net._C2func_getaddrinfo(0xc000b78ec0, 0x0, 0xc000be1800, 0xc0009d0dd8, 0x0, 0x0, 0x0)
	Sep 27 01:49:19 old-k8s-version-612261 kubelet[5523]:         _cgo_gotypes.go:94 +0x55
	Sep 27 01:49:19 old-k8s-version-612261 kubelet[5523]: net.cgoLookupIPCNAME.func1(0xc000b78ec0, 0x20, 0x20, 0xc000be1800, 0xc0009d0dd8, 0x0, 0xc0007f8ea0, 0x57a492)
	Sep 27 01:49:19 old-k8s-version-612261 kubelet[5523]:         /usr/local/go/src/net/cgo_unix.go:161 +0xc5
	Sep 27 01:49:19 old-k8s-version-612261 kubelet[5523]: net.cgoLookupIPCNAME(0x48ab5d6, 0x3, 0xc000b93950, 0x1f, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	Sep 27 01:49:19 old-k8s-version-612261 kubelet[5523]:         /usr/local/go/src/net/cgo_unix.go:161 +0x16b
	Sep 27 01:49:19 old-k8s-version-612261 kubelet[5523]: net.cgoIPLookup(0xc000bd1bc0, 0x48ab5d6, 0x3, 0xc000b93950, 0x1f)
	Sep 27 01:49:19 old-k8s-version-612261 kubelet[5523]:         /usr/local/go/src/net/cgo_unix.go:218 +0x67
	Sep 27 01:49:19 old-k8s-version-612261 kubelet[5523]: created by net.cgoLookupIP
	Sep 27 01:49:19 old-k8s-version-612261 kubelet[5523]:         /usr/local/go/src/net/cgo_unix.go:228 +0xc7
	Sep 27 01:49:19 old-k8s-version-612261 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Sep 27 01:49:19 old-k8s-version-612261 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Sep 27 01:49:19 old-k8s-version-612261 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Sep 27 01:49:19 old-k8s-version-612261 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Sep 27 01:49:19 old-k8s-version-612261 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Sep 27 01:49:19 old-k8s-version-612261 kubelet[5580]: I0927 01:49:19.779844    5580 server.go:416] Version: v1.20.0
	Sep 27 01:49:19 old-k8s-version-612261 kubelet[5580]: I0927 01:49:19.780138    5580 server.go:837] Client rotation is on, will bootstrap in background
	Sep 27 01:49:19 old-k8s-version-612261 kubelet[5580]: I0927 01:49:19.782279    5580 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Sep 27 01:49:19 old-k8s-version-612261 kubelet[5580]: W0927 01:49:19.783250    5580 manager.go:159] Cannot detect current cgroup on cgroup v2
	Sep 27 01:49:19 old-k8s-version-612261 kubelet[5580]: I0927 01:49:19.783676    5580 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-612261 -n old-k8s-version-612261
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-612261 -n old-k8s-version-612261: exit status 2 (221.951476ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-612261" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (716.05s)
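For reference, the troubleshooting that the captured kubeadm output recommends can be run against this profile roughly as follows. This is a sketch only: every command string is copied from the log above, and the profile name old-k8s-version-612261 is the one used in this run.

	out/minikube-linux-amd64 ssh -p old-k8s-version-612261 "sudo systemctl status kubelet"
	out/minikube-linux-amd64 ssh -p old-k8s-version-612261 "sudo journalctl -xeu kubelet"
	out/minikube-linux-amd64 ssh -p old-k8s-version-612261 "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# per the suggestion logged above, a retry could pass the cgroup-driver hint explicitly:
	out/minikube-linux-amd64 start -p old-k8s-version-612261 --extra-config=kubelet.cgroup-driver=systemd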

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-368295 -n default-k8s-diff-port-368295
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-368295 -n default-k8s-diff-port-368295: exit status 3 (3.168160982s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0927 01:37:39.079661   69400 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.83:22: connect: no route to host
	E0927 01:37:39.079689   69400 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.83:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-368295 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-368295 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.151942752s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.83:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-368295 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-368295 -n default-k8s-diff-port-368295
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-368295 -n default-k8s-diff-port-368295: exit status 3 (3.063711802s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0927 01:37:48.295764   69481 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.83:22: connect: no route to host
	E0927 01:37:48.295793   69481 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.83:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-368295" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)
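Per the issue box printed above, a logs bundle for this profile could be collected like this (a sketch; the binary path, profile name, and status format string are the ones already used elsewhere in this report):

	out/minikube-linux-amd64 -p default-k8s-diff-port-368295 logs --file=logs.txt
	out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-368295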

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.34s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0927 01:46:33.559939   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/functional-774677/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-245911 -n embed-certs-245911
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-09-27 01:55:05.966537562 +0000 UTC m=+6022.152145814
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-245911 -n embed-certs-245911
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-245911 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-245911 logs -n 25: (2.217111132s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p NoKubernetes-719096 sudo                            | NoKubernetes-719096          | jenkins | v1.34.0 | 27 Sep 24 01:32 UTC |                     |
	|         | systemctl is-active --quiet                            |                              |         |         |                     |                     |
	|         | service kubelet                                        |                              |         |         |                     |                     |
	| stop    | -p NoKubernetes-719096                                 | NoKubernetes-719096          | jenkins | v1.34.0 | 27 Sep 24 01:32 UTC | 27 Sep 24 01:32 UTC |
	| start   | -p NoKubernetes-719096                                 | NoKubernetes-719096          | jenkins | v1.34.0 | 27 Sep 24 01:32 UTC | 27 Sep 24 01:33 UTC |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| ssh     | -p NoKubernetes-719096 sudo                            | NoKubernetes-719096          | jenkins | v1.34.0 | 27 Sep 24 01:33 UTC |                     |
	|         | systemctl is-active --quiet                            |                              |         |         |                     |                     |
	|         | service kubelet                                        |                              |         |         |                     |                     |
	| delete  | -p NoKubernetes-719096                                 | NoKubernetes-719096          | jenkins | v1.34.0 | 27 Sep 24 01:33 UTC | 27 Sep 24 01:33 UTC |
	| start   | -p embed-certs-245911                                  | embed-certs-245911           | jenkins | v1.34.0 | 27 Sep 24 01:33 UTC | 27 Sep 24 01:34 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-521072             | no-preload-521072            | jenkins | v1.34.0 | 27 Sep 24 01:33 UTC | 27 Sep 24 01:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-521072                                   | no-preload-521072            | jenkins | v1.34.0 | 27 Sep 24 01:33 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-595331                              | cert-expiration-595331       | jenkins | v1.34.0 | 27 Sep 24 01:33 UTC | 27 Sep 24 01:33 UTC |
	| delete  | -p                                                     | disable-driver-mounts-630210 | jenkins | v1.34.0 | 27 Sep 24 01:33 UTC | 27 Sep 24 01:33 UTC |
	|         | disable-driver-mounts-630210                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-368295 | jenkins | v1.34.0 | 27 Sep 24 01:33 UTC | 27 Sep 24 01:35 UTC |
	|         | default-k8s-diff-port-368295                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-245911            | embed-certs-245911           | jenkins | v1.34.0 | 27 Sep 24 01:34 UTC | 27 Sep 24 01:34 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-245911                                  | embed-certs-245911           | jenkins | v1.34.0 | 27 Sep 24 01:34 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-368295  | default-k8s-diff-port-368295 | jenkins | v1.34.0 | 27 Sep 24 01:35 UTC | 27 Sep 24 01:35 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-368295 | jenkins | v1.34.0 | 27 Sep 24 01:35 UTC |                     |
	|         | default-k8s-diff-port-368295                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-521072                  | no-preload-521072            | jenkins | v1.34.0 | 27 Sep 24 01:35 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-612261        | old-k8s-version-612261       | jenkins | v1.34.0 | 27 Sep 24 01:35 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p no-preload-521072                                   | no-preload-521072            | jenkins | v1.34.0 | 27 Sep 24 01:35 UTC | 27 Sep 24 01:46 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-245911                 | embed-certs-245911           | jenkins | v1.34.0 | 27 Sep 24 01:37 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-612261                              | old-k8s-version-612261       | jenkins | v1.34.0 | 27 Sep 24 01:37 UTC | 27 Sep 24 01:37 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-245911                                  | embed-certs-245911           | jenkins | v1.34.0 | 27 Sep 24 01:37 UTC | 27 Sep 24 01:46 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-612261             | old-k8s-version-612261       | jenkins | v1.34.0 | 27 Sep 24 01:37 UTC | 27 Sep 24 01:37 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-612261                              | old-k8s-version-612261       | jenkins | v1.34.0 | 27 Sep 24 01:37 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-368295       | default-k8s-diff-port-368295 | jenkins | v1.34.0 | 27 Sep 24 01:37 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-368295 | jenkins | v1.34.0 | 27 Sep 24 01:37 UTC | 27 Sep 24 01:46 UTC |
	|         | default-k8s-diff-port-368295                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
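	
	For reference, any of the start invocations recorded in the table above can be replayed by hand with the same minikube v1.34.0 binary. A minimal shell sketch of the old-k8s-version-612261 restart, with the profile name and flags copied verbatim from the table and a working local KVM/libvirt environment assumed:
	
	  minikube start -p old-k8s-version-612261 \
	    --memory=2200 --alsologtostderr --wait=true \
	    --kvm-network=default --kvm-qemu-uri=qemu:///system \
	    --disable-driver-mounts --keep-context=false \
	    --driver=kvm2 --container-runtime=crio \
	    --kubernetes-version=v1.20.0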
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 01:37:48
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
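	
	The header above documents the klog line format ([IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg), so severity is the first character of every entry. A minimal sketch for triaging a saved copy of this log, assuming it has been written to a hypothetical file named last-start.log:
	
	  grep -E '^[[:space:]]*[WE][0-9]{4} ' last-start.log   # warning and error entries only
	  grep -c 'no route to host' last-start.log             # count the repeated SSH dial failures (logged at info level)
	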
	I0927 01:37:48.335921   69534 out.go:345] Setting OutFile to fd 1 ...
	I0927 01:37:48.336188   69534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 01:37:48.336196   69534 out.go:358] Setting ErrFile to fd 2...
	I0927 01:37:48.336201   69534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 01:37:48.336368   69534 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-14935/.minikube/bin
	I0927 01:37:48.336901   69534 out.go:352] Setting JSON to false
	I0927 01:37:48.337754   69534 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8413,"bootTime":1727392655,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 01:37:48.337841   69534 start.go:139] virtualization: kvm guest
	I0927 01:37:48.340035   69534 out.go:177] * [default-k8s-diff-port-368295] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0927 01:37:48.341151   69534 out.go:177]   - MINIKUBE_LOCATION=19711
	I0927 01:37:48.341211   69534 notify.go:220] Checking for updates...
	I0927 01:37:48.343607   69534 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 01:37:48.344933   69534 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 01:37:48.346113   69534 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-14935/.minikube
	I0927 01:37:48.347142   69534 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0927 01:37:48.348261   69534 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 01:37:48.349842   69534 config.go:182] Loaded profile config "default-k8s-diff-port-368295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 01:37:48.350212   69534 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:37:48.350278   69534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:37:48.365272   69534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44347
	I0927 01:37:48.365662   69534 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:37:48.366137   69534 main.go:141] libmachine: Using API Version  1
	I0927 01:37:48.366162   69534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:37:48.366548   69534 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:37:48.366713   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:37:48.366938   69534 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 01:37:48.367236   69534 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:37:48.367265   69534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:37:48.381678   69534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39857
	I0927 01:37:48.382169   69534 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:37:48.382627   69534 main.go:141] libmachine: Using API Version  1
	I0927 01:37:48.382650   69534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:37:48.382911   69534 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:37:48.383023   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:37:48.415092   69534 out.go:177] * Using the kvm2 driver based on existing profile
	I0927 01:37:48.416340   69534 start.go:297] selected driver: kvm2
	I0927 01:37:48.416354   69534 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-368295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-368295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.83 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 01:37:48.416459   69534 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 01:37:48.417093   69534 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 01:37:48.417164   69534 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19711-14935/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0927 01:37:48.432138   69534 install.go:137] /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I0927 01:37:48.432534   69534 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 01:37:48.432563   69534 cni.go:84] Creating CNI manager for ""
	I0927 01:37:48.432604   69534 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:37:48.432635   69534 start.go:340] cluster config:
	{Name:default-k8s-diff-port-368295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-368295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.83 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 01:37:48.432737   69534 iso.go:125] acquiring lock: {Name:mkc202a14fbe20838e31e7efc444c4f65351f9ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 01:37:48.435057   69534 out.go:177] * Starting "default-k8s-diff-port-368295" primary control-plane node in "default-k8s-diff-port-368295" cluster
	I0927 01:37:48.436502   69534 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 01:37:48.436543   69534 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0927 01:37:48.436557   69534 cache.go:56] Caching tarball of preloaded images
	I0927 01:37:48.436624   69534 preload.go:172] Found /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0927 01:37:48.436634   69534 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0927 01:37:48.436718   69534 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/config.json ...
	I0927 01:37:48.436885   69534 start.go:360] acquireMachinesLock for default-k8s-diff-port-368295: {Name:mkef866a3f2eb57063758ef06140cca3a2e40b18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 01:37:50.823565   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:37:53.895575   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:37:59.975554   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:03.047567   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:09.127558   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:12.199592   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:18.279516   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:21.351643   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:27.435515   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:30.503604   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:36.583590   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:39.655593   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:45.735581   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:48.807587   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:54.887542   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:57.959601   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:04.039570   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:07.111555   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:13.191559   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:16.263625   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:22.343607   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:25.415561   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:31.495531   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:34.567598   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:40.647577   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:43.719602   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:49.799620   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:52.871596   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:58.951600   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:40:02.023635   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:40:08.103596   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:40:11.175614   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:40:17.255583   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:40:20.327522   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:40:26.407598   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:40:29.479580   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:40:32.484148   69234 start.go:364] duration metric: took 3m6.827897292s to acquireMachinesLock for "embed-certs-245911"
	I0927 01:40:32.484202   69234 start.go:96] Skipping create...Using existing machine configuration
	I0927 01:40:32.484210   69234 fix.go:54] fixHost starting: 
	I0927 01:40:32.484708   69234 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:40:32.484758   69234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:40:32.500356   69234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41925
	I0927 01:40:32.500869   69234 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:40:32.501356   69234 main.go:141] libmachine: Using API Version  1
	I0927 01:40:32.501376   69234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:40:32.501678   69234 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:40:32.501872   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	I0927 01:40:32.502014   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetState
	I0927 01:40:32.503863   69234 fix.go:112] recreateIfNeeded on embed-certs-245911: state=Stopped err=<nil>
	I0927 01:40:32.503884   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	W0927 01:40:32.504047   69234 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 01:40:32.506829   69234 out.go:177] * Restarting existing kvm2 VM for "embed-certs-245911" ...
	I0927 01:40:32.481407   68676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 01:40:32.481445   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetMachineName
	I0927 01:40:32.481786   68676 buildroot.go:166] provisioning hostname "no-preload-521072"
	I0927 01:40:32.481815   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetMachineName
	I0927 01:40:32.482031   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:40:32.483999   68676 machine.go:96] duration metric: took 4m37.428764548s to provisionDockerMachine
	I0927 01:40:32.484048   68676 fix.go:56] duration metric: took 4m37.449461246s for fixHost
	I0927 01:40:32.484055   68676 start.go:83] releasing machines lock for "no-preload-521072", held for 4m37.449534693s
	W0927 01:40:32.484075   68676 start.go:714] error starting host: provision: host is not running
	W0927 01:40:32.484176   68676 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0927 01:40:32.484183   68676 start.go:729] Will try again in 5 seconds ...
	I0927 01:40:32.508417   69234 main.go:141] libmachine: (embed-certs-245911) Calling .Start
	I0927 01:40:32.508598   69234 main.go:141] libmachine: (embed-certs-245911) Ensuring networks are active...
	I0927 01:40:32.509477   69234 main.go:141] libmachine: (embed-certs-245911) Ensuring network default is active
	I0927 01:40:32.509830   69234 main.go:141] libmachine: (embed-certs-245911) Ensuring network mk-embed-certs-245911 is active
	I0927 01:40:32.510208   69234 main.go:141] libmachine: (embed-certs-245911) Getting domain xml...
	I0927 01:40:32.510838   69234 main.go:141] libmachine: (embed-certs-245911) Creating domain...
	I0927 01:40:33.718381   69234 main.go:141] libmachine: (embed-certs-245911) Waiting to get IP...
	I0927 01:40:33.719223   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:33.719554   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:33.719611   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:33.719550   70125 retry.go:31] will retry after 265.21442ms: waiting for machine to come up
	I0927 01:40:33.986199   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:33.986700   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:33.986728   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:33.986658   70125 retry.go:31] will retry after 308.926274ms: waiting for machine to come up
	I0927 01:40:34.297317   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:34.297734   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:34.297755   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:34.297697   70125 retry.go:31] will retry after 466.52815ms: waiting for machine to come up
	I0927 01:40:34.765171   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:34.765616   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:34.765643   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:34.765570   70125 retry.go:31] will retry after 510.417499ms: waiting for machine to come up
	I0927 01:40:35.277175   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:35.277547   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:35.277576   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:35.277488   70125 retry.go:31] will retry after 522.865286ms: waiting for machine to come up
	I0927 01:40:37.485696   68676 start.go:360] acquireMachinesLock for no-preload-521072: {Name:mkef866a3f2eb57063758ef06140cca3a2e40b18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 01:40:35.802177   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:35.802620   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:35.802646   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:35.802584   70125 retry.go:31] will retry after 611.490499ms: waiting for machine to come up
	I0927 01:40:36.415249   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:36.415733   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:36.415793   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:36.415709   70125 retry.go:31] will retry after 744.420766ms: waiting for machine to come up
	I0927 01:40:37.161647   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:37.162076   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:37.162112   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:37.162022   70125 retry.go:31] will retry after 1.464523837s: waiting for machine to come up
	I0927 01:40:38.627935   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:38.628275   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:38.628302   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:38.628237   70125 retry.go:31] will retry after 1.840524237s: waiting for machine to come up
	I0927 01:40:40.471433   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:40.471823   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:40.471851   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:40.471781   70125 retry.go:31] will retry after 1.9424331s: waiting for machine to come up
	I0927 01:40:42.416527   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:42.416978   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:42.417007   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:42.416935   70125 retry.go:31] will retry after 2.553410529s: waiting for machine to come up
	I0927 01:40:44.973083   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:44.973446   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:44.973465   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:44.973402   70125 retry.go:31] will retry after 3.286267983s: waiting for machine to come up
	I0927 01:40:48.260792   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:48.261216   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:48.261241   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:48.261179   70125 retry.go:31] will retry after 3.302667041s: waiting for machine to come up
	I0927 01:40:52.800240   69333 start.go:364] duration metric: took 3m25.347970249s to acquireMachinesLock for "old-k8s-version-612261"
	I0927 01:40:52.800310   69333 start.go:96] Skipping create...Using existing machine configuration
	I0927 01:40:52.800317   69333 fix.go:54] fixHost starting: 
	I0927 01:40:52.800742   69333 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:40:52.800800   69333 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:40:52.818217   69333 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45095
	I0927 01:40:52.818644   69333 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:40:52.819065   69333 main.go:141] libmachine: Using API Version  1
	I0927 01:40:52.819086   69333 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:40:52.819408   69333 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:40:52.819544   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	I0927 01:40:52.819646   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetState
	I0927 01:40:52.820921   69333 fix.go:112] recreateIfNeeded on old-k8s-version-612261: state=Stopped err=<nil>
	I0927 01:40:52.820956   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	W0927 01:40:52.821110   69333 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 01:40:52.823209   69333 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-612261" ...
	I0927 01:40:51.567691   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.568205   69234 main.go:141] libmachine: (embed-certs-245911) Found IP for machine: 192.168.39.158
	I0927 01:40:51.568241   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has current primary IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.568250   69234 main.go:141] libmachine: (embed-certs-245911) Reserving static IP address...
	I0927 01:40:51.568731   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "embed-certs-245911", mac: "52:54:00:bd:42:a3", ip: "192.168.39.158"} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:51.568764   69234 main.go:141] libmachine: (embed-certs-245911) DBG | skip adding static IP to network mk-embed-certs-245911 - found existing host DHCP lease matching {name: "embed-certs-245911", mac: "52:54:00:bd:42:a3", ip: "192.168.39.158"}
	I0927 01:40:51.568781   69234 main.go:141] libmachine: (embed-certs-245911) Reserved static IP address: 192.168.39.158
	I0927 01:40:51.568798   69234 main.go:141] libmachine: (embed-certs-245911) Waiting for SSH to be available...
	I0927 01:40:51.568806   69234 main.go:141] libmachine: (embed-certs-245911) DBG | Getting to WaitForSSH function...
	I0927 01:40:51.570819   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.571139   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:51.571167   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.571321   69234 main.go:141] libmachine: (embed-certs-245911) DBG | Using SSH client type: external
	I0927 01:40:51.571370   69234 main.go:141] libmachine: (embed-certs-245911) DBG | Using SSH private key: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/embed-certs-245911/id_rsa (-rw-------)
	I0927 01:40:51.571401   69234 main.go:141] libmachine: (embed-certs-245911) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.158 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19711-14935/.minikube/machines/embed-certs-245911/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 01:40:51.571414   69234 main.go:141] libmachine: (embed-certs-245911) DBG | About to run SSH command:
	I0927 01:40:51.571422   69234 main.go:141] libmachine: (embed-certs-245911) DBG | exit 0
	I0927 01:40:51.691525   69234 main.go:141] libmachine: (embed-certs-245911) DBG | SSH cmd err, output: <nil>: 
	I0927 01:40:51.691953   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetConfigRaw
	I0927 01:40:51.692573   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetIP
	I0927 01:40:51.695121   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.695541   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:51.695572   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.695871   69234 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/embed-certs-245911/config.json ...
	I0927 01:40:51.696087   69234 machine.go:93] provisionDockerMachine start ...
	I0927 01:40:51.696109   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	I0927 01:40:51.696312   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:40:51.698740   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.699086   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:51.699112   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.699229   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:40:51.699415   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:51.699552   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:51.699679   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:40:51.699810   69234 main.go:141] libmachine: Using SSH client type: native
	I0927 01:40:51.699998   69234 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0927 01:40:51.700011   69234 main.go:141] libmachine: About to run SSH command:
	hostname
	I0927 01:40:51.799534   69234 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0927 01:40:51.799559   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetMachineName
	I0927 01:40:51.799764   69234 buildroot.go:166] provisioning hostname "embed-certs-245911"
	I0927 01:40:51.799792   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetMachineName
	I0927 01:40:51.799987   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:40:51.802464   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.802819   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:51.802844   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.802960   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:40:51.803131   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:51.803290   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:51.803502   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:40:51.803672   69234 main.go:141] libmachine: Using SSH client type: native
	I0927 01:40:51.803868   69234 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0927 01:40:51.803888   69234 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-245911 && echo "embed-certs-245911" | sudo tee /etc/hostname
	I0927 01:40:51.917988   69234 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-245911
	
	I0927 01:40:51.918019   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:40:51.920484   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.920800   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:51.920831   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.921041   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:40:51.921224   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:51.921383   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:51.921511   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:40:51.921693   69234 main.go:141] libmachine: Using SSH client type: native
	I0927 01:40:51.921883   69234 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0927 01:40:51.921901   69234 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-245911' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-245911/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-245911' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 01:40:52.028582   69234 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 01:40:52.028609   69234 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19711-14935/.minikube CaCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19711-14935/.minikube}
	I0927 01:40:52.028672   69234 buildroot.go:174] setting up certificates
	I0927 01:40:52.028686   69234 provision.go:84] configureAuth start
	I0927 01:40:52.028704   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetMachineName
	I0927 01:40:52.029001   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetIP
	I0927 01:40:52.031742   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.032088   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:52.032117   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.032273   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:40:52.034392   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.034733   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:52.034754   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.034905   69234 provision.go:143] copyHostCerts
	I0927 01:40:52.034956   69234 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem, removing ...
	I0927 01:40:52.034969   69234 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem
	I0927 01:40:52.035042   69234 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem (1078 bytes)
	I0927 01:40:52.035172   69234 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem, removing ...
	I0927 01:40:52.035185   69234 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem
	I0927 01:40:52.035224   69234 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem (1123 bytes)
	I0927 01:40:52.035319   69234 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem, removing ...
	I0927 01:40:52.035329   69234 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem
	I0927 01:40:52.035363   69234 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem (1675 bytes)
	I0927 01:40:52.035433   69234 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem org=jenkins.embed-certs-245911 san=[127.0.0.1 192.168.39.158 embed-certs-245911 localhost minikube]
	I0927 01:40:52.206591   69234 provision.go:177] copyRemoteCerts
	I0927 01:40:52.206657   69234 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 01:40:52.206724   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:40:52.209445   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.209770   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:52.209792   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.209995   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:40:52.210234   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:52.210416   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:40:52.210578   69234 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/embed-certs-245911/id_rsa Username:docker}
	I0927 01:40:52.290176   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0927 01:40:52.313645   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0927 01:40:52.336446   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0927 01:40:52.359182   69234 provision.go:87] duration metric: took 330.481958ms to configureAuth
	I0927 01:40:52.359214   69234 buildroot.go:189] setting minikube options for container-runtime
	I0927 01:40:52.359464   69234 config.go:182] Loaded profile config "embed-certs-245911": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 01:40:52.359551   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:40:52.362163   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.362488   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:52.362513   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.362670   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:40:52.362826   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:52.362976   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:52.363133   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:40:52.363334   69234 main.go:141] libmachine: Using SSH client type: native
	I0927 01:40:52.363532   69234 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0927 01:40:52.363553   69234 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 01:40:52.574326   69234 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 01:40:52.574354   69234 machine.go:96] duration metric: took 878.253718ms to provisionDockerMachine
	I0927 01:40:52.574368   69234 start.go:293] postStartSetup for "embed-certs-245911" (driver="kvm2")
	I0927 01:40:52.574381   69234 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 01:40:52.574398   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	I0927 01:40:52.574688   69234 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 01:40:52.574714   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:40:52.577727   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.578035   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:52.578060   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.578227   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:40:52.578411   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:52.578555   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:40:52.578735   69234 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/embed-certs-245911/id_rsa Username:docker}
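	The ssh_runner/sshutil lines above amount to opening an SSH session with the machine's generated key and running one command at a time on the guest. A minimal Go sketch of that pattern (illustrative only, not minikube's actual ssh_runner; the address, user and key path are copied from the log, and runCommand is a name invented here):

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runCommand opens an SSH session using a private key file and returns the
// combined output of a single command, roughly what ssh_runner.Run does.
func runCommand(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VMs only
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runCommand("192.168.39.158:22", "docker",
		"/home/jenkins/minikube-integration/19711-14935/.minikube/machines/embed-certs-245911/id_rsa",
		"cat /etc/os-release")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Print(out)
}
```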
	I0927 01:40:52.658636   69234 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 01:40:52.663048   69234 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 01:40:52.663077   69234 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/addons for local assets ...
	I0927 01:40:52.663147   69234 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/files for local assets ...
	I0927 01:40:52.663223   69234 filesync.go:149] local asset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> 221382.pem in /etc/ssl/certs
	I0927 01:40:52.663322   69234 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 01:40:52.673347   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /etc/ssl/certs/221382.pem (1708 bytes)
	I0927 01:40:52.697092   69234 start.go:296] duration metric: took 122.71069ms for postStartSetup
	I0927 01:40:52.697126   69234 fix.go:56] duration metric: took 20.212915975s for fixHost
	I0927 01:40:52.697145   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:40:52.699817   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.700173   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:52.700202   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.700364   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:40:52.700558   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:52.700735   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:52.700921   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:40:52.701097   69234 main.go:141] libmachine: Using SSH client type: native
	I0927 01:40:52.701269   69234 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0927 01:40:52.701285   69234 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 01:40:52.800080   69234 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727401252.775762391
	
	I0927 01:40:52.800102   69234 fix.go:216] guest clock: 1727401252.775762391
	I0927 01:40:52.800111   69234 fix.go:229] Guest: 2024-09-27 01:40:52.775762391 +0000 UTC Remote: 2024-09-27 01:40:52.697129165 +0000 UTC m=+207.179045808 (delta=78.633226ms)
	I0927 01:40:52.800145   69234 fix.go:200] guest clock delta is within tolerance: 78.633226ms
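	The fix.go lines above compare the guest clock (the output of `date +%s.%N`) with the host clock and accept the machine if the delta stays inside a tolerance. A rough Go sketch of that check (the parsing and the one-second tolerance are assumptions made here for illustration):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta parses `date +%s.%N` output and returns how far the
// guest clock is from the given local reference time.
func guestClockDelta(output string, local time.Time) (time.Duration, error) {
	parts := strings.Split(strings.TrimSpace(output), ".")
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) > 1 {
		// %N prints nanoseconds, nine digits.
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return 0, err
		}
	}
	guest := time.Unix(sec, nsec)
	return local.Sub(guest), nil
}

func main() {
	delta, err := guestClockDelta("1727401252.775762391", time.Now())
	if err != nil {
		panic(err)
	}
	if delta < time.Second && delta > -time.Second { // assumed tolerance
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock skewed by %v, would resync\n", delta)
	}
}
```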
	I0927 01:40:52.800152   69234 start.go:83] releasing machines lock for "embed-certs-245911", held for 20.315972034s
	I0927 01:40:52.800183   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	I0927 01:40:52.800495   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetIP
	I0927 01:40:52.803196   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.803657   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:52.803700   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.803874   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	I0927 01:40:52.804419   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	I0927 01:40:52.804610   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	I0927 01:40:52.804731   69234 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 01:40:52.804771   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:40:52.804813   69234 ssh_runner.go:195] Run: cat /version.json
	I0927 01:40:52.804837   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:40:52.807320   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.807346   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.807680   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:52.807731   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:52.807759   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.807807   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.807916   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:40:52.808070   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:40:52.808150   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:52.808262   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:52.808331   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:40:52.808384   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:40:52.808468   69234 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/embed-certs-245911/id_rsa Username:docker}
	I0927 01:40:52.808522   69234 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/embed-certs-245911/id_rsa Username:docker}
	I0927 01:40:52.908963   69234 ssh_runner.go:195] Run: systemctl --version
	I0927 01:40:52.915158   69234 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 01:40:53.067605   69234 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 01:40:53.074171   69234 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 01:40:53.074241   69234 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 01:40:53.091718   69234 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0927 01:40:53.091742   69234 start.go:495] detecting cgroup driver to use...
	I0927 01:40:53.091813   69234 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 01:40:53.108730   69234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 01:40:53.122920   69234 docker.go:217] disabling cri-docker service (if available) ...
	I0927 01:40:53.122984   69234 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 01:40:53.137487   69234 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 01:40:53.152420   69234 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 01:40:53.269491   69234 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 01:40:53.417893   69234 docker.go:233] disabling docker service ...
	I0927 01:40:53.417951   69234 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 01:40:53.442201   69234 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 01:40:53.459920   69234 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 01:40:53.589768   69234 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 01:40:53.719203   69234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 01:40:53.733145   69234 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 01:40:53.751853   69234 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0927 01:40:53.751919   69234 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:40:53.763230   69234 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 01:40:53.763294   69234 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:40:53.774864   69234 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:40:53.786149   69234 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:40:53.797167   69234 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 01:40:53.808495   69234 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:40:53.819285   69234 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:40:53.838497   69234 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:40:53.850490   69234 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 01:40:53.860309   69234 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0927 01:40:53.860377   69234 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0927 01:40:53.875533   69234 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 01:40:53.885752   69234 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:40:54.014352   69234 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0927 01:40:54.107866   69234 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 01:40:54.107926   69234 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 01:40:54.113206   69234 start.go:563] Will wait 60s for crictl version
	I0927 01:40:54.113256   69234 ssh_runner.go:195] Run: which crictl
	I0927 01:40:54.117229   69234 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 01:40:54.156365   69234 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0927 01:40:54.156459   69234 ssh_runner.go:195] Run: crio --version
	I0927 01:40:54.183974   69234 ssh_runner.go:195] Run: crio --version
	I0927 01:40:54.214440   69234 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0927 01:40:54.215714   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetIP
	I0927 01:40:54.218624   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:54.218975   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:54.219013   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:54.219180   69234 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0927 01:40:54.223450   69234 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
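	The bash one-liner above rewrites /etc/hosts so it ends up with exactly one `host.minikube.internal` entry. The same filter-then-append step could look like this in Go (a sketch only; the path, IP and hostname come straight from the log):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line for the given hostname and
// appends a fresh "ip<TAB>hostname" entry, mirroring the shell pipeline.
func ensureHostsEntry(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // old entry, rewritten below
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, hostname))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```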
	I0927 01:40:54.236761   69234 kubeadm.go:883] updating cluster {Name:embed-certs-245911 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.1 ClusterName:embed-certs-245911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.158 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 01:40:54.236923   69234 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 01:40:54.236989   69234 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 01:40:54.276635   69234 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0927 01:40:54.276708   69234 ssh_runner.go:195] Run: which lz4
	I0927 01:40:54.281055   69234 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0927 01:40:54.285439   69234 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0927 01:40:54.285472   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0927 01:40:52.824650   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .Start
	I0927 01:40:52.824802   69333 main.go:141] libmachine: (old-k8s-version-612261) Ensuring networks are active...
	I0927 01:40:52.825590   69333 main.go:141] libmachine: (old-k8s-version-612261) Ensuring network default is active
	I0927 01:40:52.825908   69333 main.go:141] libmachine: (old-k8s-version-612261) Ensuring network mk-old-k8s-version-612261 is active
	I0927 01:40:52.826326   69333 main.go:141] libmachine: (old-k8s-version-612261) Getting domain xml...
	I0927 01:40:52.827108   69333 main.go:141] libmachine: (old-k8s-version-612261) Creating domain...
	I0927 01:40:54.071322   69333 main.go:141] libmachine: (old-k8s-version-612261) Waiting to get IP...
	I0927 01:40:54.072357   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:40:54.072756   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:40:54.072821   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:40:54.072738   70279 retry.go:31] will retry after 264.648837ms: waiting for machine to come up
	I0927 01:40:54.339366   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:40:54.339799   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:40:54.339827   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:40:54.339731   70279 retry.go:31] will retry after 343.432635ms: waiting for machine to come up
	I0927 01:40:54.685260   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:40:54.685746   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:40:54.685780   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:40:54.685714   70279 retry.go:31] will retry after 455.276623ms: waiting for machine to come up
	I0927 01:40:55.142206   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:40:55.142679   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:40:55.142708   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:40:55.142637   70279 retry.go:31] will retry after 419.074502ms: waiting for machine to come up
	I0927 01:40:55.563324   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:40:55.565342   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:40:55.565368   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:40:55.565287   70279 retry.go:31] will retry after 587.161471ms: waiting for machine to come up
	I0927 01:40:56.154584   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:40:56.155182   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:40:56.155220   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:40:56.155109   70279 retry.go:31] will retry after 782.426926ms: waiting for machine to come up
	I0927 01:40:56.938784   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:40:56.939201   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:40:56.939228   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:40:56.939132   70279 retry.go:31] will retry after 781.231902ms: waiting for machine to come up
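	The old-k8s-version-612261 machine is still booting here, so retry.go keeps polling the libvirt DHCP leases with a growing, jittered delay until an IP shows up ("will retry after ...: waiting for machine to come up"). A generic sketch of that retry loop, under stated assumptions: the jitter and doubling factors are invented, and lookupIP (with its placeholder address) stands in for the real lease lookup:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("unable to find current IP address of domain")

// lookupIP is a stand-in for querying the libvirt DHCP lease table.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 { // pretend the VM needs a few polls to get a lease
		return "", errNoIP
	}
	return "192.168.61.2", nil // placeholder address for the sketch
}

// waitForIP retries with a jittered, roughly doubling delay, similar in
// spirit to the "will retry after ..." lines in the log.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for attempt := 0; time.Now().Before(deadline); attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay *= 2
	}
	return "", fmt.Errorf("timed out waiting for machine IP")
}

func main() {
	ip, err := waitForIP(2 * time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("machine is up at", ip)
}
```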
	I0927 01:40:55.723619   69234 crio.go:462] duration metric: took 1.442589436s to copy over tarball
	I0927 01:40:55.723705   69234 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0927 01:40:57.775673   69234 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.051936146s)
	I0927 01:40:57.775697   69234 crio.go:469] duration metric: took 2.052045538s to extract the tarball
	I0927 01:40:57.775704   69234 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0927 01:40:57.812769   69234 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 01:40:57.853219   69234 crio.go:514] all images are preloaded for cri-o runtime.
	I0927 01:40:57.853240   69234 cache_images.go:84] Images are preloaded, skipping loading
	I0927 01:40:57.853248   69234 kubeadm.go:934] updating node { 192.168.39.158 8443 v1.31.1 crio true true} ...
	I0927 01:40:57.853354   69234 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-245911 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.158
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-245911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 01:40:57.853495   69234 ssh_runner.go:195] Run: crio config
	I0927 01:40:57.908273   69234 cni.go:84] Creating CNI manager for ""
	I0927 01:40:57.908301   69234 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:40:57.908322   69234 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 01:40:57.908356   69234 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.158 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-245911 NodeName:embed-certs-245911 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.158"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.158 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0927 01:40:57.908542   69234 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.158
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-245911"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.158
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.158"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0927 01:40:57.908613   69234 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 01:40:57.918923   69234 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 01:40:57.919021   69234 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0927 01:40:57.928576   69234 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0927 01:40:57.945515   69234 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 01:40:57.962239   69234 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0927 01:40:57.979722   69234 ssh_runner.go:195] Run: grep 192.168.39.158	control-plane.minikube.internal$ /etc/hosts
	I0927 01:40:57.983709   69234 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.158	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 01:40:57.996181   69234 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:40:58.119502   69234 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 01:40:58.137022   69234 certs.go:68] Setting up /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/embed-certs-245911 for IP: 192.168.39.158
	I0927 01:40:58.137048   69234 certs.go:194] generating shared ca certs ...
	I0927 01:40:58.137068   69234 certs.go:226] acquiring lock for ca certs: {Name:mkdfc5b7e93f77f5ae72cc653545624244421aa5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:40:58.137250   69234 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key
	I0927 01:40:58.137312   69234 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key
	I0927 01:40:58.137324   69234 certs.go:256] generating profile certs ...
	I0927 01:40:58.137444   69234 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/embed-certs-245911/client.key
	I0927 01:40:58.137522   69234 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/embed-certs-245911/apiserver.key.e289c840
	I0927 01:40:58.137574   69234 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/embed-certs-245911/proxy-client.key
	I0927 01:40:58.137731   69234 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem (1338 bytes)
	W0927 01:40:58.137774   69234 certs.go:480] ignoring /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138_empty.pem, impossibly tiny 0 bytes
	I0927 01:40:58.137787   69234 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem (1671 bytes)
	I0927 01:40:58.137819   69234 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem (1078 bytes)
	I0927 01:40:58.137850   69234 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem (1123 bytes)
	I0927 01:40:58.137883   69234 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem (1675 bytes)
	I0927 01:40:58.137928   69234 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem (1708 bytes)
	I0927 01:40:58.138551   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 01:40:58.179399   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0927 01:40:58.211297   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 01:40:58.245549   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0927 01:40:58.276837   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/embed-certs-245911/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0927 01:40:58.313750   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/embed-certs-245911/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0927 01:40:58.338145   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/embed-certs-245911/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 01:40:58.361373   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/embed-certs-245911/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0927 01:40:58.384790   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 01:40:58.407617   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem --> /usr/share/ca-certificates/22138.pem (1338 bytes)
	I0927 01:40:58.430621   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /usr/share/ca-certificates/221382.pem (1708 bytes)
	I0927 01:40:58.453382   69234 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 01:40:58.470177   69234 ssh_runner.go:195] Run: openssl version
	I0927 01:40:58.476280   69234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221382.pem && ln -fs /usr/share/ca-certificates/221382.pem /etc/ssl/certs/221382.pem"
	I0927 01:40:58.489039   69234 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221382.pem
	I0927 01:40:58.493726   69234 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 00:32 /usr/share/ca-certificates/221382.pem
	I0927 01:40:58.493780   69234 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221382.pem
	I0927 01:40:58.499856   69234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221382.pem /etc/ssl/certs/3ec20f2e.0"
	I0927 01:40:58.511032   69234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 01:40:58.521694   69234 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:40:58.525991   69234 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:16 /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:40:58.526031   69234 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:40:58.531619   69234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 01:40:58.542017   69234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22138.pem && ln -fs /usr/share/ca-certificates/22138.pem /etc/ssl/certs/22138.pem"
	I0927 01:40:58.552591   69234 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22138.pem
	I0927 01:40:58.557047   69234 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 00:32 /usr/share/ca-certificates/22138.pem
	I0927 01:40:58.557086   69234 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22138.pem
	I0927 01:40:58.562874   69234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22138.pem /etc/ssl/certs/51391683.0"
	I0927 01:40:58.574052   69234 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 01:40:58.578537   69234 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0927 01:40:58.584323   69234 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0927 01:40:58.590033   69234 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0927 01:40:58.596013   69234 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0927 01:40:58.601572   69234 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0927 01:40:58.606980   69234 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
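	Each `openssl x509 -noout -checkend 86400` call above simply asks whether the certificate is still valid 24 hours from now. The equivalent check with Go's crypto/x509 (a sketch; the path is one of the certs named in the log):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires inside
// the given window, i.e. the condition that `-checkend` tests for.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	if soon {
		fmt.Println("certificate expires within 24h, regenerate it")
	} else {
		fmt.Println("certificate is valid for at least another day")
	}
}
```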
	I0927 01:40:58.612554   69234 kubeadm.go:392] StartCluster: {Name:embed-certs-245911 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:embed-certs-245911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.158 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 01:40:58.612648   69234 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0927 01:40:58.612704   69234 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 01:40:58.649228   69234 cri.go:89] found id: ""
	I0927 01:40:58.649306   69234 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0927 01:40:58.661599   69234 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0927 01:40:58.661628   69234 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0927 01:40:58.661688   69234 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0927 01:40:58.671907   69234 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0927 01:40:58.672851   69234 kubeconfig.go:125] found "embed-certs-245911" server: "https://192.168.39.158:8443"
	I0927 01:40:58.674753   69234 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0927 01:40:58.684614   69234 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.158
	I0927 01:40:58.684643   69234 kubeadm.go:1160] stopping kube-system containers ...
	I0927 01:40:58.684652   69234 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0927 01:40:58.684715   69234 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 01:40:58.726714   69234 cri.go:89] found id: ""
	I0927 01:40:58.726816   69234 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0927 01:40:58.743675   69234 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 01:40:58.753456   69234 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 01:40:58.753485   69234 kubeadm.go:157] found existing configuration files:
	
	I0927 01:40:58.753535   69234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 01:40:58.762724   69234 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 01:40:58.762821   69234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 01:40:58.772558   69234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 01:40:58.781732   69234 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 01:40:58.781790   69234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 01:40:58.791109   69234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 01:40:58.800066   69234 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 01:40:58.800127   69234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 01:40:58.809338   69234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 01:40:58.818214   69234 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 01:40:58.818260   69234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 01:40:58.828049   69234 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 01:40:58.837606   69234 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:40:58.942395   69234 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:40:59.758951   69234 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:40:59.966377   69234 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:00.036702   69234 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:00.126663   69234 api_server.go:52] waiting for apiserver process to appear ...
	I0927 01:41:00.126743   69234 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:40:57.722147   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:40:57.722637   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:40:57.722657   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:40:57.722593   70279 retry.go:31] will retry after 1.223133601s: waiting for machine to come up
	I0927 01:40:58.947836   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:40:58.948362   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:40:58.948388   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:40:58.948326   70279 retry.go:31] will retry after 1.155368003s: waiting for machine to come up
	I0927 01:41:00.105812   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:00.106288   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:41:00.106356   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:41:00.106280   70279 retry.go:31] will retry after 2.324904017s: waiting for machine to come up
	I0927 01:41:00.627542   69234 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:01.126971   69234 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:01.626940   69234 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:02.127478   69234 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:02.176746   69234 api_server.go:72] duration metric: took 2.050081672s to wait for apiserver process to appear ...
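	Before probing /healthz, api_server.go waits for the kube-apiserver process itself, re-running `pgrep -xnf kube-apiserver.*minikube.*` until it succeeds. A stripped-down version of that wait loop (the half-second interval and the timeout are assumptions):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep until the pattern matches or the timeout hits,
// like the repeated "sudo pgrep -xnf kube-apiserver..." lines above.
func waitForProcess(pattern string, interval, timeout time.Duration) error {
	start := time.Now()
	for time.Since(start) < timeout {
		if err := exec.Command("sudo", "pgrep", "-xnf", pattern).Run(); err == nil {
			fmt.Printf("took %s to wait for apiserver process to appear\n", time.Since(start))
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("process matching %q never appeared within %s", pattern, timeout)
}

func main() {
	if err := waitForProcess("kube-apiserver.*minikube.*", 500*time.Millisecond, 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```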
	I0927 01:41:02.176775   69234 api_server.go:88] waiting for apiserver healthz status ...
	I0927 01:41:02.176798   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:41:02.177442   69234 api_server.go:269] stopped: https://192.168.39.158:8443/healthz: Get "https://192.168.39.158:8443/healthz": dial tcp 192.168.39.158:8443: connect: connection refused
	I0927 01:41:02.677488   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:41:04.824718   69234 api_server.go:279] https://192.168.39.158:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0927 01:41:04.824748   69234 api_server.go:103] status: https://192.168.39.158:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0927 01:41:04.824763   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:41:04.850790   69234 api_server.go:279] https://192.168.39.158:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0927 01:41:04.850820   69234 api_server.go:103] status: https://192.168.39.158:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0927 01:41:05.177167   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:41:05.201660   69234 api_server.go:279] https://192.168.39.158:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:41:05.201696   69234 api_server.go:103] status: https://192.168.39.158:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:41:02.432597   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:02.433066   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:41:02.433096   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:41:02.433026   70279 retry.go:31] will retry after 2.598889471s: waiting for machine to come up
	I0927 01:41:05.034614   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:05.035001   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:41:05.035023   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:41:05.034973   70279 retry.go:31] will retry after 3.064943329s: waiting for machine to come up
	I0927 01:41:05.677514   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:41:05.683506   69234 api_server.go:279] https://192.168.39.158:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:41:05.683543   69234 api_server.go:103] status: https://192.168.39.158:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:41:06.177064   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:41:06.181304   69234 api_server.go:279] https://192.168.39.158:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:41:06.181339   69234 api_server.go:103] status: https://192.168.39.158:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:41:06.676872   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:41:06.681269   69234 api_server.go:279] https://192.168.39.158:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:41:06.681297   69234 api_server.go:103] status: https://192.168.39.158:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:41:07.176902   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:41:07.181397   69234 api_server.go:279] https://192.168.39.158:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:41:07.181425   69234 api_server.go:103] status: https://192.168.39.158:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:41:07.677457   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:41:07.682057   69234 api_server.go:279] https://192.168.39.158:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:41:07.682087   69234 api_server.go:103] status: https://192.168.39.158:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:41:08.177696   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:41:08.181752   69234 api_server.go:279] https://192.168.39.158:8443/healthz returned 200:
	ok
	I0927 01:41:08.188257   69234 api_server.go:141] control plane version: v1.31.1
	I0927 01:41:08.188278   69234 api_server.go:131] duration metric: took 6.011495616s to wait for apiserver health ...
	I0927 01:41:08.188285   69234 cni.go:84] Creating CNI manager for ""
	I0927 01:41:08.188291   69234 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:41:08.190206   69234 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0927 01:41:08.191584   69234 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0927 01:41:08.202370   69234 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
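(The 496-byte file copied above is minikube's bridge CNI configuration. A quick, illustrative way to confirm it landed and is the only active config, assuming the profile name embed-certs-245911 taken from the surrounding log:

	# Sketch: list and inspect the CNI config minikube just wrote
	minikube ssh -p embed-certs-245911 -- sudo ls -l /etc/cni/net.d/
	minikube ssh -p embed-certs-245911 -- sudo cat /etc/cni/net.d/1-k8s.conflist
)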
	I0927 01:41:08.224843   69234 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 01:41:08.234247   69234 system_pods.go:59] 8 kube-system pods found
	I0927 01:41:08.234275   69234 system_pods.go:61] "coredns-7c65d6cfc9-f2vxv" [3eed941e-e943-490b-a0a8-d543cec18a89] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0927 01:41:08.234284   69234 system_pods.go:61] "etcd-embed-certs-245911" [f88581ff-3747-4fe5-a4a2-6259c3b4554e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0927 01:41:08.234291   69234 system_pods.go:61] "kube-apiserver-embed-certs-245911" [3f1efb25-6e30-4d5f-baba-3e98b6fe531e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0927 01:41:08.234298   69234 system_pods.go:61] "kube-controller-manager-embed-certs-245911" [a624fc8d-fbe3-4b63-8a88-5f8069b21095] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0927 01:41:08.234302   69234 system_pods.go:61] "kube-proxy-pjf8v" [a1b76e67-803a-43fe-bff6-a4b0ddc246a1] Running
	I0927 01:41:08.234309   69234 system_pods.go:61] "kube-scheduler-embed-certs-245911" [0f7c146b-e2b7-4110-b010-f4599d0da410] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0927 01:41:08.234313   69234 system_pods.go:61] "metrics-server-6867b74b74-k8mdf" [6d1e68fb-5187-4bc6-abdb-44f598e351c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 01:41:08.234317   69234 system_pods.go:61] "storage-provisioner" [dc0a7806-bee8-4127-8218-b2e48fa8500b] Running
	I0927 01:41:08.234323   69234 system_pods.go:74] duration metric: took 9.462578ms to wait for pod list to return data ...
	I0927 01:41:08.234333   69234 node_conditions.go:102] verifying NodePressure condition ...
	I0927 01:41:08.238433   69234 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 01:41:08.238455   69234 node_conditions.go:123] node cpu capacity is 2
	I0927 01:41:08.238468   69234 node_conditions.go:105] duration metric: took 4.128775ms to run NodePressure ...
	I0927 01:41:08.238483   69234 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:08.502161   69234 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0927 01:41:08.506267   69234 kubeadm.go:739] kubelet initialised
	I0927 01:41:08.506290   69234 kubeadm.go:740] duration metric: took 4.099692ms waiting for restarted kubelet to initialise ...
	I0927 01:41:08.506299   69234 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:41:08.510964   69234 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-f2vxv" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:08.515262   69234 pod_ready.go:98] node "embed-certs-245911" hosting pod "coredns-7c65d6cfc9-f2vxv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-245911" has status "Ready":"False"
	I0927 01:41:08.515279   69234 pod_ready.go:82] duration metric: took 4.294632ms for pod "coredns-7c65d6cfc9-f2vxv" in "kube-system" namespace to be "Ready" ...
	E0927 01:41:08.515286   69234 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-245911" hosting pod "coredns-7c65d6cfc9-f2vxv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-245911" has status "Ready":"False"
	I0927 01:41:08.515298   69234 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:08.519627   69234 pod_ready.go:98] node "embed-certs-245911" hosting pod "etcd-embed-certs-245911" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-245911" has status "Ready":"False"
	I0927 01:41:08.519641   69234 pod_ready.go:82] duration metric: took 4.313975ms for pod "etcd-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	E0927 01:41:08.519648   69234 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-245911" hosting pod "etcd-embed-certs-245911" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-245911" has status "Ready":"False"
	I0927 01:41:08.519653   69234 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:08.523152   69234 pod_ready.go:98] node "embed-certs-245911" hosting pod "kube-apiserver-embed-certs-245911" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-245911" has status "Ready":"False"
	I0927 01:41:08.523165   69234 pod_ready.go:82] duration metric: took 3.50412ms for pod "kube-apiserver-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	E0927 01:41:08.523177   69234 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-245911" hosting pod "kube-apiserver-embed-certs-245911" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-245911" has status "Ready":"False"
	I0927 01:41:08.523186   69234 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:08.628811   69234 pod_ready.go:98] node "embed-certs-245911" hosting pod "kube-controller-manager-embed-certs-245911" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-245911" has status "Ready":"False"
	I0927 01:41:08.628847   69234 pod_ready.go:82] duration metric: took 105.648464ms for pod "kube-controller-manager-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	E0927 01:41:08.628859   69234 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-245911" hosting pod "kube-controller-manager-embed-certs-245911" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-245911" has status "Ready":"False"
	I0927 01:41:08.628868   69234 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-pjf8v" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:09.027358   69234 pod_ready.go:93] pod "kube-proxy-pjf8v" in "kube-system" namespace has status "Ready":"True"
	I0927 01:41:09.027383   69234 pod_ready.go:82] duration metric: took 398.507928ms for pod "kube-proxy-pjf8v" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:09.027393   69234 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:08.101834   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:08.102324   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:41:08.102358   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:41:08.102283   70279 retry.go:31] will retry after 4.242138543s: waiting for machine to come up
	I0927 01:41:13.708458   69534 start.go:364] duration metric: took 3m25.271525685s to acquireMachinesLock for "default-k8s-diff-port-368295"
	I0927 01:41:13.708525   69534 start.go:96] Skipping create...Using existing machine configuration
	I0927 01:41:13.708533   69534 fix.go:54] fixHost starting: 
	I0927 01:41:13.708923   69534 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:41:13.708979   69534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:41:13.726306   69534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46399
	I0927 01:41:13.726732   69534 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:41:13.727228   69534 main.go:141] libmachine: Using API Version  1
	I0927 01:41:13.727252   69534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:41:13.727579   69534 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:41:13.727781   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:41:13.727975   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetState
	I0927 01:41:13.729621   69534 fix.go:112] recreateIfNeeded on default-k8s-diff-port-368295: state=Stopped err=<nil>
	I0927 01:41:13.729657   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	W0927 01:41:13.729826   69534 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 01:41:13.731730   69534 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-368295" ...
	I0927 01:41:12.347378   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.347831   69333 main.go:141] libmachine: (old-k8s-version-612261) Found IP for machine: 192.168.72.129
	I0927 01:41:12.347855   69333 main.go:141] libmachine: (old-k8s-version-612261) Reserving static IP address...
	I0927 01:41:12.347872   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has current primary IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.348468   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "old-k8s-version-612261", mac: "52:54:00:f1:a6:2e", ip: "192.168.72.129"} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:12.348494   69333 main.go:141] libmachine: (old-k8s-version-612261) Reserved static IP address: 192.168.72.129
	I0927 01:41:12.348507   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | skip adding static IP to network mk-old-k8s-version-612261 - found existing host DHCP lease matching {name: "old-k8s-version-612261", mac: "52:54:00:f1:a6:2e", ip: "192.168.72.129"}
	I0927 01:41:12.348518   69333 main.go:141] libmachine: (old-k8s-version-612261) Waiting for SSH to be available...
	I0927 01:41:12.348537   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | Getting to WaitForSSH function...
	I0927 01:41:12.350917   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.351287   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:12.351335   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.351464   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | Using SSH client type: external
	I0927 01:41:12.351485   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | Using SSH private key: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/old-k8s-version-612261/id_rsa (-rw-------)
	I0927 01:41:12.351516   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.129 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19711-14935/.minikube/machines/old-k8s-version-612261/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 01:41:12.351525   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | About to run SSH command:
	I0927 01:41:12.351533   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | exit 0
	I0927 01:41:12.471347   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | SSH cmd err, output: <nil>: 
	I0927 01:41:12.471724   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetConfigRaw
	I0927 01:41:12.472352   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetIP
	I0927 01:41:12.474886   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.475299   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:12.475340   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.475628   69333 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/config.json ...
	I0927 01:41:12.475857   69333 machine.go:93] provisionDockerMachine start ...
	I0927 01:41:12.475879   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	I0927 01:41:12.476115   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:12.478594   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.478918   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:12.478945   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.479126   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:41:12.479340   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:12.479536   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:12.479695   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:41:12.479859   69333 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:12.480093   69333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I0927 01:41:12.480116   69333 main.go:141] libmachine: About to run SSH command:
	hostname
	I0927 01:41:12.579536   69333 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0927 01:41:12.579562   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetMachineName
	I0927 01:41:12.579785   69333 buildroot.go:166] provisioning hostname "old-k8s-version-612261"
	I0927 01:41:12.579798   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetMachineName
	I0927 01:41:12.579965   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:12.582679   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.583001   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:12.583027   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.583166   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:41:12.583372   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:12.583562   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:12.583727   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:41:12.583924   69333 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:12.584169   69333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I0927 01:41:12.584187   69333 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-612261 && echo "old-k8s-version-612261" | sudo tee /etc/hostname
	I0927 01:41:12.702223   69333 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-612261
	
	I0927 01:41:12.702252   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:12.705201   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.705564   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:12.705601   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.705817   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:41:12.706012   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:12.706154   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:12.706344   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:41:12.706538   69333 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:12.706720   69333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I0927 01:41:12.706738   69333 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-612261' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-612261/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-612261' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 01:41:12.816316   69333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 01:41:12.816343   69333 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19711-14935/.minikube CaCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19711-14935/.minikube}
	I0927 01:41:12.816376   69333 buildroot.go:174] setting up certificates
	I0927 01:41:12.816386   69333 provision.go:84] configureAuth start
	I0927 01:41:12.816394   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetMachineName
	I0927 01:41:12.816678   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetIP
	I0927 01:41:12.819190   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.819487   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:12.819511   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.819696   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:12.821843   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.822166   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:12.822203   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.822382   69333 provision.go:143] copyHostCerts
	I0927 01:41:12.822453   69333 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem, removing ...
	I0927 01:41:12.822466   69333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem
	I0927 01:41:12.822533   69333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem (1675 bytes)
	I0927 01:41:12.822641   69333 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem, removing ...
	I0927 01:41:12.822650   69333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem
	I0927 01:41:12.822682   69333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem (1078 bytes)
	I0927 01:41:12.822756   69333 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem, removing ...
	I0927 01:41:12.822766   69333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem
	I0927 01:41:12.822792   69333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem (1123 bytes)
	I0927 01:41:12.822859   69333 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-612261 san=[127.0.0.1 192.168.72.129 localhost minikube old-k8s-version-612261]
	I0927 01:41:13.054632   69333 provision.go:177] copyRemoteCerts
	I0927 01:41:13.054706   69333 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 01:41:13.054740   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:13.057895   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.058296   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:13.058329   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.058478   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:41:13.058696   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:13.058907   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:41:13.059062   69333 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/old-k8s-version-612261/id_rsa Username:docker}
	I0927 01:41:13.146378   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0927 01:41:13.176435   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0927 01:41:13.208974   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0927 01:41:13.240179   69333 provision.go:87] duration metric: took 423.77487ms to configureAuth
	I0927 01:41:13.240211   69333 buildroot.go:189] setting minikube options for container-runtime
	I0927 01:41:13.240412   69333 config.go:182] Loaded profile config "old-k8s-version-612261": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0927 01:41:13.240498   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:13.243514   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.243963   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:13.243991   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.244174   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:41:13.244419   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:13.244641   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:13.244838   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:41:13.245039   69333 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:13.245263   69333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I0927 01:41:13.245284   69333 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 01:41:13.476519   69333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 01:41:13.476545   69333 machine.go:96] duration metric: took 1.000674334s to provisionDockerMachine
	I0927 01:41:13.476558   69333 start.go:293] postStartSetup for "old-k8s-version-612261" (driver="kvm2")
	I0927 01:41:13.476574   69333 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 01:41:13.476593   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	I0927 01:41:13.476914   69333 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 01:41:13.476942   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:13.479326   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.479662   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:13.479686   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.479835   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:41:13.480027   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:13.480182   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:41:13.480337   69333 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/old-k8s-version-612261/id_rsa Username:docker}
	I0927 01:41:13.563321   69333 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 01:41:13.567844   69333 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 01:41:13.567867   69333 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/addons for local assets ...
	I0927 01:41:13.567929   69333 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/files for local assets ...
	I0927 01:41:13.568012   69333 filesync.go:149] local asset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> 221382.pem in /etc/ssl/certs
	I0927 01:41:13.568109   69333 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 01:41:13.578453   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /etc/ssl/certs/221382.pem (1708 bytes)
	I0927 01:41:13.603888   69333 start.go:296] duration metric: took 127.316429ms for postStartSetup
	I0927 01:41:13.603924   69333 fix.go:56] duration metric: took 20.803606957s for fixHost
	I0927 01:41:13.603948   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:13.606500   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.606921   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:13.606949   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.607189   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:41:13.607419   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:13.607600   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:13.607726   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:41:13.608048   69333 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:13.608234   69333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I0927 01:41:13.608245   69333 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 01:41:13.708261   69333 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727401273.683707076
	
	I0927 01:41:13.708284   69333 fix.go:216] guest clock: 1727401273.683707076
	I0927 01:41:13.708293   69333 fix.go:229] Guest: 2024-09-27 01:41:13.683707076 +0000 UTC Remote: 2024-09-27 01:41:13.603929237 +0000 UTC m=+226.291347697 (delta=79.777839ms)
	I0927 01:41:13.708348   69333 fix.go:200] guest clock delta is within tolerance: 79.777839ms
	I0927 01:41:13.708357   69333 start.go:83] releasing machines lock for "old-k8s-version-612261", held for 20.90807118s
	I0927 01:41:13.708392   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	I0927 01:41:13.708665   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetIP
	I0927 01:41:13.711474   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.711873   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:13.711905   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.712035   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	I0927 01:41:13.712569   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	I0927 01:41:13.712748   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	I0927 01:41:13.712832   69333 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 01:41:13.712878   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:13.712949   69333 ssh_runner.go:195] Run: cat /version.json
	I0927 01:41:13.712971   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:13.715681   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.715820   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.716024   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:13.716043   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.716200   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:13.716225   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.716235   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:41:13.716370   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:41:13.716487   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:13.716548   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:13.716622   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:41:13.716728   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:41:13.716779   69333 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/old-k8s-version-612261/id_rsa Username:docker}
	I0927 01:41:13.716859   69333 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/old-k8s-version-612261/id_rsa Username:docker}
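
Note: the two sshutil.go lines above carry everything the harness needs to reach the node: IP 192.168.72.129, port 22, the per-machine id_rsa key and the docker user. As a rough, hedged sketch only (not minikube's actual sshutil implementation), opening such a connection with golang.org/x/crypto/ssh could look like the following; dialNode is a hypothetical helper name.

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // dialNode is a hypothetical helper mirroring the fields logged by sshutil.go:
    // IP, Port, SSHKeyPath and Username.
    func dialNode(ip string, port int, keyPath, user string) (*ssh.Client, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return nil, err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return nil, err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM only
        }
        return ssh.Dial("tcp", fmt.Sprintf("%s:%d", ip, port), cfg)
    }

    func main() {
        client, err := dialNode("192.168.72.129", 22,
            "/home/jenkins/minikube-integration/19711-14935/.minikube/machines/old-k8s-version-612261/id_rsa", "docker")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
    }

Host-key checking is skipped in this sketch because the target is a disposable test VM; a production client should verify host keys.
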
	I0927 01:41:13.826638   69333 ssh_runner.go:195] Run: systemctl --version
	I0927 01:41:13.832901   69333 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 01:41:13.986132   69333 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 01:41:13.992644   69333 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 01:41:13.992728   69333 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 01:41:14.008962   69333 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0927 01:41:14.008991   69333 start.go:495] detecting cgroup driver to use...
	I0927 01:41:14.009051   69333 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 01:41:14.025047   69333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 01:41:14.040807   69333 docker.go:217] disabling cri-docker service (if available) ...
	I0927 01:41:14.040857   69333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 01:41:14.055972   69333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 01:41:14.072654   69333 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 01:41:14.210869   69333 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 01:41:14.403536   69333 docker.go:233] disabling docker service ...
	I0927 01:41:14.403596   69333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 01:41:14.421549   69333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 01:41:14.436288   69333 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 01:41:14.569634   69333 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 01:41:14.701517   69333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 01:41:14.716794   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 01:41:14.740622   69333 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0927 01:41:14.740685   69333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:14.756563   69333 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 01:41:14.756626   69333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:14.768952   69333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:14.781314   69333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
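
Note: the sed edits above pin pause_image to registry.k8s.io/pause:3.2 and switch CRI-O to the cgroupfs cgroup manager (with conmon in the "pod" cgroup) inside /etc/crio/crio.conf.d/02-crio.conf. Purely as an illustration of the same line-rewrite technique, and not the code minikube actually runs, a small Go sketch using regexp:

    package main

    import (
        "fmt"
        "regexp"
    )

    // rewriteCrioConf mimics the sed edits from the log: force pause_image and
    // cgroup_manager to fixed values in the config text.
    func rewriteCrioConf(conf string) string {
        pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
        conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.2"`)
        conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        return conf
    }

    func main() {
        in := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
        fmt.Print(rewriteCrioConf(in))
    }

The (?m) flag makes ^ and $ match per line, which mirrors the line-oriented behaviour of the sed s|...|...| commands in the log.
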
	I0927 01:41:14.793578   69333 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 01:41:14.806302   69333 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 01:41:14.822967   69333 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0927 01:41:14.823036   69333 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0927 01:41:14.837673   69333 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
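
Note: the block above is a fallback path: probing net.bridge.bridge-nf-call-iptables fails with status 255 because the br_netfilter module is not loaded yet, so the module is loaded and IPv4 forwarding is enabled afterwards. A minimal Go sketch of that control flow, assuming passwordless sudo on the guest (an assumption, not shown in the log):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Probe the bridge netfilter sysctl; on a fresh guest it may not exist yet.
        if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
            // Missing /proc entry: load the br_netfilter module, as the log does.
            if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
                log.Fatalf("modprobe br_netfilter: %v", err)
            }
        }
        // Enable IPv4 forwarding either way.
        if err := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
            log.Fatalf("enable ip_forward: %v", err)
        }
    }
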
	I0927 01:41:14.848486   69333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:41:14.988181   69333 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0927 01:41:15.100581   69333 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 01:41:15.100664   69333 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 01:41:15.105816   69333 start.go:563] Will wait 60s for crictl version
	I0927 01:41:15.105883   69333 ssh_runner.go:195] Run: which crictl
	I0927 01:41:15.110375   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 01:41:15.154944   69333 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0927 01:41:15.155039   69333 ssh_runner.go:195] Run: crio --version
	I0927 01:41:15.188172   69333 ssh_runner.go:195] Run: crio --version
	I0927 01:41:15.220410   69333 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0927 01:41:11.033747   69234 pod_ready.go:103] pod "kube-scheduler-embed-certs-245911" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:13.038930   69234 pod_ready.go:103] pod "kube-scheduler-embed-certs-245911" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:15.035610   69234 pod_ready.go:93] pod "kube-scheduler-embed-certs-245911" in "kube-system" namespace has status "Ready":"True"
	I0927 01:41:15.035636   69234 pod_ready.go:82] duration metric: took 6.008237321s for pod "kube-scheduler-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:15.035645   69234 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:15.221508   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetIP
	I0927 01:41:15.224474   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:15.224855   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:15.224884   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:15.225126   69333 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0927 01:41:15.229555   69333 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 01:41:15.244862   69333 kubeadm.go:883] updating cluster {Name:old-k8s-version-612261 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-612261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.129 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 01:41:15.245007   69333 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0927 01:41:15.245070   69333 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 01:41:15.298422   69333 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0927 01:41:15.298501   69333 ssh_runner.go:195] Run: which lz4
	I0927 01:41:15.302771   69333 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0927 01:41:15.307360   69333 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0927 01:41:15.307398   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0927 01:41:17.053272   69333 crio.go:462] duration metric: took 1.750548806s to copy over tarball
	I0927 01:41:17.053354   69333 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
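
Note: the lines above show the preload path: crictl finds no v1.20.0 images, /preloaded.tar.lz4 does not exist on the node, so the ~473 MB cached tarball is copied over and unpacked into /var with an lz4-compressed tar. A hedged sketch of the same check-then-extract sequence; the local file copy here stands in for the scp step, and the cache path is taken from the log:

    package main

    import (
        "io"
        "log"
        "os"
        "os/exec"
    )

    const (
        cached = "/home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4"
        target = "/preloaded.tar.lz4"
    )

    func main() {
        // Transfer the preload tarball only if it is not already on the node.
        if _, err := os.Stat(target); os.IsNotExist(err) {
            src, err := os.Open(cached)
            if err != nil {
                log.Fatal(err)
            }
            defer src.Close()
            dst, err := os.Create(target)
            if err != nil {
                log.Fatal(err)
            }
            if _, err := io.Copy(dst, src); err != nil {
                log.Fatal(err)
            }
            dst.Close()
        }
        // Same extraction command as in the log (lz4-compressed tar into /var).
        cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", target)
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("extract: %v\n%s", err, out)
        }
    }
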
	I0927 01:41:13.732810   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .Start
	I0927 01:41:13.732979   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Ensuring networks are active...
	I0927 01:41:13.733749   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Ensuring network default is active
	I0927 01:41:13.734076   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Ensuring network mk-default-k8s-diff-port-368295 is active
	I0927 01:41:13.734425   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Getting domain xml...
	I0927 01:41:13.734997   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Creating domain...
	I0927 01:41:15.073415   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting to get IP...
	I0927 01:41:15.074278   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:15.074774   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:15.074850   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:15.074757   70444 retry.go:31] will retry after 231.356774ms: waiting for machine to come up
	I0927 01:41:15.308474   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:15.309030   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:15.309058   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:15.308989   70444 retry.go:31] will retry after 252.762152ms: waiting for machine to come up
	I0927 01:41:15.563638   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:15.564173   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:15.564212   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:15.564130   70444 retry.go:31] will retry after 341.067908ms: waiting for machine to come up
	I0927 01:41:15.906735   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:15.907138   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:15.907168   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:15.907091   70444 retry.go:31] will retry after 385.816363ms: waiting for machine to come up
	I0927 01:41:16.294523   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:16.295246   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:16.295268   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:16.295192   70444 retry.go:31] will retry after 575.812339ms: waiting for machine to come up
	I0927 01:41:16.873050   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:16.873574   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:16.873601   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:16.873520   70444 retry.go:31] will retry after 661.914855ms: waiting for machine to come up
	I0927 01:41:17.537039   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:17.537516   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:17.537544   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:17.537467   70444 retry.go:31] will retry after 959.195147ms: waiting for machine to come up
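
Note: the default-k8s-diff-port-368295 lines above poll libvirt's DHCP leases for the domain's MAC address, sleeping a growing, jittered interval between attempts (231ms, 252ms, 341ms, ...). A minimal sketch of that retry-until-IP pattern follows; lookupIP is a hypothetical stand-in, and the exact backoff and jitter of minikube's retry.go are not reproduced:

    package main

    import (
        "errors"
        "fmt"
        "log"
        "math/rand"
        "time"
    )

    // lookupIP is a hypothetical stand-in for querying the hypervisor's DHCP
    // leases for the domain's MAC; it fails until the guest has an address.
    func lookupIP() (string, error) {
        return "", errors.New("unable to find current IP address")
    }

    func waitForIP(timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 200 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(); err == nil {
                return ip, nil
            }
            // Jittered, growing delay, loosely matching the "will retry after ..." lines.
            sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
            log.Printf("will retry after %v: waiting for machine to come up", sleep)
            time.Sleep(sleep)
            delay += delay / 2
        }
        return "", fmt.Errorf("timed out after %v waiting for an IP", timeout)
    }

    func main() {
        if ip, err := waitForIP(2 * time.Second); err != nil {
            log.Println(err)
        } else {
            log.Println("got IP:", ip)
        }
    }
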
	I0927 01:41:17.043983   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:19.543159   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:20.066231   69333 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.012846531s)
	I0927 01:41:20.066257   69333 crio.go:469] duration metric: took 3.012954388s to extract the tarball
	I0927 01:41:20.066265   69333 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0927 01:41:20.112486   69333 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 01:41:20.152620   69333 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0927 01:41:20.152647   69333 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0927 01:41:20.152723   69333 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:41:20.152754   69333 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0927 01:41:20.152789   69333 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 01:41:20.152813   69333 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0927 01:41:20.152816   69333 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0927 01:41:20.152763   69333 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0927 01:41:20.152938   69333 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0927 01:41:20.152940   69333 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0927 01:41:20.154747   69333 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0927 01:41:20.154752   69333 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 01:41:20.154886   69333 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:41:20.154914   69333 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0927 01:41:20.154914   69333 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0927 01:41:20.154925   69333 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0927 01:41:20.154930   69333 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0927 01:41:20.154934   69333 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0927 01:41:20.316172   69333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0927 01:41:20.316352   69333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0927 01:41:20.319986   69333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0927 01:41:20.331224   69333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 01:41:20.342010   69333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0927 01:41:20.355732   69333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0927 01:41:20.355739   69333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0927 01:41:20.446420   69333 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0927 01:41:20.446477   69333 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0927 01:41:20.446529   69333 ssh_runner.go:195] Run: which crictl
	I0927 01:41:20.469134   69333 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0927 01:41:20.469183   69333 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0927 01:41:20.469231   69333 ssh_runner.go:195] Run: which crictl
	I0927 01:41:20.470229   69333 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0927 01:41:20.470264   69333 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0927 01:41:20.470310   69333 ssh_runner.go:195] Run: which crictl
	I0927 01:41:20.477952   69333 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0927 01:41:20.477991   69333 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 01:41:20.478034   69333 ssh_runner.go:195] Run: which crictl
	I0927 01:41:20.519340   69333 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0927 01:41:20.519391   69333 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0927 01:41:20.519454   69333 ssh_runner.go:195] Run: which crictl
	I0927 01:41:20.538237   69333 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0927 01:41:20.538256   69333 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0927 01:41:20.538293   69333 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0927 01:41:20.538298   69333 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0927 01:41:20.538338   69333 ssh_runner.go:195] Run: which crictl
	I0927 01:41:20.538343   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0927 01:41:20.538338   69333 ssh_runner.go:195] Run: which crictl
	I0927 01:41:20.538343   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0927 01:41:20.538389   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0927 01:41:20.538438   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 01:41:20.538489   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0927 01:41:20.656448   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0927 01:41:20.656508   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0927 01:41:20.656542   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0927 01:41:20.656573   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0927 01:41:20.656635   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0927 01:41:20.656704   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0927 01:41:20.656740   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 01:41:20.818479   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0927 01:41:20.818494   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0927 01:41:20.818581   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 01:41:20.878325   69333 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0927 01:41:20.878480   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0927 01:41:20.878494   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0927 01:41:20.878585   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0927 01:41:20.885061   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0927 01:41:20.885168   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0927 01:41:20.898628   69333 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0927 01:41:20.994147   69333 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0927 01:41:20.994175   69333 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0927 01:41:20.994211   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0927 01:41:21.016210   69333 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0927 01:41:21.016289   69333 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0927 01:41:21.035051   69333 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0927 01:41:21.374949   69333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:41:21.520726   69333 cache_images.go:92] duration metric: took 1.368058485s to LoadCachedImages
	W0927 01:41:21.520817   69333 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
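
Note: the warning above is the end state of the LoadCachedImages pass: each required image is inspected in the container runtime, found missing ("needs transfer"), removed from the runtime index, and then a load from the on-disk cache is attempted; the cache files (e.g. kube-proxy_v1.20.0) do not exist either, so the images will be pulled during kubeadm init instead. A hedged per-image sketch of that decision; the cache layout (the tag's colon replaced by an underscore on disk) is inferred from the paths in the log:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    const cacheDir = "/home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64"

    // needsTransfer reports whether the image is absent from the container runtime,
    // mirroring the "needs transfer" checks in the log (podman image inspect).
    func needsTransfer(image string) bool {
        return exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Run() != nil
    }

    func main() {
        images := []string{"registry.k8s.io/kube-proxy:v1.20.0", "registry.k8s.io/pause:3.2"}
        for _, img := range images {
            if !needsTransfer(img) {
                continue // already present in the runtime's store
            }
            // Cache layout assumed from the log: the tag's ':' becomes '_' on disk.
            cached := filepath.Join(cacheDir, strings.ReplaceAll(img, ":", "_"))
            if _, err := os.Stat(cached); err != nil {
                fmt.Printf("unable to load cached image %s: %v\n", img, err)
                continue // minikube then falls back to pulling the image
            }
            fmt.Println("would load", cached, "into the runtime")
        }
    }
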
	I0927 01:41:21.520833   69333 kubeadm.go:934] updating node { 192.168.72.129 8443 v1.20.0 crio true true} ...
	I0927 01:41:21.520951   69333 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-612261 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.129
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-612261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 01:41:21.521035   69333 ssh_runner.go:195] Run: crio config
	I0927 01:41:21.571651   69333 cni.go:84] Creating CNI manager for ""
	I0927 01:41:21.571677   69333 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:41:21.571688   69333 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 01:41:21.571712   69333 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.129 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-612261 NodeName:old-k8s-version-612261 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.129"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.129 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0927 01:41:21.571882   69333 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.129
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-612261"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.129
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.129"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0927 01:41:21.571958   69333 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0927 01:41:21.582735   69333 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 01:41:21.582802   69333 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0927 01:41:21.593329   69333 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0927 01:41:21.615040   69333 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 01:41:21.636564   69333 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0927 01:41:21.657275   69333 ssh_runner.go:195] Run: grep 192.168.72.129	control-plane.minikube.internal$ /etc/hosts
	I0927 01:41:21.661675   69333 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.129	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
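
Note: the one-liner above makes the control-plane.minikube.internal mapping idempotent: any existing line for that host is filtered out of /etc/hosts and a fresh "IP<tab>host" entry is appended, then the result is copied back with sudo. A hedged Go sketch of the same rewrite, operating on a string for illustration rather than the real file:

    package main

    import (
        "fmt"
        "strings"
    )

    // ensureHostsEntry removes any stale line for the given host and appends the
    // new "IP<TAB>host" mapping, like the grep -v / echo pipeline in the log.
    func ensureHostsEntry(hosts, ip, host string) string {
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+host) {
                continue // drop the stale entry
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+host)
        return strings.Join(kept, "\n") + "\n"
    }

    func main() {
        in := "127.0.0.1\tlocalhost\n192.168.72.2\tcontrol-plane.minikube.internal\n"
        fmt.Print(ensureHostsEntry(in, "192.168.72.129", "control-plane.minikube.internal"))
    }
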
	I0927 01:41:21.674587   69333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:41:21.814300   69333 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 01:41:21.834133   69333 certs.go:68] Setting up /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261 for IP: 192.168.72.129
	I0927 01:41:21.834163   69333 certs.go:194] generating shared ca certs ...
	I0927 01:41:21.834182   69333 certs.go:226] acquiring lock for ca certs: {Name:mkdfc5b7e93f77f5ae72cc653545624244421aa5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:41:21.834380   69333 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key
	I0927 01:41:21.834437   69333 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key
	I0927 01:41:21.834450   69333 certs.go:256] generating profile certs ...
	I0927 01:41:21.834558   69333 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/client.key
	I0927 01:41:21.834630   69333 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/apiserver.key.a362196e
	I0927 01:41:21.834676   69333 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/proxy-client.key
	I0927 01:41:21.834819   69333 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem (1338 bytes)
	W0927 01:41:21.834859   69333 certs.go:480] ignoring /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138_empty.pem, impossibly tiny 0 bytes
	I0927 01:41:21.834873   69333 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem (1671 bytes)
	I0927 01:41:21.834904   69333 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem (1078 bytes)
	I0927 01:41:21.834937   69333 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem (1123 bytes)
	I0927 01:41:21.834973   69333 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem (1675 bytes)
	I0927 01:41:21.835023   69333 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem (1708 bytes)
	I0927 01:41:21.835864   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 01:41:21.866955   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0927 01:41:21.902991   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 01:41:21.928957   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0927 01:41:21.957505   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0927 01:41:21.984055   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0927 01:41:22.013191   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 01:41:22.041745   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0927 01:41:22.069680   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 01:41:22.104139   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem --> /usr/share/ca-certificates/22138.pem (1338 bytes)
	I0927 01:41:22.130348   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /usr/share/ca-certificates/221382.pem (1708 bytes)
	I0927 01:41:22.157976   69333 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 01:41:22.177818   69333 ssh_runner.go:195] Run: openssl version
	I0927 01:41:22.184389   69333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 01:41:22.196133   69333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:41:22.201047   69333 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:16 /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:41:22.201120   69333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:41:22.207245   69333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 01:41:22.219033   69333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22138.pem && ln -fs /usr/share/ca-certificates/22138.pem /etc/ssl/certs/22138.pem"
	I0927 01:41:22.230331   69333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22138.pem
	I0927 01:41:22.235000   69333 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 00:32 /usr/share/ca-certificates/22138.pem
	I0927 01:41:22.235054   69333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22138.pem
	I0927 01:41:22.240963   69333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22138.pem /etc/ssl/certs/51391683.0"
	I0927 01:41:22.252022   69333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221382.pem && ln -fs /usr/share/ca-certificates/221382.pem /etc/ssl/certs/221382.pem"
	I0927 01:41:22.263197   69333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221382.pem
	I0927 01:41:22.268023   69333 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 00:32 /usr/share/ca-certificates/221382.pem
	I0927 01:41:22.268100   69333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221382.pem
	I0927 01:41:22.274086   69333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221382.pem /etc/ssl/certs/3ec20f2e.0"
	I0927 01:41:22.285387   69333 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 01:41:22.290487   69333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0927 01:41:22.296953   69333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0927 01:41:22.303095   69333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0927 01:41:22.310001   69333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0927 01:41:22.316346   69333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0927 01:41:22.322559   69333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
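
Note: each openssl x509 -checkend 86400 call above asks whether a control-plane certificate is still valid for at least 24 hours; certificates about to expire would be regenerated. The equivalent check in Go's crypto/x509, as a sketch (the path used in main is one of the files checked in the log):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d,
    // the Go equivalent of `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("expires within 24h:", soon)
    }
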
	I0927 01:41:22.328931   69333 kubeadm.go:392] StartCluster: {Name:old-k8s-version-612261 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-612261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.129 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 01:41:22.329015   69333 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0927 01:41:22.329081   69333 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 01:41:18.498695   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:18.499234   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:18.499261   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:18.499187   70444 retry.go:31] will retry after 932.004828ms: waiting for machine to come up
	I0927 01:41:19.432487   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:19.432885   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:19.432912   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:19.432844   70444 retry.go:31] will retry after 1.595543978s: waiting for machine to come up
	I0927 01:41:21.030048   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:21.030572   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:21.030598   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:21.030526   70444 retry.go:31] will retry after 1.93010855s: waiting for machine to come up
	I0927 01:41:22.963833   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:22.964303   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:22.964334   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:22.964254   70444 retry.go:31] will retry after 2.81720725s: waiting for machine to come up
	I0927 01:41:21.757497   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:24.043965   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:22.368989   69333 cri.go:89] found id: ""
	I0927 01:41:22.369059   69333 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0927 01:41:22.379818   69333 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0927 01:41:22.379841   69333 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0927 01:41:22.379897   69333 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0927 01:41:22.392278   69333 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0927 01:41:22.393236   69333 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-612261" does not appear in /home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 01:41:22.393856   69333 kubeconfig.go:62] /home/jenkins/minikube-integration/19711-14935/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-612261" cluster setting kubeconfig missing "old-k8s-version-612261" context setting]
	I0927 01:41:22.394733   69333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/kubeconfig: {Name:mke01ed683bdb96463571316956510763878395f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:41:22.404625   69333 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0927 01:41:22.415376   69333 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.129
	I0927 01:41:22.415414   69333 kubeadm.go:1160] stopping kube-system containers ...
	I0927 01:41:22.415427   69333 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0927 01:41:22.415487   69333 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 01:41:22.452749   69333 cri.go:89] found id: ""
	I0927 01:41:22.452829   69333 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0927 01:41:22.469164   69333 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 01:41:22.480018   69333 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 01:41:22.480038   69333 kubeadm.go:157] found existing configuration files:
	
	I0927 01:41:22.480092   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 01:41:22.490501   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 01:41:22.490562   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 01:41:22.500330   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 01:41:22.509612   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 01:41:22.509681   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 01:41:22.520064   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 01:41:22.529864   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 01:41:22.529921   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 01:41:22.540563   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 01:41:22.556739   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 01:41:22.556797   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 01:41:22.572858   69333 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 01:41:22.583366   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:22.709007   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:23.468461   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:23.714890   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:23.865174   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
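
Note: because existing configuration files were found, the control plane is rebuilt phase by phase rather than with a full kubeadm init: certs, kubeconfig, kubelet-start, control-plane, etcd, each run against /var/tmp/minikube/kubeadm.yaml with the version-matched binaries in /var/lib/minikube/binaries/v1.20.0. A hedged sketch of driving those phases; the PATH handling is simplified here and the fallback PATH tail is an assumption:

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, phase := range phases {
            // PATH pins the version-matched kubeadm first, as the log's env PATH=... does.
            args := append([]string{"env", "PATH=/var/lib/minikube/binaries/v1.20.0:/usr/bin:/bin",
                "kubeadm", "init", "phase"}, phase...)
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            cmd := exec.Command("sudo", args...)
            if out, err := cmd.CombinedOutput(); err != nil {
                log.Fatalf("kubeadm init phase %v: %v\n%s", phase, err, out)
            }
        }
    }
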
	I0927 01:41:23.959048   69333 api_server.go:52] waiting for apiserver process to appear ...
	I0927 01:41:23.959140   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:24.460104   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:24.959462   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:25.460143   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:25.959473   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:26.460051   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:26.960121   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:25.784030   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:25.784429   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:25.784456   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:25.784393   70444 retry.go:31] will retry after 2.844872797s: waiting for machine to come up
	I0927 01:41:26.544176   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:29.042297   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:27.459491   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:27.959944   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:28.459636   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:28.959766   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:29.459410   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:29.959439   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:30.460176   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:30.959810   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:31.459492   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:31.959966   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
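
Note: the repeated pgrep lines are a roughly half-second poll waiting for a kube-apiserver process to appear once kubelet starts the static pods. A minimal sketch of that wait loop; the four-minute timeout is an assumption, since the log only shows the polling cadence:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "time"
    )

    // waitForAPIServerProcess polls pgrep until a kube-apiserver process started
    // by minikube shows up, roughly matching the half-second cadence in the log.
    func waitForAPIServerProcess(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("kube-apiserver process did not appear within %v", timeout)
    }

    func main() {
        if err := waitForAPIServerProcess(4 * time.Minute); err != nil {
            log.Fatal(err)
        }
    }
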
	I0927 01:41:28.632445   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:28.632905   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:28.632930   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:28.632866   70444 retry.go:31] will retry after 3.566248996s: waiting for machine to come up
	I0927 01:41:32.200424   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.200804   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Found IP for machine: 192.168.61.83
	I0927 01:41:32.200832   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has current primary IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.200841   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Reserving static IP address...
	I0927 01:41:32.201137   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-368295", mac: "52:54:00:a3:b6:7a", ip: "192.168.61.83"} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:32.201151   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Reserved static IP address: 192.168.61.83
	I0927 01:41:32.201164   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | skip adding static IP to network mk-default-k8s-diff-port-368295 - found existing host DHCP lease matching {name: "default-k8s-diff-port-368295", mac: "52:54:00:a3:b6:7a", ip: "192.168.61.83"}
	I0927 01:41:32.201177   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | Getting to WaitForSSH function...
	I0927 01:41:32.201185   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for SSH to be available...
	I0927 01:41:32.203258   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.203542   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:32.203571   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.203674   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | Using SSH client type: external
	I0927 01:41:32.203704   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | Using SSH private key: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/default-k8s-diff-port-368295/id_rsa (-rw-------)
	I0927 01:41:32.203743   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.83 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19711-14935/.minikube/machines/default-k8s-diff-port-368295/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 01:41:32.203763   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | About to run SSH command:
	I0927 01:41:32.203783   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | exit 0
	I0927 01:41:32.327131   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | SSH cmd err, output: <nil>: 
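Here the driver shells out to the system ssh binary with a fixed option set and runs "exit 0" only to learn when sshd inside the guest starts answering. A minimal stand-alone version of that reachability probe (host, user and key path below are placeholders, not the values from this run):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReachable runs `ssh ... user@host exit 0` and reports whether it succeeded.
// The options mirror the spirit of the ones in the log: key auth only, no
// host-key checking.
func sshReachable(host, user, keyPath string) bool {
	args := []string{
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		fmt.Sprintf("%s@%s", user, host),
		"exit", "0",
	}
	return exec.Command("ssh", args...).Run() == nil
}

func main() {
	// Placeholder values; substitute the machine's IP and generated key.
	host, user, key := "192.168.61.83", "docker", "/path/to/id_rsa"
	for i := 0; i < 30; i++ {
		if sshReachable(host, user, key) {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}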
	I0927 01:41:32.327499   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetConfigRaw
	I0927 01:41:32.328140   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetIP
	I0927 01:41:32.330387   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.330769   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:32.330801   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.331054   69534 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/config.json ...
	I0927 01:41:32.331257   69534 machine.go:93] provisionDockerMachine start ...
	I0927 01:41:32.331279   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:41:32.331505   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:41:32.333514   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.333799   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:32.333825   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.333940   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:41:32.334101   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:32.334267   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:32.334359   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:41:32.334509   69534 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:32.334700   69534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.83 22 <nil> <nil>}
	I0927 01:41:32.334709   69534 main.go:141] libmachine: About to run SSH command:
	hostname
	I0927 01:41:32.439884   69534 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0927 01:41:32.439921   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetMachineName
	I0927 01:41:32.440126   69534 buildroot.go:166] provisioning hostname "default-k8s-diff-port-368295"
	I0927 01:41:32.440149   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetMachineName
	I0927 01:41:32.440346   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:41:32.443385   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.443707   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:32.443742   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.443917   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:41:32.444093   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:32.444266   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:32.444427   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:41:32.444606   69534 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:32.444793   69534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.83 22 <nil> <nil>}
	I0927 01:41:32.444809   69534 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-368295 && echo "default-k8s-diff-port-368295" | sudo tee /etc/hostname
	I0927 01:41:32.570447   69534 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-368295
	
	I0927 01:41:32.570479   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:41:32.573194   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.573472   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:32.573512   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.573699   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:41:32.573942   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:32.574097   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:32.574261   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:41:32.574430   69534 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:32.574623   69534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.83 22 <nil> <nil>}
	I0927 01:41:32.574647   69534 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-368295' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-368295/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-368295' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 01:41:32.693082   69534 main.go:141] libmachine: SSH cmd err, output: <nil>: 
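The shell snippet above makes the new hostname resolvable by either rewriting an existing 127.0.1.1 entry in /etc/hosts or appending one. The same idempotent update, sketched in Go against a local file (path and hostname are parameters; the real provisioning step runs the equivalent through sudo over SSH, as shown in the log):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry makes sure hostsPath maps 127.0.1.1 to name:
// an existing 127.0.1.1 line is rewritten, otherwise a new one is appended,
// and nothing is written if the name already resolves.
func ensureHostsEntry(hostsPath, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	replaced := false
	for i, line := range lines {
		fields := strings.Fields(line)
		if len(fields) > 1 && fields[1] == name {
			return nil // already resolvable, nothing to do
		}
		if len(fields) > 0 && fields[0] == "127.0.1.1" {
			lines[i] = "127.0.1.1 " + name
			replaced = true
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+name)
	}
	return os.WriteFile(hostsPath, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	// Demo against a throwaway copy; real provisioning edits /etc/hosts via sudo.
	path := "/tmp/hosts-example"
	_ = os.WriteFile(path, []byte("127.0.0.1 localhost\n"), 0644)
	if err := ensureHostsEntry(path, "default-k8s-diff-port-368295"); err != nil {
		fmt.Println(err)
	}
}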
	I0927 01:41:32.693107   69534 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19711-14935/.minikube CaCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19711-14935/.minikube}
	I0927 01:41:32.693140   69534 buildroot.go:174] setting up certificates
	I0927 01:41:32.693149   69534 provision.go:84] configureAuth start
	I0927 01:41:32.693160   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetMachineName
	I0927 01:41:32.693407   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetIP
	I0927 01:41:32.696156   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.696498   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:32.696522   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.696693   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:41:32.698894   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.699229   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:32.699257   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.699399   69534 provision.go:143] copyHostCerts
	I0927 01:41:32.699451   69534 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem, removing ...
	I0927 01:41:32.699464   69534 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem
	I0927 01:41:32.699530   69534 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem (1078 bytes)
	I0927 01:41:32.699639   69534 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem, removing ...
	I0927 01:41:32.699653   69534 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem
	I0927 01:41:32.699681   69534 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem (1123 bytes)
	I0927 01:41:32.699751   69534 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem, removing ...
	I0927 01:41:32.699761   69534 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem
	I0927 01:41:32.699785   69534 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem (1675 bytes)
	I0927 01:41:32.699848   69534 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-368295 san=[127.0.0.1 192.168.61.83 default-k8s-diff-port-368295 localhost minikube]
	I0927 01:41:32.887727   69534 provision.go:177] copyRemoteCerts
	I0927 01:41:32.887792   69534 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 01:41:32.887825   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:41:32.890435   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.890768   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:32.890797   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.890956   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:41:32.891128   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:32.891252   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:41:32.891373   69534 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/default-k8s-diff-port-368295/id_rsa Username:docker}
	I0927 01:41:32.973705   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0927 01:41:32.998434   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0927 01:41:33.023552   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0927 01:41:33.048884   69534 provision.go:87] duration metric: took 355.724209ms to configureAuth
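configureAuth regenerates a server certificate signed by the profile's CA, with the machine IP and hostnames from the log as subject alternative names. A compact, self-contained illustration of that signing step with crypto/x509; it creates a throwaway CA for the demo, whereas the real code reuses the existing ca.pem/ca-key.pem:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA (a real provisioner would load its existing CA cert and key).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SANs seen in the log above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "default-k8s-diff-port-368295"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"default-k8s-diff-port-368295", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.83")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	pemBytes := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	fmt.Printf("server.pem (%d bytes):\n%s", len(pemBytes), pemBytes)
}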
	I0927 01:41:33.048910   69534 buildroot.go:189] setting minikube options for container-runtime
	I0927 01:41:33.049080   69534 config.go:182] Loaded profile config "default-k8s-diff-port-368295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 01:41:33.049149   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:41:33.051738   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.052080   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:33.052133   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.052364   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:41:33.052578   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:33.052726   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:33.052844   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:41:33.053031   69534 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:33.053265   69534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.83 22 <nil> <nil>}
	I0927 01:41:33.053283   69534 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 01:41:33.292126   69534 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 01:41:33.292148   69534 machine.go:96] duration metric: took 960.878234ms to provisionDockerMachine
	I0927 01:41:33.292159   69534 start.go:293] postStartSetup for "default-k8s-diff-port-368295" (driver="kvm2")
	I0927 01:41:33.292171   69534 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 01:41:33.292188   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:41:33.292511   69534 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 01:41:33.292539   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:41:33.295356   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.295724   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:33.295759   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.295936   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:41:33.296100   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:33.296314   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:41:33.296498   69534 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/default-k8s-diff-port-368295/id_rsa Username:docker}
	I0927 01:41:33.528391   68676 start.go:364] duration metric: took 56.042651871s to acquireMachinesLock for "no-preload-521072"
	I0927 01:41:33.528435   68676 start.go:96] Skipping create...Using existing machine configuration
	I0927 01:41:33.528445   68676 fix.go:54] fixHost starting: 
	I0927 01:41:33.528858   68676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:41:33.528890   68676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:41:33.547391   68676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38947
	I0927 01:41:33.547852   68676 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:41:33.548343   68676 main.go:141] libmachine: Using API Version  1
	I0927 01:41:33.548371   68676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:41:33.548713   68676 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:41:33.548907   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	I0927 01:41:33.549064   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetState
	I0927 01:41:33.550898   68676 fix.go:112] recreateIfNeeded on no-preload-521072: state=Stopped err=<nil>
	I0927 01:41:33.550923   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	W0927 01:41:33.551084   68676 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 01:41:33.553090   68676 out.go:177] * Restarting existing kvm2 VM for "no-preload-521072" ...
	I0927 01:41:33.554429   68676 main.go:141] libmachine: (no-preload-521072) Calling .Start
	I0927 01:41:33.554613   68676 main.go:141] libmachine: (no-preload-521072) Ensuring networks are active...
	I0927 01:41:33.555401   68676 main.go:141] libmachine: (no-preload-521072) Ensuring network default is active
	I0927 01:41:33.555858   68676 main.go:141] libmachine: (no-preload-521072) Ensuring network mk-no-preload-521072 is active
	I0927 01:41:33.556350   68676 main.go:141] libmachine: (no-preload-521072) Getting domain xml...
	I0927 01:41:33.557057   68676 main.go:141] libmachine: (no-preload-521072) Creating domain...
	I0927 01:41:34.830052   68676 main.go:141] libmachine: (no-preload-521072) Waiting to get IP...
	I0927 01:41:34.830807   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:34.831255   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:34.831340   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:34.831244   70637 retry.go:31] will retry after 267.615794ms: waiting for machine to come up
	I0927 01:41:33.378613   69534 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 01:41:33.383491   69534 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 01:41:33.383517   69534 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/addons for local assets ...
	I0927 01:41:33.383590   69534 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/files for local assets ...
	I0927 01:41:33.383695   69534 filesync.go:149] local asset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> 221382.pem in /etc/ssl/certs
	I0927 01:41:33.383810   69534 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 01:41:33.395134   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /etc/ssl/certs/221382.pem (1708 bytes)
	I0927 01:41:33.420441   69534 start.go:296] duration metric: took 128.270045ms for postStartSetup
	I0927 01:41:33.420481   69534 fix.go:56] duration metric: took 19.711948387s for fixHost
	I0927 01:41:33.420505   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:41:33.422860   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.423170   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:33.423198   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.423333   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:41:33.423517   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:33.423676   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:33.423820   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:41:33.423987   69534 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:33.424139   69534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.83 22 <nil> <nil>}
	I0927 01:41:33.424153   69534 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 01:41:33.528250   69534 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727401293.484458762
	
	I0927 01:41:33.528271   69534 fix.go:216] guest clock: 1727401293.484458762
	I0927 01:41:33.528278   69534 fix.go:229] Guest: 2024-09-27 01:41:33.484458762 +0000 UTC Remote: 2024-09-27 01:41:33.420486926 +0000 UTC m=+225.118319167 (delta=63.971836ms)
	I0927 01:41:33.528297   69534 fix.go:200] guest clock delta is within tolerance: 63.971836ms
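The clock check runs `date +%s.%N` in the guest, compares the result with the host clock at the moment the command returned, and only resyncs if the delta exceeds a tolerance. A small sketch of that comparison; the tolerance constant is an arbitrary example, not minikube's exact value:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseEpoch turns the output of `date +%s.%N` (e.g. "1727401293.484458762")
// into a time.Time.
func parseEpoch(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Right-pad the fractional part to 9 digits before parsing nanoseconds.
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	const tolerance = time.Second // example threshold only

	guest, err := parseEpoch("1727401293.484458762") // would come from the guest via SSH
	if err != nil {
		fmt.Println(err)
		return
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	if delta <= tolerance {
		fmt.Printf("guest clock delta %s is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %s exceeds tolerance; would resync\n", delta)
	}
}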
	I0927 01:41:33.528303   69534 start.go:83] releasing machines lock for "default-k8s-diff-port-368295", held for 19.819799777s
	I0927 01:41:33.528328   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:41:33.528623   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetIP
	I0927 01:41:33.531282   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.531692   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:33.531724   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.531914   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:41:33.532476   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:41:33.532651   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:41:33.532742   69534 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 01:41:33.532784   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:41:33.532868   69534 ssh_runner.go:195] Run: cat /version.json
	I0927 01:41:33.532890   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:41:33.535432   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.535710   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.535820   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:33.535843   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.536030   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:41:33.536128   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:33.536153   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.536195   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:33.536351   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:41:33.536367   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:41:33.536513   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:33.536508   69534 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/default-k8s-diff-port-368295/id_rsa Username:docker}
	I0927 01:41:33.536634   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:41:33.536815   69534 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/default-k8s-diff-port-368295/id_rsa Username:docker}
	I0927 01:41:33.644679   69534 ssh_runner.go:195] Run: systemctl --version
	I0927 01:41:33.652386   69534 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 01:41:33.803821   69534 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 01:41:33.810620   69534 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 01:41:33.810678   69534 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 01:41:33.826938   69534 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0927 01:41:33.826963   69534 start.go:495] detecting cgroup driver to use...
	I0927 01:41:33.827028   69534 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 01:41:33.844572   69534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 01:41:33.859851   69534 docker.go:217] disabling cri-docker service (if available) ...
	I0927 01:41:33.859916   69534 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 01:41:33.874262   69534 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 01:41:33.888460   69534 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 01:41:34.011008   69534 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 01:41:34.161761   69534 docker.go:233] disabling docker service ...
	I0927 01:41:34.161855   69534 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 01:41:34.180621   69534 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 01:41:34.198472   69534 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 01:41:34.340892   69534 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 01:41:34.483708   69534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 01:41:34.498745   69534 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 01:41:34.518957   69534 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0927 01:41:34.519026   69534 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:34.530123   69534 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 01:41:34.530172   69534 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:34.545035   69534 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:34.555944   69534 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:34.566852   69534 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 01:41:34.577676   69534 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:34.589078   69534 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:34.608131   69534 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
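The sed one-liners above rewrite individual keys in /etc/crio/crio.conf.d/02-crio.conf: the pause image, the cgroup manager, conmon_cgroup and the default sysctls. The underlying idea, "replace whichever line currently sets a key, whatever its value", sketched in Go with a regexp; the file path and keys below are demo values mirroring the log:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setConfKey rewrites every line of the form `key = ...` (even if commented or
// indented) to `key = "value"`, the same effect as the sed commands in the log.
func setConfKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf(`%s = %q`, key, value)))
	return os.WriteFile(path, out, 0644)
}

func main() {
	// Demo on a throwaway file; the real target is /etc/crio/crio.conf.d/02-crio.conf.
	path := "/tmp/02-crio.conf"
	_ = os.WriteFile(path, []byte("# pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"), 0644)
	if err := setConfKey(path, "pause_image", "registry.k8s.io/pause:3.10"); err != nil {
		fmt.Println(err)
	}
	if err := setConfKey(path, "cgroup_manager", "cgroupfs"); err != nil {
		fmt.Println(err)
	}
	updated, _ := os.ReadFile(path)
	fmt.Print(string(updated))
}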
	I0927 01:41:34.619482   69534 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 01:41:34.629119   69534 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0927 01:41:34.629180   69534 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0927 01:41:34.643997   69534 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 01:41:34.656396   69534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:41:34.791856   69534 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0927 01:41:34.884774   69534 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 01:41:34.884831   69534 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 01:41:34.889590   69534 start.go:563] Will wait 60s for crictl version
	I0927 01:41:34.889633   69534 ssh_runner.go:195] Run: which crictl
	I0927 01:41:34.893330   69534 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 01:41:34.930031   69534 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0927 01:41:34.930141   69534 ssh_runner.go:195] Run: crio --version
	I0927 01:41:34.960912   69534 ssh_runner.go:195] Run: crio --version
	I0927 01:41:34.996060   69534 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0927 01:41:31.542525   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:33.546389   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:32.459727   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:32.959527   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:33.459351   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:33.959903   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:34.459444   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:34.959423   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:35.459435   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:35.959447   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:36.460148   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:36.959874   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:34.997457   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetIP
	I0927 01:41:35.000691   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:35.001081   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:35.001127   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:35.001322   69534 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0927 01:41:35.006115   69534 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 01:41:35.019817   69534 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-368295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-368295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.83 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 01:41:35.019983   69534 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 01:41:35.020045   69534 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 01:41:35.062533   69534 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0927 01:41:35.062595   69534 ssh_runner.go:195] Run: which lz4
	I0927 01:41:35.066897   69534 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0927 01:41:35.071178   69534 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0927 01:41:35.071216   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0927 01:41:36.563774   69534 crio.go:462] duration metric: took 1.496913722s to copy over tarball
	I0927 01:41:36.563866   69534 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
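Because no preloaded images were found on the node, the preload tarball is copied over and unpacked into /var with lz4-compressed tar, preserving extended attributes. A stand-alone sketch of the check-then-extract step using the tar and lz4 binaries; the paths are placeholders and the real code first streams the tarball over SSH:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// extractPreload unpacks an lz4-compressed tarball into destDir, keeping
// security.capability xattrs, mirroring the command in the log. Requires the
// tar and lz4 binaries to be installed.
func extractPreload(tarball, destDir string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("preload not present: %w", err)
	}
	cmd := exec.Command("tar",
		"--xattrs", "--xattrs-include=security.capability",
		"-I", "lz4", "-C", destDir, "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	start := time.Now()
	// Placeholder paths; on the node these are /preloaded.tar.lz4 and /var.
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("extracted preload in %s\n", time.Since(start))
}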
	I0927 01:41:35.100818   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:35.101327   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:35.101354   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:35.101290   70637 retry.go:31] will retry after 244.193758ms: waiting for machine to come up
	I0927 01:41:35.347021   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:35.347674   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:35.347714   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:35.347650   70637 retry.go:31] will retry after 361.672884ms: waiting for machine to come up
	I0927 01:41:35.711206   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:35.711755   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:35.711788   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:35.711730   70637 retry.go:31] will retry after 406.084841ms: waiting for machine to come up
	I0927 01:41:36.119494   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:36.120026   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:36.120067   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:36.119978   70637 retry.go:31] will retry after 497.966133ms: waiting for machine to come up
	I0927 01:41:36.619859   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:36.620400   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:36.620428   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:36.620362   70637 retry.go:31] will retry after 765.975603ms: waiting for machine to come up
	I0927 01:41:37.387821   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:37.388502   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:37.388537   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:37.388453   70637 retry.go:31] will retry after 828.567445ms: waiting for machine to come up
	I0927 01:41:38.218462   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:38.218940   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:38.218974   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:38.218803   70637 retry.go:31] will retry after 1.269155563s: waiting for machine to come up
	I0927 01:41:39.489076   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:39.489557   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:39.489583   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:39.489514   70637 retry.go:31] will retry after 1.666481574s: waiting for machine to come up
	I0927 01:41:35.554859   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:38.043285   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:40.542499   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:37.459766   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:37.959594   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:38.459971   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:38.960093   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:39.459983   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:39.959812   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:40.460220   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:40.959253   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:41.459829   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:41.959864   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:38.667451   69534 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.10354947s)
	I0927 01:41:38.667477   69534 crio.go:469] duration metric: took 2.103669113s to extract the tarball
	I0927 01:41:38.667487   69534 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0927 01:41:38.704217   69534 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 01:41:38.747162   69534 crio.go:514] all images are preloaded for cri-o runtime.
	I0927 01:41:38.747187   69534 cache_images.go:84] Images are preloaded, skipping loading
	I0927 01:41:38.747197   69534 kubeadm.go:934] updating node { 192.168.61.83 8444 v1.31.1 crio true true} ...
	I0927 01:41:38.747323   69534 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-368295 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.83
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-368295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 01:41:38.747406   69534 ssh_runner.go:195] Run: crio config
	I0927 01:41:38.796481   69534 cni.go:84] Creating CNI manager for ""
	I0927 01:41:38.796510   69534 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:41:38.796522   69534 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 01:41:38.796549   69534 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.83 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-368295 NodeName:default-k8s-diff-port-368295 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.83"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.83 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0927 01:41:38.796726   69534 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.83
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-368295"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.83
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.83"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0927 01:41:38.796806   69534 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 01:41:38.807445   69534 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 01:41:38.807513   69534 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0927 01:41:38.817368   69534 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0927 01:41:38.834181   69534 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 01:41:38.851650   69534 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
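	The multi-document kubeadm config printed above is what gets copied to /var/tmp/minikube/kubeadm.yaml.new here. As a rough stand-alone check (not minikube code; gopkg.in/yaml.v3 is an assumed dependency and the file path is simply the one from the log line above), the KubeletConfiguration document could be decoded to confirm the runtime-related fields the log shows:

// Hypothetical check, not part of minikube: decode the KubeletConfiguration
// document from the generated config and print the runtime-related fields.
package main

import (
	"fmt"
	"log"
	"os"
	"strings"

	"gopkg.in/yaml.v3"
)

func main() {
	raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new") // path seen in the log above
	if err != nil {
		log.Fatal(err)
	}
	for _, doc := range strings.Split(string(raw), "\n---\n") {
		var m map[string]interface{}
		if err := yaml.Unmarshal([]byte(doc), &m); err != nil {
			log.Fatal(err)
		}
		if m["kind"] == "KubeletConfiguration" {
			fmt.Println("cgroupDriver:", m["cgroupDriver"])
			fmt.Println("containerRuntimeEndpoint:", m["containerRuntimeEndpoint"])
		}
	}
}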
	I0927 01:41:38.869822   69534 ssh_runner.go:195] Run: grep 192.168.61.83	control-plane.minikube.internal$ /etc/hosts
	I0927 01:41:38.873868   69534 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.83	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
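	The bash one-liner above makes the control-plane.minikube.internal entry idempotent: any stale line for that host is dropped and the current IP is appended. A stand-alone Go sketch of the same update (the IP, hostname and /etc/hosts path come from the log; the helper itself is hypothetical and needs root to actually write the file):

// Hypothetical idempotent hosts update mirroring the bash one-liner above:
// drop any line ending in "<TAB>host", then append "ip<TAB>host".
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostsEntry(path, ip, host string) error {
	raw, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(raw), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // stale entry, drop it
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.61.83", "control-plane.minikube.internal"); err != nil {
		fmt.Println("error:", err)
	}
}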
	I0927 01:41:38.886422   69534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:41:39.022075   69534 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 01:41:39.038948   69534 certs.go:68] Setting up /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295 for IP: 192.168.61.83
	I0927 01:41:39.038982   69534 certs.go:194] generating shared ca certs ...
	I0927 01:41:39.039004   69534 certs.go:226] acquiring lock for ca certs: {Name:mkdfc5b7e93f77f5ae72cc653545624244421aa5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:41:39.039174   69534 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key
	I0927 01:41:39.039241   69534 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key
	I0927 01:41:39.039253   69534 certs.go:256] generating profile certs ...
	I0927 01:41:39.039402   69534 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/client.key
	I0927 01:41:39.039490   69534 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/apiserver.key.2edc0267
	I0927 01:41:39.039549   69534 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/proxy-client.key
	I0927 01:41:39.039701   69534 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem (1338 bytes)
	W0927 01:41:39.039773   69534 certs.go:480] ignoring /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138_empty.pem, impossibly tiny 0 bytes
	I0927 01:41:39.039789   69534 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem (1671 bytes)
	I0927 01:41:39.039825   69534 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem (1078 bytes)
	I0927 01:41:39.039860   69534 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem (1123 bytes)
	I0927 01:41:39.039889   69534 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem (1675 bytes)
	I0927 01:41:39.039950   69534 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem (1708 bytes)
	I0927 01:41:39.040814   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 01:41:39.080130   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0927 01:41:39.133365   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 01:41:39.169238   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0927 01:41:39.196619   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0927 01:41:39.227667   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0927 01:41:39.255240   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 01:41:39.280602   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0927 01:41:39.305695   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 01:41:39.329559   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem --> /usr/share/ca-certificates/22138.pem (1338 bytes)
	I0927 01:41:39.358555   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /usr/share/ca-certificates/221382.pem (1708 bytes)
	I0927 01:41:39.387030   69534 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 01:41:39.404111   69534 ssh_runner.go:195] Run: openssl version
	I0927 01:41:39.409879   69534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 01:41:39.420542   69534 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:41:39.425094   69534 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:16 /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:41:39.425151   69534 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:41:39.431225   69534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 01:41:39.442237   69534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22138.pem && ln -fs /usr/share/ca-certificates/22138.pem /etc/ssl/certs/22138.pem"
	I0927 01:41:39.453229   69534 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22138.pem
	I0927 01:41:39.458040   69534 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 00:32 /usr/share/ca-certificates/22138.pem
	I0927 01:41:39.458110   69534 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22138.pem
	I0927 01:41:39.464177   69534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22138.pem /etc/ssl/certs/51391683.0"
	I0927 01:41:39.475582   69534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221382.pem && ln -fs /usr/share/ca-certificates/221382.pem /etc/ssl/certs/221382.pem"
	I0927 01:41:39.486911   69534 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221382.pem
	I0927 01:41:39.491843   69534 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 00:32 /usr/share/ca-certificates/221382.pem
	I0927 01:41:39.491898   69534 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221382.pem
	I0927 01:41:39.497653   69534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221382.pem /etc/ssl/certs/3ec20f2e.0"
	I0927 01:41:39.508039   69534 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 01:41:39.512597   69534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0927 01:41:39.518557   69534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0927 01:41:39.524475   69534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0927 01:41:39.530616   69534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0927 01:41:39.536820   69534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0927 01:41:39.543487   69534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
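	Each openssl x509 ... -checkend 86400 call above asks whether the certificate expires within the next 24 hours; a non-zero exit would force certificate regeneration before restart. A minimal Go equivalent of one such check, using only the standard library (the file path and the 24h window are just the values seen in the log):

// Hypothetical equivalent of `openssl x509 -checkend 86400`:
// fail if the certificate expires within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"log"
	"os"
	"time"
)

func main() {
	raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		log.Fatal("no PEM data found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		log.Fatalf("certificate expires within 24h (NotAfter=%s)", cert.NotAfter)
	}
	log.Println("certificate valid for at least another 24h")
}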
	I0927 01:41:39.549791   69534 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-368295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-368295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.83 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 01:41:39.549880   69534 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0927 01:41:39.549945   69534 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 01:41:39.594178   69534 cri.go:89] found id: ""
	I0927 01:41:39.594256   69534 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0927 01:41:39.605173   69534 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0927 01:41:39.605195   69534 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0927 01:41:39.605261   69534 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0927 01:41:39.615543   69534 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0927 01:41:39.616639   69534 kubeconfig.go:125] found "default-k8s-diff-port-368295" server: "https://192.168.61.83:8444"
	I0927 01:41:39.618793   69534 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0927 01:41:39.628422   69534 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.83
	I0927 01:41:39.628454   69534 kubeadm.go:1160] stopping kube-system containers ...
	I0927 01:41:39.628465   69534 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0927 01:41:39.628566   69534 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 01:41:39.673513   69534 cri.go:89] found id: ""
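	Both crictl invocations above list container IDs carrying the kube-system namespace label; the empty "found id" result means there is nothing to stop before the control plane is restarted. A trivial stand-alone wrapper around the same listing via os/exec (the crictl flags are those from the log; the wrapper itself is hypothetical and skips the sudo -s eval indirection):

// Hypothetical wrapper around the crictl call shown in the log: list IDs of
// all containers whose pod namespace label is kube-system.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func kubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // one container ID per line when non-empty
}

func main() {
	ids, err := kubeSystemContainers()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("found %d kube-system containers: %v\n", len(ids), ids)
}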
	I0927 01:41:39.673592   69534 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0927 01:41:39.690296   69534 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 01:41:39.699800   69534 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 01:41:39.699821   69534 kubeadm.go:157] found existing configuration files:
	
	I0927 01:41:39.699876   69534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0927 01:41:39.709235   69534 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 01:41:39.709294   69534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 01:41:39.719012   69534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0927 01:41:39.728197   69534 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 01:41:39.728262   69534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 01:41:39.737520   69534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0927 01:41:39.746592   69534 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 01:41:39.746653   69534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 01:41:39.756251   69534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0927 01:41:39.765026   69534 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 01:41:39.765090   69534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 01:41:39.774937   69534 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 01:41:39.784588   69534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:39.893259   69534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:40.625162   69534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:40.954926   69534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:41.025693   69534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:41.101915   69534 api_server.go:52] waiting for apiserver process to appear ...
	I0927 01:41:41.102006   69534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:41.602856   69534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:42.102942   69534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:42.602371   69534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:42.620056   69534 api_server.go:72] duration metric: took 1.518136259s to wait for apiserver process to appear ...
	I0927 01:41:42.620085   69534 api_server.go:88] waiting for apiserver healthz status ...
	I0927 01:41:42.620107   69534 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8444/healthz ...
	I0927 01:41:41.157254   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:41.157789   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:41.157817   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:41.157738   70637 retry.go:31] will retry after 1.495421187s: waiting for machine to come up
	I0927 01:41:42.655326   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:42.655826   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:42.655853   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:42.655771   70637 retry.go:31] will retry after 2.80191937s: waiting for machine to come up
	I0927 01:41:42.543732   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:45.043009   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:45.040496   69534 api_server.go:279] https://192.168.61.83:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0927 01:41:45.040525   69534 api_server.go:103] status: https://192.168.61.83:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0927 01:41:45.040542   69534 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8444/healthz ...
	I0927 01:41:45.079569   69534 api_server.go:279] https://192.168.61.83:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0927 01:41:45.079602   69534 api_server.go:103] status: https://192.168.61.83:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0927 01:41:45.120702   69534 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8444/healthz ...
	I0927 01:41:45.126461   69534 api_server.go:279] https://192.168.61.83:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0927 01:41:45.126488   69534 api_server.go:103] status: https://192.168.61.83:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0927 01:41:45.621130   69534 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8444/healthz ...
	I0927 01:41:45.629533   69534 api_server.go:279] https://192.168.61.83:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:41:45.629569   69534 api_server.go:103] status: https://192.168.61.83:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:41:46.121189   69534 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8444/healthz ...
	I0927 01:41:46.130806   69534 api_server.go:279] https://192.168.61.83:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:41:46.130842   69534 api_server.go:103] status: https://192.168.61.83:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:41:46.620334   69534 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8444/healthz ...
	I0927 01:41:46.625456   69534 api_server.go:279] https://192.168.61.83:8444/healthz returned 200:
	ok
	I0927 01:41:46.636549   69534 api_server.go:141] control plane version: v1.31.1
	I0927 01:41:46.636581   69534 api_server.go:131] duration metric: took 4.016488114s to wait for apiserver health ...
	I0927 01:41:46.636591   69534 cni.go:84] Creating CNI manager for ""
	I0927 01:41:46.636599   69534 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:41:46.638016   69534 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
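	The 403 and 500 responses above are expected while the restarted apiserver finishes its post-start hooks (rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes are the last to clear); the waiter simply polls /healthz until it returns 200, which here takes about 4 seconds. A rough sketch of such a poll loop, assuming anonymous HTTPS access with skipped TLS verification is acceptable for the probe:

// Hypothetical healthz poll loosely mirroring the log: retry
// https://192.168.61.83:8444/healthz until it returns HTTP 200.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.61.83:8444/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("healthz returned", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver")
}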
	I0927 01:41:42.459806   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:42.960200   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:43.459511   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:43.959467   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:44.459352   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:44.960147   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:45.459637   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:45.959535   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:46.459585   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:46.959579   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:46.639222   69534 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0927 01:41:46.651680   69534 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0927 01:41:46.671366   69534 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 01:41:46.684702   69534 system_pods.go:59] 8 kube-system pods found
	I0927 01:41:46.684740   69534 system_pods.go:61] "coredns-7c65d6cfc9-xtgdx" [6a5f97bd-0fbb-4220-a763-bb8ca6fab439] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0927 01:41:46.684752   69534 system_pods.go:61] "etcd-default-k8s-diff-port-368295" [2dbd4866-89f2-4a0c-ab8a-671ff0237bf3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0927 01:41:46.684761   69534 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-368295" [62865280-e996-45a9-a872-766e09d5b91c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0927 01:41:46.684774   69534 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-368295" [b0d06bec-2f5a-46e4-9d2d-b2ea7cdc7968] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0927 01:41:46.684781   69534 system_pods.go:61] "kube-proxy-xm2p8" [449495d5-a476-4abf-b6be-301b9ead92e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0927 01:41:46.684793   69534 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-368295" [71dadb93-c535-4ce3-8dd7-ffd4496bf0e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0927 01:41:46.684801   69534 system_pods.go:61] "metrics-server-6867b74b74-n9nsg" [fefb6977-44af-41f8-8a82-1dcd76374ac0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 01:41:46.684811   69534 system_pods.go:61] "storage-provisioner" [78bd924c-1d70-4eb6-9e2c-0e21ebc523dc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0927 01:41:46.684818   69534 system_pods.go:74] duration metric: took 13.431978ms to wait for pod list to return data ...
	I0927 01:41:46.684830   69534 node_conditions.go:102] verifying NodePressure condition ...
	I0927 01:41:46.690309   69534 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 01:41:46.690343   69534 node_conditions.go:123] node cpu capacity is 2
	I0927 01:41:46.690358   69534 node_conditions.go:105] duration metric: took 5.522911ms to run NodePressure ...
	I0927 01:41:46.690379   69534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:46.964511   69534 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0927 01:41:46.971731   69534 kubeadm.go:739] kubelet initialised
	I0927 01:41:46.971751   69534 kubeadm.go:740] duration metric: took 7.215476ms waiting for restarted kubelet to initialise ...
	I0927 01:41:46.971760   69534 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:41:46.978192   69534 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-xtgdx" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:45.459706   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:45.460242   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:45.460265   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:45.460161   70637 retry.go:31] will retry after 3.051133432s: waiting for machine to come up
	I0927 01:41:48.512758   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:48.513180   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:48.513208   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:48.513118   70637 retry.go:31] will retry after 3.478053984s: waiting for machine to come up
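	The retry.go lines above show libmachine waiting for the no-preload VM to obtain a DHCP lease, sleeping a little longer between attempts. A generic retry-with-backoff helper in the same spirit (attempt count, delays and growth factor here are illustrative, not minikube's actual values):

// Hypothetical retry helper: call fn until it succeeds or attempts run out,
// growing the wait between attempts, as the retry.go log lines above do.
package main

import (
	"errors"
	"fmt"
	"time"
)

func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		fmt.Printf("attempt %d failed: %v; retrying after %s\n", i+1, err, delay)
		time.Sleep(delay)
		delay += delay / 2 // wait 50% longer each round
	}
	return err
}

func main() {
	err := retryWithBackoff(5, time.Second, func() error {
		return errors.New("waiting for machine to come up") // stand-in for the real IP lookup
	})
	fmt.Println("result:", err)
}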
	I0927 01:41:47.544064   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:50.042360   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:47.459645   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:47.959756   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:48.460088   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:48.959526   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:49.459321   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:49.960102   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:50.460203   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:50.960225   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:51.460182   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:51.959343   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:48.985840   69534 pod_ready.go:103] pod "coredns-7c65d6cfc9-xtgdx" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:51.506449   69534 pod_ready.go:103] pod "coredns-7c65d6cfc9-xtgdx" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:52.484646   69534 pod_ready.go:93] pod "coredns-7c65d6cfc9-xtgdx" in "kube-system" namespace has status "Ready":"True"
	I0927 01:41:52.484672   69534 pod_ready.go:82] duration metric: took 5.506454681s for pod "coredns-7c65d6cfc9-xtgdx" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:52.484685   69534 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:51.994746   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:51.995201   68676 main.go:141] libmachine: (no-preload-521072) Found IP for machine: 192.168.50.246
	I0927 01:41:51.995219   68676 main.go:141] libmachine: (no-preload-521072) Reserving static IP address...
	I0927 01:41:51.995230   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has current primary IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:51.995651   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "no-preload-521072", mac: "52:54:00:85:27:74", ip: "192.168.50.246"} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:51.995677   68676 main.go:141] libmachine: (no-preload-521072) Reserved static IP address: 192.168.50.246
	I0927 01:41:51.995695   68676 main.go:141] libmachine: (no-preload-521072) DBG | skip adding static IP to network mk-no-preload-521072 - found existing host DHCP lease matching {name: "no-preload-521072", mac: "52:54:00:85:27:74", ip: "192.168.50.246"}
	I0927 01:41:51.995713   68676 main.go:141] libmachine: (no-preload-521072) DBG | Getting to WaitForSSH function...
	I0927 01:41:51.995727   68676 main.go:141] libmachine: (no-preload-521072) Waiting for SSH to be available...
	I0927 01:41:51.998245   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:51.998590   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:51.998616   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:51.998748   68676 main.go:141] libmachine: (no-preload-521072) DBG | Using SSH client type: external
	I0927 01:41:51.998810   68676 main.go:141] libmachine: (no-preload-521072) DBG | Using SSH private key: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/no-preload-521072/id_rsa (-rw-------)
	I0927 01:41:51.998850   68676 main.go:141] libmachine: (no-preload-521072) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.246 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19711-14935/.minikube/machines/no-preload-521072/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 01:41:51.998866   68676 main.go:141] libmachine: (no-preload-521072) DBG | About to run SSH command:
	I0927 01:41:51.998877   68676 main.go:141] libmachine: (no-preload-521072) DBG | exit 0
	I0927 01:41:52.131754   68676 main.go:141] libmachine: (no-preload-521072) DBG | SSH cmd err, output: <nil>: 
	I0927 01:41:52.132117   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetConfigRaw
	I0927 01:41:52.132724   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetIP
	I0927 01:41:52.135236   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.135588   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:52.135615   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.135866   68676 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/no-preload-521072/config.json ...
	I0927 01:41:52.136059   68676 machine.go:93] provisionDockerMachine start ...
	I0927 01:41:52.136078   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	I0927 01:41:52.136300   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:41:52.138644   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.139009   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:52.139035   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.139215   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:41:52.139406   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:52.139602   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:52.139760   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:41:52.139931   68676 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:52.140139   68676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.246 22 <nil> <nil>}
	I0927 01:41:52.140151   68676 main.go:141] libmachine: About to run SSH command:
	hostname
	I0927 01:41:52.255655   68676 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0927 01:41:52.255690   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetMachineName
	I0927 01:41:52.255952   68676 buildroot.go:166] provisioning hostname "no-preload-521072"
	I0927 01:41:52.255968   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetMachineName
	I0927 01:41:52.256122   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:41:52.258599   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.258963   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:52.258994   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.259108   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:41:52.259322   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:52.259494   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:52.259676   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:41:52.259835   68676 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:52.260008   68676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.246 22 <nil> <nil>}
	I0927 01:41:52.260023   68676 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-521072 && echo "no-preload-521072" | sudo tee /etc/hostname
	I0927 01:41:52.405255   68676 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-521072
	
	I0927 01:41:52.405314   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:41:52.408593   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.408927   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:52.408973   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.409346   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:41:52.409591   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:52.409786   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:52.409940   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:41:52.410094   68676 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:52.410331   68676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.246 22 <nil> <nil>}
	I0927 01:41:52.410356   68676 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-521072' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-521072/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-521072' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 01:41:52.538244   68676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 01:41:52.538276   68676 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19711-14935/.minikube CaCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19711-14935/.minikube}
	I0927 01:41:52.538321   68676 buildroot.go:174] setting up certificates
	I0927 01:41:52.538335   68676 provision.go:84] configureAuth start
	I0927 01:41:52.538350   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetMachineName
	I0927 01:41:52.538644   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetIP
	I0927 01:41:52.541913   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.542334   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:52.542372   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.542540   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:41:52.544773   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.545127   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:52.545163   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.545357   68676 provision.go:143] copyHostCerts
	I0927 01:41:52.545415   68676 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem, removing ...
	I0927 01:41:52.545427   68676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem
	I0927 01:41:52.545496   68676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem (1078 bytes)
	I0927 01:41:52.545614   68676 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem, removing ...
	I0927 01:41:52.545624   68676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem
	I0927 01:41:52.545655   68676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem (1123 bytes)
	I0927 01:41:52.545732   68676 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem, removing ...
	I0927 01:41:52.545742   68676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem
	I0927 01:41:52.545768   68676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem (1675 bytes)
	I0927 01:41:52.545834   68676 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem org=jenkins.no-preload-521072 san=[127.0.0.1 192.168.50.246 localhost minikube no-preload-521072]
	I0927 01:41:52.738375   68676 provision.go:177] copyRemoteCerts
	I0927 01:41:52.738434   68676 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 01:41:52.738459   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:41:52.741146   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.741439   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:52.741456   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.741630   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:41:52.741828   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:52.741961   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:41:52.742086   68676 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/no-preload-521072/id_rsa Username:docker}
	I0927 01:41:52.830330   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0927 01:41:52.854664   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0927 01:41:52.879246   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0927 01:41:52.902734   68676 provision.go:87] duration metric: took 364.385528ms to configureAuth
	I0927 01:41:52.902782   68676 buildroot.go:189] setting minikube options for container-runtime
	I0927 01:41:52.903017   68676 config.go:182] Loaded profile config "no-preload-521072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 01:41:52.903109   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:41:52.906143   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.906495   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:52.906526   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.906699   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:41:52.906917   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:52.907086   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:52.907211   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:41:52.907426   68676 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:52.907625   68676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.246 22 <nil> <nil>}
	I0927 01:41:52.907640   68676 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 01:41:53.162936   68676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 01:41:53.162960   68676 machine.go:96] duration metric: took 1.026891152s to provisionDockerMachine
	I0927 01:41:53.162971   68676 start.go:293] postStartSetup for "no-preload-521072" (driver="kvm2")
	I0927 01:41:53.162980   68676 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 01:41:53.162994   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	I0927 01:41:53.163325   68676 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 01:41:53.163360   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:41:53.166007   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:53.166478   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:53.166516   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:53.166726   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:41:53.166919   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:53.167103   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:41:53.167253   68676 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/no-preload-521072/id_rsa Username:docker}
	I0927 01:41:53.254620   68676 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 01:41:53.259139   68676 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 01:41:53.259160   68676 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/addons for local assets ...
	I0927 01:41:53.259236   68676 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/files for local assets ...
	I0927 01:41:53.259341   68676 filesync.go:149] local asset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> 221382.pem in /etc/ssl/certs
	I0927 01:41:53.259465   68676 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 01:41:53.269711   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /etc/ssl/certs/221382.pem (1708 bytes)
	I0927 01:41:53.294563   68676 start.go:296] duration metric: took 131.58032ms for postStartSetup
	I0927 01:41:53.294602   68676 fix.go:56] duration metric: took 19.766156729s for fixHost
	I0927 01:41:53.294626   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:41:53.297597   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:53.297897   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:53.297928   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:53.298092   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:41:53.298275   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:53.298460   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:53.298632   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:41:53.298821   68676 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:53.298997   68676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.246 22 <nil> <nil>}
	I0927 01:41:53.299010   68676 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 01:41:53.416459   68676 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727401313.370238189
	
	I0927 01:41:53.416488   68676 fix.go:216] guest clock: 1727401313.370238189
	I0927 01:41:53.416497   68676 fix.go:229] Guest: 2024-09-27 01:41:53.370238189 +0000 UTC Remote: 2024-09-27 01:41:53.294607439 +0000 UTC m=+358.400757430 (delta=75.63075ms)
	I0927 01:41:53.416521   68676 fix.go:200] guest clock delta is within tolerance: 75.63075ms
	I0927 01:41:53.416542   68676 start.go:83] releasing machines lock for "no-preload-521072", held for 19.888127741s
	I0927 01:41:53.416581   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	I0927 01:41:53.416835   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetIP
	I0927 01:41:53.419800   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:53.420124   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:53.420153   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:53.420309   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	I0927 01:41:53.420730   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	I0927 01:41:53.420905   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	I0927 01:41:53.420988   68676 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 01:41:53.421036   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:41:53.421126   68676 ssh_runner.go:195] Run: cat /version.json
	I0927 01:41:53.421148   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:41:53.423529   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:53.423882   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:53.423916   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:53.423937   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:53.424023   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:41:53.424180   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:53.424308   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:41:53.424365   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:53.424412   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:53.424464   68676 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/no-preload-521072/id_rsa Username:docker}
	I0927 01:41:53.424567   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:41:53.424701   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:53.424838   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:41:53.424990   68676 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/no-preload-521072/id_rsa Username:docker}
	I0927 01:41:53.527586   68676 ssh_runner.go:195] Run: systemctl --version
	I0927 01:41:53.533685   68676 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 01:41:53.680850   68676 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 01:41:53.686769   68676 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 01:41:53.686831   68676 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 01:41:53.702686   68676 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0927 01:41:53.702709   68676 start.go:495] detecting cgroup driver to use...
	I0927 01:41:53.702787   68676 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 01:41:53.720756   68676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 01:41:53.736843   68676 docker.go:217] disabling cri-docker service (if available) ...
	I0927 01:41:53.736920   68676 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 01:41:53.752063   68676 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 01:41:53.768140   68676 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 01:41:53.890040   68676 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 01:41:54.044033   68676 docker.go:233] disabling docker service ...
	I0927 01:41:54.044100   68676 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 01:41:54.060061   68676 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 01:41:54.073201   68676 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 01:41:54.225559   68676 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 01:41:54.367269   68676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 01:41:54.381517   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 01:41:54.401099   68676 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0927 01:41:54.401164   68676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:54.412620   68676 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 01:41:54.412687   68676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:54.425942   68676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:54.437451   68676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:54.449115   68676 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 01:41:54.460383   68676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:54.471393   68676 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:54.489649   68676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:54.500699   68676 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 01:41:54.511012   68676 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0927 01:41:54.511061   68676 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0927 01:41:54.524738   68676 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 01:41:54.535353   68676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:41:54.672416   68676 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0927 01:41:54.763423   68676 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 01:41:54.763506   68676 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 01:41:54.768758   68676 start.go:563] Will wait 60s for crictl version
	I0927 01:41:54.768823   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:41:54.772980   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 01:41:54.814375   68676 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0927 01:41:54.814460   68676 ssh_runner.go:195] Run: crio --version
	I0927 01:41:54.844002   68676 ssh_runner.go:195] Run: crio --version
	I0927 01:41:54.876692   68676 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0927 01:41:54.877765   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetIP
	I0927 01:41:54.880320   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:54.880817   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:54.880852   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:54.881008   68676 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0927 01:41:54.885225   68676 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 01:41:54.897661   68676 kubeadm.go:883] updating cluster {Name:no-preload-521072 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-521072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 01:41:54.897768   68676 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 01:41:54.897810   68676 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 01:41:52.542326   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:54.543472   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:52.459589   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:52.960231   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:53.459448   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:53.960120   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:54.460016   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:54.959681   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:55.459321   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:55.959819   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:56.459221   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:56.959296   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:54.491390   69534 pod_ready.go:103] pod "etcd-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:56.997932   69534 pod_ready.go:103] pod "etcd-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:54.937979   68676 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0927 01:41:54.938000   68676 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0927 01:41:54.938055   68676 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0927 01:41:54.938088   68676 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:41:54.938103   68676 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0927 01:41:54.938124   68676 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0927 01:41:54.938101   68676 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0927 01:41:54.938180   68676 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0927 01:41:54.938069   68676 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0927 01:41:54.938088   68676 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0927 01:41:54.939611   68676 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0927 01:41:54.939853   68676 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:41:54.939867   68676 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0927 01:41:54.939872   68676 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0927 01:41:54.939875   68676 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0927 01:41:54.939868   68676 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0927 01:41:54.939932   68676 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0927 01:41:54.939954   68676 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0927 01:41:55.100149   68676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0927 01:41:55.104432   68676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0927 01:41:55.122220   68676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0927 01:41:55.146745   68676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0927 01:41:55.148808   68676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0927 01:41:55.159749   68676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0927 01:41:55.194662   68676 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0927 01:41:55.194710   68676 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0927 01:41:55.194764   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:41:55.218262   68676 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0927 01:41:55.218302   68676 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0927 01:41:55.218348   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:41:55.275530   68676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0927 01:41:55.339428   68676 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0927 01:41:55.339476   68676 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0927 01:41:55.339488   68676 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0927 01:41:55.339526   68676 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0927 01:41:55.339554   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:41:55.339558   68676 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0927 01:41:55.339569   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:41:55.339573   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0927 01:41:55.339584   68676 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0927 01:41:55.339619   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:41:55.339625   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0927 01:41:55.339689   68676 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0927 01:41:55.339733   68676 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0927 01:41:55.339772   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:41:55.392986   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0927 01:41:55.393033   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0927 01:41:55.403596   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0927 01:41:55.403658   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0927 01:41:55.403601   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0927 01:41:55.404180   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0927 01:41:55.528983   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0927 01:41:55.529008   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0927 01:41:55.529013   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0927 01:41:55.556122   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0927 01:41:55.556146   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0927 01:41:55.559222   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0927 01:41:55.668914   68676 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0927 01:41:55.669041   68676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0927 01:41:55.671951   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0927 01:41:55.672026   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0927 01:41:55.675810   68676 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0927 01:41:55.675854   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0927 01:41:55.675883   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0927 01:41:55.675910   68676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0927 01:41:55.687199   68676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0927 01:41:55.687234   68676 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0927 01:41:55.687294   68676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0927 01:41:55.766777   68676 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0927 01:41:55.766775   68676 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0927 01:41:55.766894   68676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0927 01:41:55.766901   68676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0927 01:41:55.776811   68676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0927 01:41:55.776824   68676 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0927 01:41:55.776933   68676 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0927 01:41:55.777033   68676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0927 01:41:55.776938   68676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0927 01:41:56.125882   68676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:41:57.825382   68676 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1: (2.048325373s)
	I0927 01:41:57.825460   68676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0927 01:41:57.825396   68676 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: (2.048309349s)
	I0927 01:41:57.825483   68676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0927 01:41:57.825401   68676 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.699485021s)
	I0927 01:41:57.825517   68676 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0927 01:41:57.825520   68676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.138185505s)
	I0927 01:41:57.825540   68676 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0927 01:41:57.825548   68676 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:41:57.825411   68676 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.058505151s)
	I0927 01:41:57.825566   68676 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0927 01:41:57.825573   68676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0927 01:41:57.825414   68676 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.058497946s)
	I0927 01:41:57.825584   68676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0927 01:41:57.825596   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:41:57.825613   68676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0927 01:41:59.788391   68676 ssh_runner.go:235] Completed: which crictl: (1.962775321s)
	I0927 01:41:59.788412   68676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.962779963s)
	I0927 01:41:59.788429   68676 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0927 01:41:59.788457   68676 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0927 01:41:59.788462   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:41:59.788499   68676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0927 01:41:57.043267   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:59.542589   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:57.459172   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:57.960231   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:58.459323   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:58.960219   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:59.459916   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:59.959858   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:00.460249   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:00.959246   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:01.459839   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:01.959224   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:59.490443   69534 pod_ready.go:103] pod "etcd-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:59.992727   69534 pod_ready.go:93] pod "etcd-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"True"
	I0927 01:41:59.992753   69534 pod_ready.go:82] duration metric: took 7.508057707s for pod "etcd-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:59.992766   69534 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:59.998326   69534 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"True"
	I0927 01:41:59.998357   69534 pod_ready.go:82] duration metric: took 5.584215ms for pod "kube-apiserver-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:59.998372   69534 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:00.003176   69534 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"True"
	I0927 01:42:00.003197   69534 pod_ready.go:82] duration metric: took 4.816939ms for pod "kube-controller-manager-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:00.003209   69534 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-xm2p8" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:00.009089   69534 pod_ready.go:93] pod "kube-proxy-xm2p8" in "kube-system" namespace has status "Ready":"True"
	I0927 01:42:00.009110   69534 pod_ready.go:82] duration metric: took 5.893939ms for pod "kube-proxy-xm2p8" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:00.009119   69534 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:00.014172   69534 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"True"
	I0927 01:42:00.014197   69534 pod_ready.go:82] duration metric: took 5.072107ms for pod "kube-scheduler-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:00.014209   69534 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:02.021372   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:01.758278   68676 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.969794291s)
	I0927 01:42:01.758369   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:42:01.758392   68676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.969869427s)
	I0927 01:42:01.758415   68676 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0927 01:42:01.758445   68676 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0927 01:42:01.758494   68676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0927 01:42:01.796910   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:42:03.934871   68676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (2.176354046s)
	I0927 01:42:03.934903   68676 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0927 01:42:03.934921   68676 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0927 01:42:03.934927   68676 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.137986898s)
	I0927 01:42:03.934972   68676 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0927 01:42:03.934994   68676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0927 01:42:03.935050   68676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0927 01:42:03.939942   68676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0927 01:42:02.042617   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:04.042848   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:02.460232   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:02.959635   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:03.459610   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:03.959412   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:04.459857   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:04.959495   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:05.459972   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:05.959931   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:06.459460   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:06.959627   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:04.021759   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:06.521921   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:07.308972   68676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.373952677s)
	I0927 01:42:07.308999   68676 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0927 01:42:07.309024   68676 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0927 01:42:07.309070   68676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0927 01:42:09.378517   68676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.06942074s)
	I0927 01:42:09.378550   68676 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0927 01:42:09.378579   68676 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0927 01:42:09.378629   68676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0927 01:42:06.546731   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:09.044481   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:07.459395   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:07.959574   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:08.460234   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:08.959281   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:09.459240   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:09.959429   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:10.459865   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:10.959431   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:11.459459   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:11.959447   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:09.020456   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:11.021689   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:10.030049   68676 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0927 01:42:10.030100   68676 cache_images.go:123] Successfully loaded all cached images
	I0927 01:42:10.030106   68676 cache_images.go:92] duration metric: took 15.09209404s to LoadCachedImages
	I0927 01:42:10.030118   68676 kubeadm.go:934] updating node { 192.168.50.246 8443 v1.31.1 crio true true} ...
	I0927 01:42:10.030211   68676 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-521072 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.246
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-521072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 01:42:10.030273   68676 ssh_runner.go:195] Run: crio config
	I0927 01:42:10.078318   68676 cni.go:84] Creating CNI manager for ""
	I0927 01:42:10.078342   68676 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:42:10.078351   68676 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 01:42:10.078370   68676 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.246 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-521072 NodeName:no-preload-521072 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.246"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.246 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0927 01:42:10.078506   68676 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.246
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-521072"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.246
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.246"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0927 01:42:10.078580   68676 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 01:42:10.089137   68676 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 01:42:10.089212   68676 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0927 01:42:10.098310   68676 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0927 01:42:10.116172   68676 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 01:42:10.134642   68676 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0927 01:42:10.152442   68676 ssh_runner.go:195] Run: grep 192.168.50.246	control-plane.minikube.internal$ /etc/hosts
	I0927 01:42:10.156477   68676 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.246	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 01:42:10.169007   68676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:42:10.288382   68676 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 01:42:10.306047   68676 certs.go:68] Setting up /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/no-preload-521072 for IP: 192.168.50.246
	I0927 01:42:10.306077   68676 certs.go:194] generating shared ca certs ...
	I0927 01:42:10.306096   68676 certs.go:226] acquiring lock for ca certs: {Name:mkdfc5b7e93f77f5ae72cc653545624244421aa5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:42:10.306276   68676 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key
	I0927 01:42:10.306331   68676 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key
	I0927 01:42:10.306350   68676 certs.go:256] generating profile certs ...
	I0927 01:42:10.306453   68676 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/no-preload-521072/client.key
	I0927 01:42:10.306553   68676 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/no-preload-521072/apiserver.key.735097eb
	I0927 01:42:10.306613   68676 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/no-preload-521072/proxy-client.key
	I0927 01:42:10.306761   68676 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem (1338 bytes)
	W0927 01:42:10.306797   68676 certs.go:480] ignoring /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138_empty.pem, impossibly tiny 0 bytes
	I0927 01:42:10.306808   68676 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem (1671 bytes)
	I0927 01:42:10.306833   68676 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem (1078 bytes)
	I0927 01:42:10.306854   68676 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem (1123 bytes)
	I0927 01:42:10.306878   68676 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem (1675 bytes)
	I0927 01:42:10.306916   68676 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem (1708 bytes)
	I0927 01:42:10.307598   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 01:42:10.344570   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0927 01:42:10.386834   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 01:42:10.432022   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0927 01:42:10.462348   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/no-preload-521072/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0927 01:42:10.490015   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/no-preload-521072/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0927 01:42:10.518144   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/no-preload-521072/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 01:42:10.545290   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/no-preload-521072/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0927 01:42:10.572460   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 01:42:10.597526   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem --> /usr/share/ca-certificates/22138.pem (1338 bytes)
	I0927 01:42:10.622287   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /usr/share/ca-certificates/221382.pem (1708 bytes)
	I0927 01:42:10.646020   68676 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 01:42:10.662972   68676 ssh_runner.go:195] Run: openssl version
	I0927 01:42:10.668844   68676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22138.pem && ln -fs /usr/share/ca-certificates/22138.pem /etc/ssl/certs/22138.pem"
	I0927 01:42:10.680020   68676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22138.pem
	I0927 01:42:10.684620   68676 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 00:32 /usr/share/ca-certificates/22138.pem
	I0927 01:42:10.684678   68676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22138.pem
	I0927 01:42:10.690694   68676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22138.pem /etc/ssl/certs/51391683.0"
	I0927 01:42:10.702115   68676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221382.pem && ln -fs /usr/share/ca-certificates/221382.pem /etc/ssl/certs/221382.pem"
	I0927 01:42:10.713424   68676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221382.pem
	I0927 01:42:10.717918   68676 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 00:32 /usr/share/ca-certificates/221382.pem
	I0927 01:42:10.717971   68676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221382.pem
	I0927 01:42:10.723601   68676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221382.pem /etc/ssl/certs/3ec20f2e.0"
	I0927 01:42:10.734870   68676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 01:42:10.747370   68676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:42:10.752016   68676 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:16 /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:42:10.752072   68676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:42:10.757964   68676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 01:42:10.769560   68676 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 01:42:10.774457   68676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0927 01:42:10.780719   68676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0927 01:42:10.786653   68676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0927 01:42:10.792671   68676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0927 01:42:10.798674   68676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0927 01:42:10.804910   68676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
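
For reference, the openssl x509 -noout -checkend 86400 runs above ask whether each control-plane certificate will still be valid 24 hours from now. A minimal Go sketch of the same check follows; the certificate path is taken from the log, while the helper name checkNotExpiringWithin and all error handling are illustrative rather than minikube's actual code.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkNotExpiringWithin mimics `openssl x509 -noout -checkend <seconds>`:
// it returns an error if the certificate expires within the given window.
func checkNotExpiringWithin(path string, window time.Duration) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return err
	}
	if time.Now().Add(window).After(cert.NotAfter) {
		return fmt.Errorf("%s expires within %s (NotAfter=%s)", path, window, cert.NotAfter)
	}
	return nil
}

func main() {
	if err := checkNotExpiringWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("certificate valid for at least another 24h")
}
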
	I0927 01:42:10.811007   68676 kubeadm.go:392] StartCluster: {Name:no-preload-521072 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-521072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 01:42:10.811114   68676 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0927 01:42:10.811178   68676 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 01:42:10.851017   68676 cri.go:89] found id: ""
	I0927 01:42:10.851084   68676 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0927 01:42:10.864997   68676 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0927 01:42:10.865016   68676 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0927 01:42:10.865062   68676 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0927 01:42:10.877088   68676 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0927 01:42:10.878133   68676 kubeconfig.go:125] found "no-preload-521072" server: "https://192.168.50.246:8443"
	I0927 01:42:10.880637   68676 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0927 01:42:10.893554   68676 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.246
	I0927 01:42:10.893578   68676 kubeadm.go:1160] stopping kube-system containers ...
	I0927 01:42:10.893592   68676 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0927 01:42:10.893629   68676 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 01:42:10.935734   68676 cri.go:89] found id: ""
	I0927 01:42:10.935794   68676 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0927 01:42:10.954141   68676 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 01:42:10.965345   68676 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 01:42:10.965363   68676 kubeadm.go:157] found existing configuration files:
	
	I0927 01:42:10.965413   68676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 01:42:10.975561   68676 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 01:42:10.975628   68676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 01:42:10.985747   68676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 01:42:10.995026   68676 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 01:42:10.995089   68676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 01:42:11.006650   68676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 01:42:11.016964   68676 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 01:42:11.017034   68676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 01:42:11.028756   68676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 01:42:11.039002   68676 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 01:42:11.039072   68676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 01:42:11.050382   68676 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 01:42:11.060839   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:42:11.177447   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:42:12.481118   68676 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.303633907s)
	I0927 01:42:12.481149   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:42:12.706344   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:42:12.774938   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
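
The five kubeadm invocations above regenerate the certificates, kubeconfigs, kubelet bootstrap, control-plane static pod manifests, and the local etcd manifest from the same /var/tmp/minikube/kubeadm.yaml. A rough local sketch of that phase sequence is below; the phase order and config path come from the log, but running kubeadm directly (rather than over SSH inside the guest, as minikube does) and the surrounding error handling are illustrative assumptions.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Phase order and config path taken from the log above; in minikube these
	// commands run inside the VM via ssh_runner with the bundled kubeadm binary.
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, phase := range phases {
		args := append([]string{"init", "phase"}, strings.Fields(phase)...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		out, err := exec.Command("kubeadm", args...).CombinedOutput()
		if err != nil {
			fmt.Printf("kubeadm init phase %s failed: %v\n%s\n", phase, err, out)
			return
		}
	}
	fmt.Println("control-plane manifests regenerated")
}
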
	I0927 01:42:12.866467   68676 api_server.go:52] waiting for apiserver process to appear ...
	I0927 01:42:12.866552   68676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:13.366860   68676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:13.866951   68676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:13.882411   68676 api_server.go:72] duration metric: took 1.015943274s to wait for apiserver process to appear ...
	I0927 01:42:13.882435   68676 api_server.go:88] waiting for apiserver healthz status ...
	I0927 01:42:13.882457   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:42:13.882963   68676 api_server.go:269] stopped: https://192.168.50.246:8443/healthz: Get "https://192.168.50.246:8443/healthz": dial tcp 192.168.50.246:8443: connect: connection refused
	I0927 01:42:14.382489   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:42:11.543818   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:14.042536   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:12.459771   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:12.959727   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:13.459428   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:13.959255   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:14.460003   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:14.959853   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:15.460237   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:15.959974   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:16.459420   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:16.959321   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:13.527793   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:16.023080   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:17.124839   68676 api_server.go:279] https://192.168.50.246:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0927 01:42:17.124867   68676 api_server.go:103] status: https://192.168.50.246:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0927 01:42:17.124885   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:42:17.174869   68676 api_server.go:279] https://192.168.50.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:42:17.174905   68676 api_server.go:103] status: https://192.168.50.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:42:17.383128   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:42:17.389594   68676 api_server.go:279] https://192.168.50.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:42:17.389629   68676 api_server.go:103] status: https://192.168.50.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:42:17.883197   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:42:17.888706   68676 api_server.go:279] https://192.168.50.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:42:17.888734   68676 api_server.go:103] status: https://192.168.50.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:42:18.382982   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:42:18.387847   68676 api_server.go:279] https://192.168.50.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:42:18.387877   68676 api_server.go:103] status: https://192.168.50.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:42:18.882844   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:42:18.887144   68676 api_server.go:279] https://192.168.50.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:42:18.887178   68676 api_server.go:103] status: https://192.168.50.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:42:19.382711   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:42:19.388007   68676 api_server.go:279] https://192.168.50.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:42:19.388037   68676 api_server.go:103] status: https://192.168.50.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:42:19.882613   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:42:19.886781   68676 api_server.go:279] https://192.168.50.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:42:19.886801   68676 api_server.go:103] status: https://192.168.50.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:42:20.382907   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:42:20.387083   68676 api_server.go:279] https://192.168.50.246:8443/healthz returned 200:
	ok
	I0927 01:42:20.393697   68676 api_server.go:141] control plane version: v1.31.1
	I0927 01:42:20.393725   68676 api_server.go:131] duration metric: took 6.511280572s to wait for apiserver health ...
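
The preceding polling hits /healthz until the apiserver stops answering 403/500 and finally returns 200 "ok". A minimal sketch of such a wait loop is below; the URL comes from the log, while the timeout, poll interval, and the decision to skip TLS verification are assumptions made only for illustration (minikube's real client configuration differs).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the timeout elapses, printing the failing check list on non-200 answers
// much like the log output above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.50.246:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
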
	I0927 01:42:20.393735   68676 cni.go:84] Creating CNI manager for ""
	I0927 01:42:20.393743   68676 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:42:20.395270   68676 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0927 01:42:16.543525   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:19.041726   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:20.396770   68676 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0927 01:42:20.407891   68676 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0927 01:42:20.427815   68676 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 01:42:20.436940   68676 system_pods.go:59] 8 kube-system pods found
	I0927 01:42:20.436980   68676 system_pods.go:61] "coredns-7c65d6cfc9-7q54t" [f320e945-a1d6-4109-a0cc-5bd4e3c1bfba] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0927 01:42:20.436989   68676 system_pods.go:61] "etcd-no-preload-521072" [6c63ce89-47bf-4d67-b5db-273a046c4b51] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0927 01:42:20.436997   68676 system_pods.go:61] "kube-apiserver-no-preload-521072" [e4804d4b-0532-46c7-8579-a829a6c5254c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0927 01:42:20.437005   68676 system_pods.go:61] "kube-controller-manager-no-preload-521072" [5029e53b-ae24-41fb-aa58-14faf0440adb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0927 01:42:20.437012   68676 system_pods.go:61] "kube-proxy-wkcb8" [ea79339c-b2f0-4cb8-ab57-4f13f689f504] Running
	I0927 01:42:20.437020   68676 system_pods.go:61] "kube-scheduler-no-preload-521072" [b70fd9f0-c131-4c13-b53f-46c650a5dcf8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0927 01:42:20.437032   68676 system_pods.go:61] "metrics-server-6867b74b74-cc9pp" [a840ca52-d2b8-47a5-b379-30504658e0d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 01:42:20.437038   68676 system_pods.go:61] "storage-provisioner" [b4595dc3-c439-4615-95b7-2009476c779c] Running
	I0927 01:42:20.437049   68676 system_pods.go:74] duration metric: took 9.213874ms to wait for pod list to return data ...
	I0927 01:42:20.437057   68676 node_conditions.go:102] verifying NodePressure condition ...
	I0927 01:42:20.440323   68676 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 01:42:20.440345   68676 node_conditions.go:123] node cpu capacity is 2
	I0927 01:42:20.440356   68676 node_conditions.go:105] duration metric: took 3.294768ms to run NodePressure ...
	I0927 01:42:20.440372   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:42:20.710186   68676 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0927 01:42:20.713940   68676 kubeadm.go:739] kubelet initialised
	I0927 01:42:20.713958   68676 kubeadm.go:740] duration metric: took 3.749241ms waiting for restarted kubelet to initialise ...
	I0927 01:42:20.713965   68676 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:42:20.718807   68676 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-7q54t" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:20.722955   68676 pod_ready.go:98] node "no-preload-521072" hosting pod "coredns-7c65d6cfc9-7q54t" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:20.722976   68676 pod_ready.go:82] duration metric: took 4.147896ms for pod "coredns-7c65d6cfc9-7q54t" in "kube-system" namespace to be "Ready" ...
	E0927 01:42:20.722984   68676 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-521072" hosting pod "coredns-7c65d6cfc9-7q54t" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:20.722991   68676 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:20.727569   68676 pod_ready.go:98] node "no-preload-521072" hosting pod "etcd-no-preload-521072" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:20.727596   68676 pod_ready.go:82] duration metric: took 4.598426ms for pod "etcd-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	E0927 01:42:20.727604   68676 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-521072" hosting pod "etcd-no-preload-521072" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:20.727611   68676 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:20.731845   68676 pod_ready.go:98] node "no-preload-521072" hosting pod "kube-apiserver-no-preload-521072" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:20.731871   68676 pod_ready.go:82] duration metric: took 4.25326ms for pod "kube-apiserver-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	E0927 01:42:20.731881   68676 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-521072" hosting pod "kube-apiserver-no-preload-521072" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:20.731889   68676 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:20.830881   68676 pod_ready.go:98] node "no-preload-521072" hosting pod "kube-controller-manager-no-preload-521072" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:20.830909   68676 pod_ready.go:82] duration metric: took 99.009569ms for pod "kube-controller-manager-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	E0927 01:42:20.830918   68676 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-521072" hosting pod "kube-controller-manager-no-preload-521072" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:20.830923   68676 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-wkcb8" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:21.232434   68676 pod_ready.go:98] node "no-preload-521072" hosting pod "kube-proxy-wkcb8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:21.232463   68676 pod_ready.go:82] duration metric: took 401.530413ms for pod "kube-proxy-wkcb8" in "kube-system" namespace to be "Ready" ...
	E0927 01:42:21.232473   68676 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-521072" hosting pod "kube-proxy-wkcb8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:21.232485   68676 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:21.630791   68676 pod_ready.go:98] node "no-preload-521072" hosting pod "kube-scheduler-no-preload-521072" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:21.630818   68676 pod_ready.go:82] duration metric: took 398.325039ms for pod "kube-scheduler-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	E0927 01:42:21.630829   68676 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-521072" hosting pod "kube-scheduler-no-preload-521072" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:21.630836   68676 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:22.032173   68676 pod_ready.go:98] node "no-preload-521072" hosting pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:22.032200   68676 pod_ready.go:82] duration metric: took 401.353533ms for pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace to be "Ready" ...
	E0927 01:42:22.032208   68676 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-521072" hosting pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:22.032215   68676 pod_ready.go:39] duration metric: took 1.318241972s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
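
Each pod_ready.go wait above short-circuits because the node itself is not yet Ready after the restart. The readiness test behind those messages comes down to inspecting the pod's PodReady condition; the small helper below (hypothetical name, using the standard k8s.io/api types) is a sketch of that check, not minikube's actual implementation.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether a pod's PodReady condition is True, which is the
// condition the pod_ready.go waits in the log are polling for.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{}
	pod.Status.Conditions = []corev1.PodCondition{{Type: corev1.PodReady, Status: corev1.ConditionFalse}}
	fmt.Println(isPodReady(pod)) // false, like the metrics-server pods above
}
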
	I0927 01:42:22.032233   68676 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0927 01:42:22.046872   68676 ops.go:34] apiserver oom_adj: -16
	I0927 01:42:22.046898   68676 kubeadm.go:597] duration metric: took 11.181875532s to restartPrimaryControlPlane
	I0927 01:42:22.046908   68676 kubeadm.go:394] duration metric: took 11.235909243s to StartCluster
	I0927 01:42:22.046923   68676 settings.go:142] acquiring lock: {Name:mk5dca3ab86dd3a71947d9d84c3d32131258c6f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:42:22.046984   68676 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 01:42:22.048611   68676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/kubeconfig: {Name:mke01ed683bdb96463571316956510763878395f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:42:22.048864   68676 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 01:42:22.048932   68676 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0927 01:42:22.049029   68676 addons.go:69] Setting storage-provisioner=true in profile "no-preload-521072"
	I0927 01:42:22.049050   68676 addons.go:234] Setting addon storage-provisioner=true in "no-preload-521072"
	W0927 01:42:22.049060   68676 addons.go:243] addon storage-provisioner should already be in state true
	I0927 01:42:22.049066   68676 addons.go:69] Setting default-storageclass=true in profile "no-preload-521072"
	I0927 01:42:22.049088   68676 host.go:66] Checking if "no-preload-521072" exists ...
	I0927 01:42:22.049092   68676 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-521072"
	I0927 01:42:22.049096   68676 addons.go:69] Setting metrics-server=true in profile "no-preload-521072"
	I0927 01:42:22.049117   68676 addons.go:234] Setting addon metrics-server=true in "no-preload-521072"
	I0927 01:42:22.049123   68676 config.go:182] Loaded profile config "no-preload-521072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	W0927 01:42:22.049134   68676 addons.go:243] addon metrics-server should already be in state true
	I0927 01:42:22.049167   68676 host.go:66] Checking if "no-preload-521072" exists ...
	I0927 01:42:22.049423   68676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:42:22.049455   68676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:42:22.049478   68676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:42:22.049507   68676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:42:22.049535   68676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:42:22.049555   68676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:42:22.050564   68676 out.go:177] * Verifying Kubernetes components...
	I0927 01:42:22.051717   68676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:42:22.088020   68676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34035
	I0927 01:42:22.088454   68676 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:42:22.088964   68676 main.go:141] libmachine: Using API Version  1
	I0927 01:42:22.088985   68676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:42:22.089333   68676 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:42:22.089793   68676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:42:22.089825   68676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:42:22.091735   68676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40053
	I0927 01:42:22.091853   68676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45581
	I0927 01:42:22.092236   68676 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:42:22.092295   68676 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:42:22.092659   68676 main.go:141] libmachine: Using API Version  1
	I0927 01:42:22.092677   68676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:42:22.092817   68676 main.go:141] libmachine: Using API Version  1
	I0927 01:42:22.092840   68676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:42:22.093170   68676 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:42:22.093344   68676 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:42:22.093387   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetState
	I0927 01:42:22.093922   68676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:42:22.093949   68676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:42:22.097310   68676 addons.go:234] Setting addon default-storageclass=true in "no-preload-521072"
	W0927 01:42:22.097333   68676 addons.go:243] addon default-storageclass should already be in state true
	I0927 01:42:22.097368   68676 host.go:66] Checking if "no-preload-521072" exists ...
	I0927 01:42:22.097705   68676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:42:22.097747   68676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:42:22.110628   68676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34585
	I0927 01:42:22.111053   68676 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:42:22.111604   68676 main.go:141] libmachine: Using API Version  1
	I0927 01:42:22.111629   68676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:42:22.112113   68676 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:42:22.112329   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetState
	I0927 01:42:22.113354   68676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43947
	I0927 01:42:22.114009   68676 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:42:22.114749   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	I0927 01:42:22.115666   68676 main.go:141] libmachine: Using API Version  1
	I0927 01:42:22.115690   68676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:42:22.116105   68676 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:42:22.116374   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetState
	I0927 01:42:22.116862   68676 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0927 01:42:22.118124   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	I0927 01:42:22.118135   68676 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0927 01:42:22.118162   68676 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0927 01:42:22.118180   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:42:22.119866   68676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38775
	I0927 01:42:22.120317   68676 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:42:22.120908   68676 main.go:141] libmachine: Using API Version  1
	I0927 01:42:22.120931   68676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:42:22.121113   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:42:22.121319   68676 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:42:22.121556   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:42:22.121576   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:42:22.122025   68676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:42:22.122051   68676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:42:22.122280   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:42:22.122487   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:42:22.122652   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:42:22.122781   68676 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/no-preload-521072/id_rsa Username:docker}
	I0927 01:42:22.126076   68676 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:42:17.459443   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:17.959426   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:18.460250   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:18.959989   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:19.459981   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:19.959969   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:20.459758   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:20.959440   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:21.460115   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:21.959238   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:18.521751   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:21.020226   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:23.021393   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:22.127430   68676 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 01:42:22.127446   68676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0927 01:42:22.127460   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:42:22.130498   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:42:22.131040   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:42:22.131061   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:42:22.131357   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:42:22.131544   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:42:22.131670   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:42:22.131997   68676 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/no-preload-521072/id_rsa Username:docker}
	I0927 01:42:22.138657   68676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44875
	I0927 01:42:22.138987   68676 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:42:22.139420   68676 main.go:141] libmachine: Using API Version  1
	I0927 01:42:22.139438   68676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:42:22.139824   68676 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:42:22.139998   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetState
	I0927 01:42:22.141454   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	I0927 01:42:22.141664   68676 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0927 01:42:22.141673   68676 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0927 01:42:22.141683   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:42:22.144221   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:42:22.144651   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:42:22.144670   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:42:22.144765   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:42:22.144931   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:42:22.145071   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:42:22.145208   68676 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/no-preload-521072/id_rsa Username:docker}
	I0927 01:42:22.244289   68676 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 01:42:22.261345   68676 node_ready.go:35] waiting up to 6m0s for node "no-preload-521072" to be "Ready" ...
	I0927 01:42:22.365923   68676 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0927 01:42:22.365953   68676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0927 01:42:22.387392   68676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 01:42:22.389353   68676 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0927 01:42:22.389379   68676 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0927 01:42:22.406994   68676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0927 01:42:22.491559   68676 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 01:42:22.491581   68676 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0927 01:42:22.586476   68676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 01:42:23.660676   68676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.273241029s)
	I0927 01:42:23.660733   68676 main.go:141] libmachine: Making call to close driver server
	I0927 01:42:23.660750   68676 main.go:141] libmachine: (no-preload-521072) Calling .Close
	I0927 01:42:23.660732   68676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.253706672s)
	I0927 01:42:23.660831   68676 main.go:141] libmachine: Making call to close driver server
	I0927 01:42:23.660841   68676 main.go:141] libmachine: (no-preload-521072) Calling .Close
	I0927 01:42:23.660851   68676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.074315804s)
	I0927 01:42:23.661081   68676 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:42:23.661098   68676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:42:23.661109   68676 main.go:141] libmachine: Making call to close driver server
	I0927 01:42:23.661108   68676 main.go:141] libmachine: Making call to close driver server
	I0927 01:42:23.661118   68676 main.go:141] libmachine: (no-preload-521072) Calling .Close
	I0927 01:42:23.661153   68676 main.go:141] libmachine: (no-preload-521072) DBG | Closing plugin on server side
	I0927 01:42:23.661205   68676 main.go:141] libmachine: (no-preload-521072) DBG | Closing plugin on server side
	I0927 01:42:23.661161   68676 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:42:23.661223   68676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:42:23.661230   68676 main.go:141] libmachine: Making call to close driver server
	I0927 01:42:23.661238   68676 main.go:141] libmachine: (no-preload-521072) Calling .Close
	I0927 01:42:23.661125   68676 main.go:141] libmachine: (no-preload-521072) Calling .Close
	I0927 01:42:23.661607   68676 main.go:141] libmachine: (no-preload-521072) DBG | Closing plugin on server side
	I0927 01:42:23.661608   68676 main.go:141] libmachine: (no-preload-521072) DBG | Closing plugin on server side
	I0927 01:42:23.661621   68676 main.go:141] libmachine: (no-preload-521072) DBG | Closing plugin on server side
	I0927 01:42:23.661631   68676 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:42:23.661632   68676 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:42:23.661637   68676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:42:23.661641   68676 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:42:23.661645   68676 main.go:141] libmachine: Making call to close driver server
	I0927 01:42:23.661649   68676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:42:23.661650   68676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:42:23.661653   68676 main.go:141] libmachine: (no-preload-521072) Calling .Close
	I0927 01:42:23.661852   68676 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:42:23.661866   68676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:42:23.661874   68676 addons.go:475] Verifying addon metrics-server=true in "no-preload-521072"
	I0927 01:42:23.661917   68676 main.go:141] libmachine: (no-preload-521072) DBG | Closing plugin on server side
	I0927 01:42:23.668484   68676 main.go:141] libmachine: Making call to close driver server
	I0927 01:42:23.668499   68676 main.go:141] libmachine: (no-preload-521072) Calling .Close
	I0927 01:42:23.668711   68676 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:42:23.668726   68676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:42:23.668743   68676 main.go:141] libmachine: (no-preload-521072) DBG | Closing plugin on server side
	I0927 01:42:23.670758   68676 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0927 01:42:23.672072   68676 addons.go:510] duration metric: took 1.62313879s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
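	[editor's note] For readers skimming the log: the addon enable sequence above boils down to copying the manifest files into /etc/kubernetes/addons/ on the node and running the bundled kubectl against the cluster's kubeconfig. Below is a minimal, hypothetical Go sketch of that apply step, with the binary and manifest paths copied from the log lines above; it runs locally and skips the sudo/SSH plumbing, and is not the minikube source.

	// Illustrative sketch only: replays the `kubectl apply -f ...` step seen in the log.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// applyAddonManifests runs `kubectl apply -f <m1> -f <m2> ...` with the given kubeconfig.
	func applyAddonManifests(kubectl, kubeconfig string, manifests []string) error {
		args := []string{"apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		cmd := exec.Command(kubectl, args...)
		cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
		}
		return nil
	}

	func main() {
		// Paths copied from the log lines above.
		err := applyAddonManifests(
			"/var/lib/minikube/binaries/v1.31.1/kubectl",
			"/var/lib/minikube/kubeconfig",
			[]string{
				"/etc/kubernetes/addons/metrics-apiservice.yaml",
				"/etc/kubernetes/addons/metrics-server-deployment.yaml",
				"/etc/kubernetes/addons/metrics-server-rbac.yaml",
				"/etc/kubernetes/addons/metrics-server-service.yaml",
			},
		)
		if err != nil {
			fmt.Println(err)
		}
	}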
	I0927 01:42:24.265426   68676 node_ready.go:53] node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:21.042193   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:23.043831   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:25.546335   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:22.460161   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:22.959177   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:23.459481   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:23.959221   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:23.959322   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:24.004970   69333 cri.go:89] found id: ""
	I0927 01:42:24.004999   69333 logs.go:276] 0 containers: []
	W0927 01:42:24.005010   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:24.005017   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:24.005076   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:24.041880   69333 cri.go:89] found id: ""
	I0927 01:42:24.041908   69333 logs.go:276] 0 containers: []
	W0927 01:42:24.041919   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:24.041926   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:24.041991   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:24.082295   69333 cri.go:89] found id: ""
	I0927 01:42:24.082318   69333 logs.go:276] 0 containers: []
	W0927 01:42:24.082325   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:24.082331   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:24.082385   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:24.119663   69333 cri.go:89] found id: ""
	I0927 01:42:24.119692   69333 logs.go:276] 0 containers: []
	W0927 01:42:24.119707   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:24.119714   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:24.119771   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:24.163893   69333 cri.go:89] found id: ""
	I0927 01:42:24.163920   69333 logs.go:276] 0 containers: []
	W0927 01:42:24.163932   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:24.163940   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:24.163999   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:24.200277   69333 cri.go:89] found id: ""
	I0927 01:42:24.200299   69333 logs.go:276] 0 containers: []
	W0927 01:42:24.200307   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:24.200312   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:24.200365   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:24.235039   69333 cri.go:89] found id: ""
	I0927 01:42:24.235059   69333 logs.go:276] 0 containers: []
	W0927 01:42:24.235066   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:24.235072   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:24.235132   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:24.275160   69333 cri.go:89] found id: ""
	I0927 01:42:24.275181   69333 logs.go:276] 0 containers: []
	W0927 01:42:24.275188   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:24.275196   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:24.275206   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:24.327432   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:24.327465   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:24.341113   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:24.341139   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:24.473741   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:24.473764   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:24.473779   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:24.545888   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:24.545923   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
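	[editor's note] The repeated `found id: ""` / `0 containers` cycles in this section come from probing CRI-O with crictl while the old-k8s-version control plane is still down, so every component query returns an empty ID list. A rough Go sketch of that probe follows, shelling out the same way the log does; it assumes crictl and sudo are available and is not the actual cri.go code.

	// Illustrative sketch only: counts container IDs returned by crictl for each component.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainerIDs mirrors `sudo crictl ps -a --quiet --name=<name>` and splits the IDs.
	func listContainerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(string(out), "\n") {
			if line = strings.TrimSpace(line); line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			ids, err := listContainerIDs(name)
			if err != nil {
				fmt.Println("crictl failed:", err)
				continue
			}
			fmt.Printf("%s: %d containers\n", name, len(ids))
		}
	}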
	I0927 01:42:27.086673   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:27.100552   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:27.100623   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:27.136182   69333 cri.go:89] found id: ""
	I0927 01:42:27.136207   69333 logs.go:276] 0 containers: []
	W0927 01:42:27.136215   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:27.136221   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:27.136289   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:27.173258   69333 cri.go:89] found id: ""
	I0927 01:42:27.173285   69333 logs.go:276] 0 containers: []
	W0927 01:42:27.173296   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:27.173303   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:27.173373   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:27.210481   69333 cri.go:89] found id: ""
	I0927 01:42:27.210514   69333 logs.go:276] 0 containers: []
	W0927 01:42:27.210526   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:27.210533   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:27.210586   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:27.245168   69333 cri.go:89] found id: ""
	I0927 01:42:27.245192   69333 logs.go:276] 0 containers: []
	W0927 01:42:27.245200   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:27.245206   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:27.245252   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:27.280494   69333 cri.go:89] found id: ""
	I0927 01:42:27.280522   69333 logs.go:276] 0 containers: []
	W0927 01:42:27.280531   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:27.280538   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:27.280596   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:27.314281   69333 cri.go:89] found id: ""
	I0927 01:42:27.314307   69333 logs.go:276] 0 containers: []
	W0927 01:42:27.314316   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:27.314322   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:27.314392   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:25.521413   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:28.019989   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:26.764721   68676 node_ready.go:53] node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:27.765574   68676 node_ready.go:49] node "no-preload-521072" has status "Ready":"True"
	I0927 01:42:27.765597   68676 node_ready.go:38] duration metric: took 5.504217374s for node "no-preload-521072" to be "Ready" ...
	I0927 01:42:27.765609   68676 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:42:27.772263   68676 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7q54t" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:27.777521   68676 pod_ready.go:93] pod "coredns-7c65d6cfc9-7q54t" in "kube-system" namespace has status "Ready":"True"
	I0927 01:42:27.777544   68676 pod_ready.go:82] duration metric: took 5.252259ms for pod "coredns-7c65d6cfc9-7q54t" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:27.777552   68676 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:27.781511   68676 pod_ready.go:93] pod "etcd-no-preload-521072" in "kube-system" namespace has status "Ready":"True"
	I0927 01:42:27.781528   68676 pod_ready.go:82] duration metric: took 3.970559ms for pod "etcd-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:27.781535   68676 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:27.785556   68676 pod_ready.go:93] pod "kube-apiserver-no-preload-521072" in "kube-system" namespace has status "Ready":"True"
	I0927 01:42:27.785572   68676 pod_ready.go:82] duration metric: took 4.032023ms for pod "kube-apiserver-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:27.785579   68676 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:29.792899   68676 pod_ready.go:103] pod "kube-controller-manager-no-preload-521072" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:28.041166   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:30.041766   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:27.350838   69333 cri.go:89] found id: ""
	I0927 01:42:27.350861   69333 logs.go:276] 0 containers: []
	W0927 01:42:27.350869   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:27.350874   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:27.350921   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:27.390146   69333 cri.go:89] found id: ""
	I0927 01:42:27.390175   69333 logs.go:276] 0 containers: []
	W0927 01:42:27.390186   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:27.390196   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:27.390206   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:27.446727   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:27.446756   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:27.461337   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:27.461365   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:27.533818   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:27.533839   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:27.533874   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:27.614325   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:27.614357   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:30.161303   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:30.179521   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:30.179590   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:30.221738   69333 cri.go:89] found id: ""
	I0927 01:42:30.221764   69333 logs.go:276] 0 containers: []
	W0927 01:42:30.221772   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:30.221778   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:30.221841   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:30.258316   69333 cri.go:89] found id: ""
	I0927 01:42:30.258349   69333 logs.go:276] 0 containers: []
	W0927 01:42:30.258359   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:30.258369   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:30.258427   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:30.297079   69333 cri.go:89] found id: ""
	I0927 01:42:30.297102   69333 logs.go:276] 0 containers: []
	W0927 01:42:30.297109   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:30.297114   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:30.297159   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:30.337969   69333 cri.go:89] found id: ""
	I0927 01:42:30.337995   69333 logs.go:276] 0 containers: []
	W0927 01:42:30.338007   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:30.338014   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:30.338075   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:30.375946   69333 cri.go:89] found id: ""
	I0927 01:42:30.375975   69333 logs.go:276] 0 containers: []
	W0927 01:42:30.375986   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:30.375993   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:30.376054   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:30.411673   69333 cri.go:89] found id: ""
	I0927 01:42:30.411700   69333 logs.go:276] 0 containers: []
	W0927 01:42:30.411710   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:30.411718   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:30.411765   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:30.447784   69333 cri.go:89] found id: ""
	I0927 01:42:30.447812   69333 logs.go:276] 0 containers: []
	W0927 01:42:30.447822   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:30.447830   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:30.447890   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:30.483164   69333 cri.go:89] found id: ""
	I0927 01:42:30.483191   69333 logs.go:276] 0 containers: []
	W0927 01:42:30.483202   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:30.483213   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:30.483229   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:30.533490   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:30.533522   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:30.547688   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:30.547722   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:30.626696   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:30.626720   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:30.626733   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:30.708767   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:30.708809   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:30.020786   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:32.021243   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:32.292370   68676 pod_ready.go:103] pod "kube-controller-manager-no-preload-521072" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:32.791420   68676 pod_ready.go:93] pod "kube-controller-manager-no-preload-521072" in "kube-system" namespace has status "Ready":"True"
	I0927 01:42:32.791444   68676 pod_ready.go:82] duration metric: took 5.00585892s for pod "kube-controller-manager-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:32.791454   68676 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wkcb8" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:32.796509   68676 pod_ready.go:93] pod "kube-proxy-wkcb8" in "kube-system" namespace has status "Ready":"True"
	I0927 01:42:32.796528   68676 pod_ready.go:82] duration metric: took 5.067798ms for pod "kube-proxy-wkcb8" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:32.796536   68676 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:32.801041   68676 pod_ready.go:93] pod "kube-scheduler-no-preload-521072" in "kube-system" namespace has status "Ready":"True"
	I0927 01:42:32.801066   68676 pod_ready.go:82] duration metric: took 4.523416ms for pod "kube-scheduler-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:32.801087   68676 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:34.807359   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
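	[editor's note] The pod_ready lines above are a simple poll of each system pod's Ready condition until it flips to True or the 6m0s budget runs out; metrics-server stays "False" here, which is what ultimately fails the test. Below is a minimal client-go sketch of such a loop, with the kubeconfig path and pod name taken from the log purely for illustration; it is not the actual pod_ready.go implementation.

	// Illustrative sketch only: polls a pod's Ready condition with client-go.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady re-fetches the pod every couple of seconds and checks its Ready condition.
	func podReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) (bool, error) {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return true, nil
					}
				}
			}
			time.Sleep(2 * time.Second) // re-check interval; the log shows checks a few seconds apart
		}
		return false, fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ok, err := podReady(context.Background(), cs, "kube-system", "metrics-server-6867b74b74-cc9pp", 6*time.Minute)
		fmt.Println(ok, err)
	}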
	I0927 01:42:32.042216   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:34.541390   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:33.250034   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:33.263733   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:33.263805   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:33.298038   69333 cri.go:89] found id: ""
	I0927 01:42:33.298063   69333 logs.go:276] 0 containers: []
	W0927 01:42:33.298071   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:33.298077   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:33.298139   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:33.338027   69333 cri.go:89] found id: ""
	I0927 01:42:33.338050   69333 logs.go:276] 0 containers: []
	W0927 01:42:33.338058   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:33.338064   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:33.338118   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:33.376470   69333 cri.go:89] found id: ""
	I0927 01:42:33.376496   69333 logs.go:276] 0 containers: []
	W0927 01:42:33.376504   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:33.376509   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:33.376568   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:33.419831   69333 cri.go:89] found id: ""
	I0927 01:42:33.419859   69333 logs.go:276] 0 containers: []
	W0927 01:42:33.419868   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:33.419874   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:33.419929   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:33.461029   69333 cri.go:89] found id: ""
	I0927 01:42:33.461057   69333 logs.go:276] 0 containers: []
	W0927 01:42:33.461076   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:33.461085   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:33.461158   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:33.499968   69333 cri.go:89] found id: ""
	I0927 01:42:33.499996   69333 logs.go:276] 0 containers: []
	W0927 01:42:33.500007   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:33.500015   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:33.500073   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:33.552601   69333 cri.go:89] found id: ""
	I0927 01:42:33.552625   69333 logs.go:276] 0 containers: []
	W0927 01:42:33.552633   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:33.552640   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:33.552702   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:33.589491   69333 cri.go:89] found id: ""
	I0927 01:42:33.589520   69333 logs.go:276] 0 containers: []
	W0927 01:42:33.589529   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:33.589540   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:33.589554   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:33.643437   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:33.643470   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:33.657819   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:33.657846   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:33.728369   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:33.728393   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:33.728407   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:33.803661   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:33.803691   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:36.343598   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:36.357879   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:36.357937   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:36.398936   69333 cri.go:89] found id: ""
	I0927 01:42:36.398958   69333 logs.go:276] 0 containers: []
	W0927 01:42:36.398966   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:36.398971   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:36.399016   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:36.438897   69333 cri.go:89] found id: ""
	I0927 01:42:36.438921   69333 logs.go:276] 0 containers: []
	W0927 01:42:36.438928   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:36.438935   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:36.438979   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:36.476779   69333 cri.go:89] found id: ""
	I0927 01:42:36.476807   69333 logs.go:276] 0 containers: []
	W0927 01:42:36.476817   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:36.476824   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:36.476882   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:36.514216   69333 cri.go:89] found id: ""
	I0927 01:42:36.514238   69333 logs.go:276] 0 containers: []
	W0927 01:42:36.514245   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:36.514251   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:36.514306   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:36.551800   69333 cri.go:89] found id: ""
	I0927 01:42:36.551827   69333 logs.go:276] 0 containers: []
	W0927 01:42:36.551835   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:36.551841   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:36.551900   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:36.592060   69333 cri.go:89] found id: ""
	I0927 01:42:36.592086   69333 logs.go:276] 0 containers: []
	W0927 01:42:36.592096   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:36.592101   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:36.592172   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:36.633485   69333 cri.go:89] found id: ""
	I0927 01:42:36.633507   69333 logs.go:276] 0 containers: []
	W0927 01:42:36.633514   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:36.633519   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:36.633571   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:36.667288   69333 cri.go:89] found id: ""
	I0927 01:42:36.667355   69333 logs.go:276] 0 containers: []
	W0927 01:42:36.667366   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:36.667377   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:36.667391   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:36.722230   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:36.722263   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:36.735927   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:36.735952   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:36.808852   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:36.808872   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:36.808887   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:36.889259   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:36.889299   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:34.520143   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:36.521254   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:36.808388   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:39.308743   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:36.542085   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:39.042119   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:39.438818   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:39.459082   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:39.459150   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:39.499966   69333 cri.go:89] found id: ""
	I0927 01:42:39.499991   69333 logs.go:276] 0 containers: []
	W0927 01:42:39.499999   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:39.500004   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:39.500050   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:39.540828   69333 cri.go:89] found id: ""
	I0927 01:42:39.540850   69333 logs.go:276] 0 containers: []
	W0927 01:42:39.540857   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:39.540864   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:39.540972   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:39.575841   69333 cri.go:89] found id: ""
	I0927 01:42:39.575868   69333 logs.go:276] 0 containers: []
	W0927 01:42:39.575879   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:39.575886   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:39.575958   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:39.611105   69333 cri.go:89] found id: ""
	I0927 01:42:39.611184   69333 logs.go:276] 0 containers: []
	W0927 01:42:39.611202   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:39.611212   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:39.611268   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:39.644772   69333 cri.go:89] found id: ""
	I0927 01:42:39.644800   69333 logs.go:276] 0 containers: []
	W0927 01:42:39.644808   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:39.644813   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:39.644868   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:39.679875   69333 cri.go:89] found id: ""
	I0927 01:42:39.679901   69333 logs.go:276] 0 containers: []
	W0927 01:42:39.679912   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:39.679919   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:39.679987   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:39.716410   69333 cri.go:89] found id: ""
	I0927 01:42:39.716440   69333 logs.go:276] 0 containers: []
	W0927 01:42:39.716450   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:39.716457   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:39.716525   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:39.750391   69333 cri.go:89] found id: ""
	I0927 01:42:39.750418   69333 logs.go:276] 0 containers: []
	W0927 01:42:39.750428   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:39.750439   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:39.750455   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:39.822365   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:39.822401   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:39.822416   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:39.905982   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:39.906017   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:39.952310   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:39.952339   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:40.000523   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:40.000554   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:39.021945   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:41.519787   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:41.807532   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:44.307548   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:41.042260   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:43.042762   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:45.542112   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:42.514379   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:42.528312   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:42.528377   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:42.562427   69333 cri.go:89] found id: ""
	I0927 01:42:42.562455   69333 logs.go:276] 0 containers: []
	W0927 01:42:42.562463   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:42.562469   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:42.562526   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:42.599969   69333 cri.go:89] found id: ""
	I0927 01:42:42.599993   69333 logs.go:276] 0 containers: []
	W0927 01:42:42.600002   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:42.600007   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:42.600053   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:42.636338   69333 cri.go:89] found id: ""
	I0927 01:42:42.636364   69333 logs.go:276] 0 containers: []
	W0927 01:42:42.636371   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:42.636376   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:42.636431   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:42.670781   69333 cri.go:89] found id: ""
	I0927 01:42:42.670809   69333 logs.go:276] 0 containers: []
	W0927 01:42:42.670818   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:42.670823   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:42.670880   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:42.707334   69333 cri.go:89] found id: ""
	I0927 01:42:42.707364   69333 logs.go:276] 0 containers: []
	W0927 01:42:42.707375   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:42.707431   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:42.707503   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:42.743063   69333 cri.go:89] found id: ""
	I0927 01:42:42.743092   69333 logs.go:276] 0 containers: []
	W0927 01:42:42.743103   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:42.743139   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:42.743192   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:42.778593   69333 cri.go:89] found id: ""
	I0927 01:42:42.778617   69333 logs.go:276] 0 containers: []
	W0927 01:42:42.778628   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:42.778634   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:42.778700   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:42.814261   69333 cri.go:89] found id: ""
	I0927 01:42:42.814286   69333 logs.go:276] 0 containers: []
	W0927 01:42:42.814293   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:42.814300   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:42.814310   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:42.863982   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:42.864011   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:42.877151   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:42.877175   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:42.959233   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:42.959251   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:42.959262   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:43.038773   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:43.038805   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:45.581272   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:45.596103   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:45.596167   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:45.639507   69333 cri.go:89] found id: ""
	I0927 01:42:45.639531   69333 logs.go:276] 0 containers: []
	W0927 01:42:45.639539   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:45.639545   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:45.639611   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:45.678455   69333 cri.go:89] found id: ""
	I0927 01:42:45.678482   69333 logs.go:276] 0 containers: []
	W0927 01:42:45.678489   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:45.678495   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:45.678539   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:45.722094   69333 cri.go:89] found id: ""
	I0927 01:42:45.722123   69333 logs.go:276] 0 containers: []
	W0927 01:42:45.722135   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:45.722142   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:45.722211   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:45.758091   69333 cri.go:89] found id: ""
	I0927 01:42:45.758118   69333 logs.go:276] 0 containers: []
	W0927 01:42:45.758127   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:45.758133   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:45.758183   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:45.792976   69333 cri.go:89] found id: ""
	I0927 01:42:45.793010   69333 logs.go:276] 0 containers: []
	W0927 01:42:45.793021   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:45.793028   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:45.793089   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:45.830235   69333 cri.go:89] found id: ""
	I0927 01:42:45.830262   69333 logs.go:276] 0 containers: []
	W0927 01:42:45.830273   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:45.830280   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:45.830324   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:45.865896   69333 cri.go:89] found id: ""
	I0927 01:42:45.865928   69333 logs.go:276] 0 containers: []
	W0927 01:42:45.865938   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:45.865946   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:45.866000   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:45.900058   69333 cri.go:89] found id: ""
	I0927 01:42:45.900088   69333 logs.go:276] 0 containers: []
	W0927 01:42:45.900099   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:45.900108   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:45.900119   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:45.972986   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:45.973015   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:45.973030   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:46.048703   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:46.048732   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:46.087483   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:46.087515   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:46.136833   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:46.136866   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:43.520998   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:45.522532   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:48.020912   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:46.307637   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:48.808963   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:48.041757   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:50.042259   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:48.650738   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:48.665847   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:48.665930   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:48.704304   69333 cri.go:89] found id: ""
	I0927 01:42:48.704328   69333 logs.go:276] 0 containers: []
	W0927 01:42:48.704337   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:48.704342   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:48.704402   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:48.742469   69333 cri.go:89] found id: ""
	I0927 01:42:48.742499   69333 logs.go:276] 0 containers: []
	W0927 01:42:48.742510   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:48.742517   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:48.742579   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:48.782154   69333 cri.go:89] found id: ""
	I0927 01:42:48.782183   69333 logs.go:276] 0 containers: []
	W0927 01:42:48.782194   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:48.782201   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:48.782261   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:48.821686   69333 cri.go:89] found id: ""
	I0927 01:42:48.821709   69333 logs.go:276] 0 containers: []
	W0927 01:42:48.821717   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:48.821723   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:48.821781   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:48.867072   69333 cri.go:89] found id: ""
	I0927 01:42:48.867099   69333 logs.go:276] 0 containers: []
	W0927 01:42:48.867109   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:48.867123   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:48.867191   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:48.908215   69333 cri.go:89] found id: ""
	I0927 01:42:48.908241   69333 logs.go:276] 0 containers: []
	W0927 01:42:48.908249   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:48.908255   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:48.908312   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:48.945260   69333 cri.go:89] found id: ""
	I0927 01:42:48.945291   69333 logs.go:276] 0 containers: []
	W0927 01:42:48.945303   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:48.945310   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:48.945375   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:48.983285   69333 cri.go:89] found id: ""
	I0927 01:42:48.983325   69333 logs.go:276] 0 containers: []
	W0927 01:42:48.983333   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:48.983343   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:48.983354   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:49.039437   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:49.039472   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:49.053546   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:49.053571   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:49.129264   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:49.129286   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:49.129299   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:49.216967   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:49.216999   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:51.758143   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:51.771417   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:51.771485   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:51.806120   69333 cri.go:89] found id: ""
	I0927 01:42:51.806144   69333 logs.go:276] 0 containers: []
	W0927 01:42:51.806154   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:51.806161   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:51.806219   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:51.840301   69333 cri.go:89] found id: ""
	I0927 01:42:51.840330   69333 logs.go:276] 0 containers: []
	W0927 01:42:51.840340   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:51.840348   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:51.840410   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:51.874908   69333 cri.go:89] found id: ""
	I0927 01:42:51.874934   69333 logs.go:276] 0 containers: []
	W0927 01:42:51.874944   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:51.874952   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:51.875018   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:51.910960   69333 cri.go:89] found id: ""
	I0927 01:42:51.910988   69333 logs.go:276] 0 containers: []
	W0927 01:42:51.910999   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:51.911006   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:51.911064   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:51.945206   69333 cri.go:89] found id: ""
	I0927 01:42:51.945228   69333 logs.go:276] 0 containers: []
	W0927 01:42:51.945236   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:51.945241   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:51.945289   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:51.979262   69333 cri.go:89] found id: ""
	I0927 01:42:51.979296   69333 logs.go:276] 0 containers: []
	W0927 01:42:51.979322   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:51.979328   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:51.979384   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:52.013407   69333 cri.go:89] found id: ""
	I0927 01:42:52.013438   69333 logs.go:276] 0 containers: []
	W0927 01:42:52.013449   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:52.013456   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:52.013510   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:52.048928   69333 cri.go:89] found id: ""
	I0927 01:42:52.048951   69333 logs.go:276] 0 containers: []
	W0927 01:42:52.048961   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:52.048970   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:52.048984   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:52.101043   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:52.101083   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:52.115903   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:52.115938   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:52.197147   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:52.197168   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:52.197184   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:52.276352   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:52.276393   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:50.021730   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:52.520362   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:51.306847   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:53.307714   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:52.042729   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:54.544118   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:54.819649   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:54.832262   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:54.832344   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:54.867495   69333 cri.go:89] found id: ""
	I0927 01:42:54.867523   69333 logs.go:276] 0 containers: []
	W0927 01:42:54.867533   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:54.867539   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:54.867585   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:54.899705   69333 cri.go:89] found id: ""
	I0927 01:42:54.899732   69333 logs.go:276] 0 containers: []
	W0927 01:42:54.899742   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:54.899749   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:54.899817   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:54.939216   69333 cri.go:89] found id: ""
	I0927 01:42:54.939235   69333 logs.go:276] 0 containers: []
	W0927 01:42:54.939244   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:54.939249   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:54.939293   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:54.976603   69333 cri.go:89] found id: ""
	I0927 01:42:54.976632   69333 logs.go:276] 0 containers: []
	W0927 01:42:54.976643   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:54.976651   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:54.976718   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:55.011617   69333 cri.go:89] found id: ""
	I0927 01:42:55.011649   69333 logs.go:276] 0 containers: []
	W0927 01:42:55.011660   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:55.011667   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:55.011729   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:55.048836   69333 cri.go:89] found id: ""
	I0927 01:42:55.048861   69333 logs.go:276] 0 containers: []
	W0927 01:42:55.048869   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:55.048885   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:55.048955   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:55.085105   69333 cri.go:89] found id: ""
	I0927 01:42:55.085133   69333 logs.go:276] 0 containers: []
	W0927 01:42:55.085144   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:55.085151   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:55.085205   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:55.122536   69333 cri.go:89] found id: ""
	I0927 01:42:55.122564   69333 logs.go:276] 0 containers: []
	W0927 01:42:55.122575   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:55.122585   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:55.122600   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:55.197191   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:55.197216   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:55.197230   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:55.275914   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:55.275950   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:55.315043   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:55.315071   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:55.365808   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:55.365846   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:55.025083   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:57.520041   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:55.807377   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:57.807419   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:59.808202   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:57.042511   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:59.541628   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:57.880934   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:57.894276   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:57.894337   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:57.933299   69333 cri.go:89] found id: ""
	I0927 01:42:57.933326   69333 logs.go:276] 0 containers: []
	W0927 01:42:57.933336   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:57.933343   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:57.933403   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:57.969070   69333 cri.go:89] found id: ""
	I0927 01:42:57.969094   69333 logs.go:276] 0 containers: []
	W0927 01:42:57.969102   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:57.969107   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:57.969151   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:58.009432   69333 cri.go:89] found id: ""
	I0927 01:42:58.009453   69333 logs.go:276] 0 containers: []
	W0927 01:42:58.009462   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:58.009468   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:58.009524   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:58.046507   69333 cri.go:89] found id: ""
	I0927 01:42:58.046526   69333 logs.go:276] 0 containers: []
	W0927 01:42:58.046533   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:58.046539   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:58.046603   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:58.079910   69333 cri.go:89] found id: ""
	I0927 01:42:58.079936   69333 logs.go:276] 0 containers: []
	W0927 01:42:58.079947   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:58.079954   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:58.080015   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:58.115971   69333 cri.go:89] found id: ""
	I0927 01:42:58.115994   69333 logs.go:276] 0 containers: []
	W0927 01:42:58.116001   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:58.116007   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:58.116065   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:58.150512   69333 cri.go:89] found id: ""
	I0927 01:42:58.150536   69333 logs.go:276] 0 containers: []
	W0927 01:42:58.150544   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:58.150549   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:58.150608   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:58.183458   69333 cri.go:89] found id: ""
	I0927 01:42:58.183487   69333 logs.go:276] 0 containers: []
	W0927 01:42:58.183498   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:58.183506   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:58.183520   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:58.234404   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:58.234434   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:58.248387   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:58.248411   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:58.320751   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:58.320772   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:58.320783   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:58.401163   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:58.401212   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:00.943677   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:00.956739   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:00.956815   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:00.991020   69333 cri.go:89] found id: ""
	I0927 01:43:00.991042   69333 logs.go:276] 0 containers: []
	W0927 01:43:00.991051   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:00.991056   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:00.991113   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:01.031686   69333 cri.go:89] found id: ""
	I0927 01:43:01.031711   69333 logs.go:276] 0 containers: []
	W0927 01:43:01.031720   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:01.031726   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:01.031786   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:01.068783   69333 cri.go:89] found id: ""
	I0927 01:43:01.068813   69333 logs.go:276] 0 containers: []
	W0927 01:43:01.068824   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:01.068831   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:01.068890   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:01.108434   69333 cri.go:89] found id: ""
	I0927 01:43:01.108456   69333 logs.go:276] 0 containers: []
	W0927 01:43:01.108464   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:01.108469   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:01.108513   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:01.147574   69333 cri.go:89] found id: ""
	I0927 01:43:01.147596   69333 logs.go:276] 0 containers: []
	W0927 01:43:01.147604   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:01.147610   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:01.147660   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:01.188251   69333 cri.go:89] found id: ""
	I0927 01:43:01.188279   69333 logs.go:276] 0 containers: []
	W0927 01:43:01.188290   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:01.188297   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:01.188359   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:01.224901   69333 cri.go:89] found id: ""
	I0927 01:43:01.224944   69333 logs.go:276] 0 containers: []
	W0927 01:43:01.224964   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:01.224974   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:01.225052   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:01.262701   69333 cri.go:89] found id: ""
	I0927 01:43:01.262728   69333 logs.go:276] 0 containers: []
	W0927 01:43:01.262738   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:01.262749   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:01.262762   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:01.313872   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:01.313900   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:01.327809   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:01.327835   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:01.400864   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:01.400895   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:01.400909   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:01.478012   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:01.478045   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:59.520973   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:01.522457   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:02.308215   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:04.309111   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:01.543151   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:04.043201   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:04.018634   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:04.032732   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:04.032803   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:04.075258   69333 cri.go:89] found id: ""
	I0927 01:43:04.075285   69333 logs.go:276] 0 containers: []
	W0927 01:43:04.075293   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:04.075299   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:04.075381   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:04.108738   69333 cri.go:89] found id: ""
	I0927 01:43:04.108764   69333 logs.go:276] 0 containers: []
	W0927 01:43:04.108773   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:04.108779   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:04.108835   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:04.142115   69333 cri.go:89] found id: ""
	I0927 01:43:04.142145   69333 logs.go:276] 0 containers: []
	W0927 01:43:04.142155   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:04.142174   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:04.142249   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:04.184606   69333 cri.go:89] found id: ""
	I0927 01:43:04.184626   69333 logs.go:276] 0 containers: []
	W0927 01:43:04.184634   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:04.184639   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:04.184684   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:04.218391   69333 cri.go:89] found id: ""
	I0927 01:43:04.218420   69333 logs.go:276] 0 containers: []
	W0927 01:43:04.218428   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:04.218434   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:04.218482   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:04.253796   69333 cri.go:89] found id: ""
	I0927 01:43:04.253816   69333 logs.go:276] 0 containers: []
	W0927 01:43:04.253824   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:04.253829   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:04.253884   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:04.289147   69333 cri.go:89] found id: ""
	I0927 01:43:04.289170   69333 logs.go:276] 0 containers: []
	W0927 01:43:04.289179   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:04.289184   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:04.289245   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:04.329000   69333 cri.go:89] found id: ""
	I0927 01:43:04.329026   69333 logs.go:276] 0 containers: []
	W0927 01:43:04.329034   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:04.329042   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:04.329053   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:04.424255   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:04.424290   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:04.470746   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:04.470775   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:04.524208   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:04.524237   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:04.538338   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:04.538365   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:04.608713   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:07.109492   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:07.124253   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:07.124332   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:07.160443   69333 cri.go:89] found id: ""
	I0927 01:43:07.160470   69333 logs.go:276] 0 containers: []
	W0927 01:43:07.160481   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:07.160488   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:07.160554   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:07.195492   69333 cri.go:89] found id: ""
	I0927 01:43:07.195515   69333 logs.go:276] 0 containers: []
	W0927 01:43:07.195522   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:07.195527   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:07.195572   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:07.237678   69333 cri.go:89] found id: ""
	I0927 01:43:07.237708   69333 logs.go:276] 0 containers: []
	W0927 01:43:07.237718   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:07.237725   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:07.237792   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:07.274239   69333 cri.go:89] found id: ""
	I0927 01:43:07.274268   69333 logs.go:276] 0 containers: []
	W0927 01:43:07.274279   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:07.274286   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:07.274352   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:07.315099   69333 cri.go:89] found id: ""
	I0927 01:43:07.315124   69333 logs.go:276] 0 containers: []
	W0927 01:43:07.315131   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:07.315137   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:07.315190   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:04.020911   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:06.520371   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:06.807124   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:09.306568   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:06.543210   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:09.042166   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:07.356301   69333 cri.go:89] found id: ""
	I0927 01:43:07.356328   69333 logs.go:276] 0 containers: []
	W0927 01:43:07.356339   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:07.356347   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:07.356416   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:07.392204   69333 cri.go:89] found id: ""
	I0927 01:43:07.392232   69333 logs.go:276] 0 containers: []
	W0927 01:43:07.392242   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:07.392255   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:07.392312   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:07.428924   69333 cri.go:89] found id: ""
	I0927 01:43:07.428952   69333 logs.go:276] 0 containers: []
	W0927 01:43:07.428961   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:07.428969   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:07.428981   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:07.502507   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:07.502531   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:07.502545   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:07.584169   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:07.584201   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:07.623413   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:07.623446   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:07.675444   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:07.675480   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:10.190164   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:10.205315   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:10.205395   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:10.244030   69333 cri.go:89] found id: ""
	I0927 01:43:10.244053   69333 logs.go:276] 0 containers: []
	W0927 01:43:10.244063   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:10.244071   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:10.244134   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:10.280081   69333 cri.go:89] found id: ""
	I0927 01:43:10.280108   69333 logs.go:276] 0 containers: []
	W0927 01:43:10.280118   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:10.280125   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:10.280184   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:10.315428   69333 cri.go:89] found id: ""
	I0927 01:43:10.315454   69333 logs.go:276] 0 containers: []
	W0927 01:43:10.315464   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:10.315471   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:10.315531   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:10.352536   69333 cri.go:89] found id: ""
	I0927 01:43:10.352560   69333 logs.go:276] 0 containers: []
	W0927 01:43:10.352567   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:10.352574   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:10.352634   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:10.388846   69333 cri.go:89] found id: ""
	I0927 01:43:10.388870   69333 logs.go:276] 0 containers: []
	W0927 01:43:10.388880   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:10.388887   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:10.388951   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:10.427746   69333 cri.go:89] found id: ""
	I0927 01:43:10.427771   69333 logs.go:276] 0 containers: []
	W0927 01:43:10.427779   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:10.427784   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:10.427839   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:10.473126   69333 cri.go:89] found id: ""
	I0927 01:43:10.473155   69333 logs.go:276] 0 containers: []
	W0927 01:43:10.473166   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:10.473172   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:10.473234   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:10.511925   69333 cri.go:89] found id: ""
	I0927 01:43:10.511954   69333 logs.go:276] 0 containers: []
	W0927 01:43:10.511962   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:10.511971   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:10.511984   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:10.551428   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:10.551459   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:10.603655   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:10.603691   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:10.617232   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:10.617266   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:10.696559   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:10.696585   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:10.696599   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:09.020784   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:11.521429   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:11.307081   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:13.307876   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:11.043819   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:13.543289   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:13.273888   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:13.288271   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:13.288349   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:13.325796   69333 cri.go:89] found id: ""
	I0927 01:43:13.325823   69333 logs.go:276] 0 containers: []
	W0927 01:43:13.325831   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:13.325837   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:13.325893   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:13.360721   69333 cri.go:89] found id: ""
	I0927 01:43:13.360748   69333 logs.go:276] 0 containers: []
	W0927 01:43:13.360756   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:13.360762   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:13.360821   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:13.399722   69333 cri.go:89] found id: ""
	I0927 01:43:13.399749   69333 logs.go:276] 0 containers: []
	W0927 01:43:13.399756   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:13.399762   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:13.399826   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:13.437161   69333 cri.go:89] found id: ""
	I0927 01:43:13.437187   69333 logs.go:276] 0 containers: []
	W0927 01:43:13.437194   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:13.437200   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:13.437260   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:13.474735   69333 cri.go:89] found id: ""
	I0927 01:43:13.474758   69333 logs.go:276] 0 containers: []
	W0927 01:43:13.474766   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:13.474771   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:13.474822   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:13.528726   69333 cri.go:89] found id: ""
	I0927 01:43:13.528754   69333 logs.go:276] 0 containers: []
	W0927 01:43:13.528764   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:13.528771   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:13.528837   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:13.568617   69333 cri.go:89] found id: ""
	I0927 01:43:13.568642   69333 logs.go:276] 0 containers: []
	W0927 01:43:13.568651   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:13.568658   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:13.568726   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:13.605820   69333 cri.go:89] found id: ""
	I0927 01:43:13.605846   69333 logs.go:276] 0 containers: []
	W0927 01:43:13.605857   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:13.605868   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:13.605883   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:13.682586   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:13.682609   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:13.682624   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:13.764487   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:13.764522   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:13.809248   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:13.809280   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:13.861331   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:13.861371   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
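	The cycle above shows minikube probing each expected control-plane container by name and finding none. A minimal bash sketch of the same probe sequence, assuming a shell on the affected node (for example via "minikube ssh") and that crictl is on PATH, would be:

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      # same flags as the logged commands: list all containers, IDs only, filtered by name
      ids=$(sudo crictl ps -a --quiet --name="${name}")
      if [ -z "${ids}" ]; then
        echo "No container was found matching \"${name}\""
      else
        echo "${name}: ${ids}"
      fi
    done

	Every probe in the log returns an empty ID list, which is why each gathering pass falls back to journalctl, dmesg, and container-status output.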
	I0927 01:43:16.376981   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:16.391787   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:16.391842   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:16.432731   69333 cri.go:89] found id: ""
	I0927 01:43:16.432758   69333 logs.go:276] 0 containers: []
	W0927 01:43:16.432767   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:16.432775   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:16.432836   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:16.466769   69333 cri.go:89] found id: ""
	I0927 01:43:16.466798   69333 logs.go:276] 0 containers: []
	W0927 01:43:16.466806   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:16.466812   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:16.466860   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:16.501899   69333 cri.go:89] found id: ""
	I0927 01:43:16.501927   69333 logs.go:276] 0 containers: []
	W0927 01:43:16.501940   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:16.501947   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:16.502000   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:16.537356   69333 cri.go:89] found id: ""
	I0927 01:43:16.537383   69333 logs.go:276] 0 containers: []
	W0927 01:43:16.537393   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:16.537401   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:16.537460   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:16.573910   69333 cri.go:89] found id: ""
	I0927 01:43:16.573937   69333 logs.go:276] 0 containers: []
	W0927 01:43:16.573946   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:16.573951   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:16.574003   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:16.617780   69333 cri.go:89] found id: ""
	I0927 01:43:16.617808   69333 logs.go:276] 0 containers: []
	W0927 01:43:16.617818   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:16.617826   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:16.617884   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:16.653262   69333 cri.go:89] found id: ""
	I0927 01:43:16.653311   69333 logs.go:276] 0 containers: []
	W0927 01:43:16.653323   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:16.653331   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:16.653394   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:16.689861   69333 cri.go:89] found id: ""
	I0927 01:43:16.689889   69333 logs.go:276] 0 containers: []
	W0927 01:43:16.689898   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:16.689909   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:16.689922   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:16.765961   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:16.765986   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:16.766001   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:16.845195   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:16.845227   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:16.889159   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:16.889188   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:16.945523   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:16.945558   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
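	Each pass then collects the same diagnostics over SSH. The commands below are copied from the log and can be run by hand on the node to reproduce the collection step; the kubectl path assumes the minikube guest layout (/var/lib/minikube/binaries/v1.20.0) shown above:

    sudo journalctl -u crio -n 400
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig

	With no kube-apiserver container running, the describe-nodes call is the step that fails with "The connection to the server localhost:8443 was refused".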
	I0927 01:43:13.522444   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:16.021202   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:15.808665   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:18.307884   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:16.043071   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:18.541709   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:19.461132   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:19.475148   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:19.475234   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:19.511487   69333 cri.go:89] found id: ""
	I0927 01:43:19.511509   69333 logs.go:276] 0 containers: []
	W0927 01:43:19.511517   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:19.511522   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:19.511580   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:19.545726   69333 cri.go:89] found id: ""
	I0927 01:43:19.545750   69333 logs.go:276] 0 containers: []
	W0927 01:43:19.545756   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:19.545763   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:19.545830   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:19.581287   69333 cri.go:89] found id: ""
	I0927 01:43:19.581310   69333 logs.go:276] 0 containers: []
	W0927 01:43:19.581318   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:19.581323   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:19.581376   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:19.614179   69333 cri.go:89] found id: ""
	I0927 01:43:19.614205   69333 logs.go:276] 0 containers: []
	W0927 01:43:19.614215   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:19.614223   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:19.614286   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:19.648276   69333 cri.go:89] found id: ""
	I0927 01:43:19.648307   69333 logs.go:276] 0 containers: []
	W0927 01:43:19.648318   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:19.648330   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:19.648390   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:19.683051   69333 cri.go:89] found id: ""
	I0927 01:43:19.683083   69333 logs.go:276] 0 containers: []
	W0927 01:43:19.683094   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:19.683114   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:19.683166   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:19.716664   69333 cri.go:89] found id: ""
	I0927 01:43:19.716686   69333 logs.go:276] 0 containers: []
	W0927 01:43:19.716694   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:19.716700   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:19.716745   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:19.758948   69333 cri.go:89] found id: ""
	I0927 01:43:19.758969   69333 logs.go:276] 0 containers: []
	W0927 01:43:19.758976   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:19.758984   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:19.758996   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:19.797751   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:19.797777   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:19.853605   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:19.853635   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:19.867785   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:19.867815   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:19.950323   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:19.950350   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:19.950363   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:18.520291   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:20.520845   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:22.520886   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:20.808171   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:22.812047   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:21.043160   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:23.546462   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:22.555421   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:22.570013   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:22.570077   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:22.605007   69333 cri.go:89] found id: ""
	I0927 01:43:22.605034   69333 logs.go:276] 0 containers: []
	W0927 01:43:22.605055   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:22.605062   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:22.605122   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:22.640350   69333 cri.go:89] found id: ""
	I0927 01:43:22.640381   69333 logs.go:276] 0 containers: []
	W0927 01:43:22.640391   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:22.640406   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:22.640482   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:22.677464   69333 cri.go:89] found id: ""
	I0927 01:43:22.677489   69333 logs.go:276] 0 containers: []
	W0927 01:43:22.677499   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:22.677506   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:22.677567   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:22.721978   69333 cri.go:89] found id: ""
	I0927 01:43:22.722017   69333 logs.go:276] 0 containers: []
	W0927 01:43:22.722025   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:22.722032   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:22.722093   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:22.757694   69333 cri.go:89] found id: ""
	I0927 01:43:22.757720   69333 logs.go:276] 0 containers: []
	W0927 01:43:22.757729   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:22.757733   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:22.757781   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:22.793872   69333 cri.go:89] found id: ""
	I0927 01:43:22.793903   69333 logs.go:276] 0 containers: []
	W0927 01:43:22.793912   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:22.793920   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:22.793971   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:22.830620   69333 cri.go:89] found id: ""
	I0927 01:43:22.830652   69333 logs.go:276] 0 containers: []
	W0927 01:43:22.830662   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:22.830669   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:22.830732   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:22.867341   69333 cri.go:89] found id: ""
	I0927 01:43:22.867370   69333 logs.go:276] 0 containers: []
	W0927 01:43:22.867381   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:22.867392   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:22.867405   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:22.939592   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:22.939630   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:22.939654   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:23.016407   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:23.016447   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:23.058490   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:23.058522   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:23.109527   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:23.109560   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:25.626109   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:25.645254   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:25.645343   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:25.707951   69333 cri.go:89] found id: ""
	I0927 01:43:25.707979   69333 logs.go:276] 0 containers: []
	W0927 01:43:25.707989   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:25.707997   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:25.708057   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:25.771210   69333 cri.go:89] found id: ""
	I0927 01:43:25.771234   69333 logs.go:276] 0 containers: []
	W0927 01:43:25.771242   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:25.771248   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:25.771295   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:25.808206   69333 cri.go:89] found id: ""
	I0927 01:43:25.808235   69333 logs.go:276] 0 containers: []
	W0927 01:43:25.808245   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:25.808252   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:25.808311   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:25.842236   69333 cri.go:89] found id: ""
	I0927 01:43:25.842265   69333 logs.go:276] 0 containers: []
	W0927 01:43:25.842275   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:25.842283   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:25.842328   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:25.879220   69333 cri.go:89] found id: ""
	I0927 01:43:25.879248   69333 logs.go:276] 0 containers: []
	W0927 01:43:25.879256   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:25.879262   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:25.879333   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:25.913491   69333 cri.go:89] found id: ""
	I0927 01:43:25.913522   69333 logs.go:276] 0 containers: []
	W0927 01:43:25.913532   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:25.913537   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:25.913595   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:25.946867   69333 cri.go:89] found id: ""
	I0927 01:43:25.946887   69333 logs.go:276] 0 containers: []
	W0927 01:43:25.946894   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:25.946899   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:25.946943   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:25.983792   69333 cri.go:89] found id: ""
	I0927 01:43:25.983813   69333 logs.go:276] 0 containers: []
	W0927 01:43:25.983820   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:25.983828   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:25.983838   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:26.030169   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:26.030195   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:26.083242   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:26.083276   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:26.097109   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:26.097136   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:26.168675   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:26.168703   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:26.168715   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:24.521923   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:27.020053   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:25.308150   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:27.308307   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:29.308818   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:26.042436   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:28.541895   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:30.542444   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:28.750349   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:28.765211   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:28.765269   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:28.804760   69333 cri.go:89] found id: ""
	I0927 01:43:28.804784   69333 logs.go:276] 0 containers: []
	W0927 01:43:28.804792   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:28.804798   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:28.804865   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:28.842576   69333 cri.go:89] found id: ""
	I0927 01:43:28.842597   69333 logs.go:276] 0 containers: []
	W0927 01:43:28.842604   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:28.842612   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:28.842674   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:28.877498   69333 cri.go:89] found id: ""
	I0927 01:43:28.877529   69333 logs.go:276] 0 containers: []
	W0927 01:43:28.877541   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:28.877553   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:28.877615   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:28.912583   69333 cri.go:89] found id: ""
	I0927 01:43:28.912609   69333 logs.go:276] 0 containers: []
	W0927 01:43:28.912620   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:28.912627   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:28.912689   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:28.947995   69333 cri.go:89] found id: ""
	I0927 01:43:28.948019   69333 logs.go:276] 0 containers: []
	W0927 01:43:28.948030   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:28.948037   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:28.948135   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:28.984445   69333 cri.go:89] found id: ""
	I0927 01:43:28.984470   69333 logs.go:276] 0 containers: []
	W0927 01:43:28.984480   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:28.984488   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:28.984551   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:29.020345   69333 cri.go:89] found id: ""
	I0927 01:43:29.020374   69333 logs.go:276] 0 containers: []
	W0927 01:43:29.020385   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:29.020392   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:29.020451   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:29.056204   69333 cri.go:89] found id: ""
	I0927 01:43:29.056234   69333 logs.go:276] 0 containers: []
	W0927 01:43:29.056245   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:29.056257   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:29.056270   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:29.127936   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:29.127963   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:29.127980   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:29.205933   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:29.205981   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:29.248745   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:29.248777   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:29.302316   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:29.302348   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:31.817566   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:31.831179   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:31.831253   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:31.868480   69333 cri.go:89] found id: ""
	I0927 01:43:31.868507   69333 logs.go:276] 0 containers: []
	W0927 01:43:31.868517   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:31.868528   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:31.868588   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:31.901656   69333 cri.go:89] found id: ""
	I0927 01:43:31.901684   69333 logs.go:276] 0 containers: []
	W0927 01:43:31.901694   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:31.901701   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:31.901761   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:31.937101   69333 cri.go:89] found id: ""
	I0927 01:43:31.937133   69333 logs.go:276] 0 containers: []
	W0927 01:43:31.937145   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:31.937153   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:31.937210   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:31.970724   69333 cri.go:89] found id: ""
	I0927 01:43:31.970750   69333 logs.go:276] 0 containers: []
	W0927 01:43:31.970761   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:31.970768   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:31.970835   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:32.003704   69333 cri.go:89] found id: ""
	I0927 01:43:32.003736   69333 logs.go:276] 0 containers: []
	W0927 01:43:32.003747   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:32.003754   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:32.003813   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:32.038840   69333 cri.go:89] found id: ""
	I0927 01:43:32.038869   69333 logs.go:276] 0 containers: []
	W0927 01:43:32.038879   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:32.038886   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:32.038946   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:32.075506   69333 cri.go:89] found id: ""
	I0927 01:43:32.075534   69333 logs.go:276] 0 containers: []
	W0927 01:43:32.075545   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:32.075552   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:32.075603   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:32.112983   69333 cri.go:89] found id: ""
	I0927 01:43:32.113009   69333 logs.go:276] 0 containers: []
	W0927 01:43:32.113020   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:32.113031   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:32.113046   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:32.168192   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:32.168227   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:32.182702   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:32.182727   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:32.255797   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:32.255824   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:32.255835   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:32.336083   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:32.336115   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:29.022764   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:31.520495   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:31.308851   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:33.807870   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:33.041600   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:35.042193   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:34.880981   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:34.894904   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:34.894976   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:34.933459   69333 cri.go:89] found id: ""
	I0927 01:43:34.933482   69333 logs.go:276] 0 containers: []
	W0927 01:43:34.933490   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:34.933498   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:34.933555   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:34.966893   69333 cri.go:89] found id: ""
	I0927 01:43:34.966917   69333 logs.go:276] 0 containers: []
	W0927 01:43:34.966926   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:34.966933   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:34.966992   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:35.002878   69333 cri.go:89] found id: ""
	I0927 01:43:35.002899   69333 logs.go:276] 0 containers: []
	W0927 01:43:35.002907   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:35.002912   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:35.002970   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:35.039871   69333 cri.go:89] found id: ""
	I0927 01:43:35.039898   69333 logs.go:276] 0 containers: []
	W0927 01:43:35.039908   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:35.039915   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:35.039977   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:35.078229   69333 cri.go:89] found id: ""
	I0927 01:43:35.078255   69333 logs.go:276] 0 containers: []
	W0927 01:43:35.078267   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:35.078274   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:35.078342   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:35.114369   69333 cri.go:89] found id: ""
	I0927 01:43:35.114397   69333 logs.go:276] 0 containers: []
	W0927 01:43:35.114408   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:35.114415   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:35.114475   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:35.148072   69333 cri.go:89] found id: ""
	I0927 01:43:35.148100   69333 logs.go:276] 0 containers: []
	W0927 01:43:35.148110   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:35.148117   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:35.148188   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:35.184020   69333 cri.go:89] found id: ""
	I0927 01:43:35.184051   69333 logs.go:276] 0 containers: []
	W0927 01:43:35.184062   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:35.184073   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:35.184086   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:35.197332   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:35.197355   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:35.273860   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:35.273889   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:35.273904   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:35.354647   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:35.354680   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:35.392622   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:35.392651   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:33.521889   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:36.020067   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:38.021354   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:35.808365   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:38.307251   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:37.541793   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:40.043418   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:37.943024   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:37.957265   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:37.957329   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:37.991294   69333 cri.go:89] found id: ""
	I0927 01:43:37.991348   69333 logs.go:276] 0 containers: []
	W0927 01:43:37.991362   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:37.991368   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:37.991421   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:38.026960   69333 cri.go:89] found id: ""
	I0927 01:43:38.026981   69333 logs.go:276] 0 containers: []
	W0927 01:43:38.026990   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:38.026998   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:38.027057   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:38.063540   69333 cri.go:89] found id: ""
	I0927 01:43:38.063563   69333 logs.go:276] 0 containers: []
	W0927 01:43:38.063571   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:38.063576   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:38.063627   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:38.099554   69333 cri.go:89] found id: ""
	I0927 01:43:38.099602   69333 logs.go:276] 0 containers: []
	W0927 01:43:38.099613   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:38.099621   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:38.099689   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:38.136576   69333 cri.go:89] found id: ""
	I0927 01:43:38.136604   69333 logs.go:276] 0 containers: []
	W0927 01:43:38.136615   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:38.136623   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:38.136676   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:38.170411   69333 cri.go:89] found id: ""
	I0927 01:43:38.170441   69333 logs.go:276] 0 containers: []
	W0927 01:43:38.170452   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:38.170458   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:38.170512   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:38.211902   69333 cri.go:89] found id: ""
	I0927 01:43:38.211934   69333 logs.go:276] 0 containers: []
	W0927 01:43:38.211945   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:38.211951   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:38.212007   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:38.247850   69333 cri.go:89] found id: ""
	I0927 01:43:38.247875   69333 logs.go:276] 0 containers: []
	W0927 01:43:38.247885   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:38.247895   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:38.247913   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:38.329353   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:38.329384   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:38.369114   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:38.369148   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:38.420578   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:38.420613   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:38.434019   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:38.434050   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:38.517921   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:41.018609   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:41.032308   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:41.032370   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:41.068491   69333 cri.go:89] found id: ""
	I0927 01:43:41.068518   69333 logs.go:276] 0 containers: []
	W0927 01:43:41.068529   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:41.068536   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:41.068597   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:41.106527   69333 cri.go:89] found id: ""
	I0927 01:43:41.106555   69333 logs.go:276] 0 containers: []
	W0927 01:43:41.106565   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:41.106571   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:41.106634   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:41.142846   69333 cri.go:89] found id: ""
	I0927 01:43:41.142870   69333 logs.go:276] 0 containers: []
	W0927 01:43:41.142880   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:41.142887   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:41.142949   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:41.187499   69333 cri.go:89] found id: ""
	I0927 01:43:41.187525   69333 logs.go:276] 0 containers: []
	W0927 01:43:41.187536   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:41.187544   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:41.187606   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:41.226040   69333 cri.go:89] found id: ""
	I0927 01:43:41.226063   69333 logs.go:276] 0 containers: []
	W0927 01:43:41.226070   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:41.226076   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:41.226153   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:41.261399   69333 cri.go:89] found id: ""
	I0927 01:43:41.261429   69333 logs.go:276] 0 containers: []
	W0927 01:43:41.261440   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:41.261446   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:41.261493   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:41.300709   69333 cri.go:89] found id: ""
	I0927 01:43:41.300730   69333 logs.go:276] 0 containers: []
	W0927 01:43:41.300737   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:41.300743   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:41.300799   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:41.335725   69333 cri.go:89] found id: ""
	I0927 01:43:41.335751   69333 logs.go:276] 0 containers: []
	W0927 01:43:41.335759   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:41.335767   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:41.335776   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:41.387756   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:41.387788   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:41.401717   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:41.401743   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:41.479524   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:41.479548   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:41.479562   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:41.559926   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:41.559959   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:40.520642   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:42.521344   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:40.307769   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:42.807328   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:42.541384   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:44.548925   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
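	The interleaved pod_ready lines come from the other StartStop clusters polling the Ready condition of their metrics-server pods. A rough manual equivalent, using a pod name taken from the log (the namespace is kube-system; the kubeconfig/context to use is an assumption, it would be whichever cluster owns that pod), would be:

    # prints the pod's Ready condition status, e.g. "False" while the pod is not ready
    kubectl --namespace kube-system get pod metrics-server-6867b74b74-k8mdf \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

	The repeated "Ready":"False" entries show this condition staying False for the whole window covered by the excerpt.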
	I0927 01:43:44.107615   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:44.122628   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:44.122690   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:44.163496   69333 cri.go:89] found id: ""
	I0927 01:43:44.163521   69333 logs.go:276] 0 containers: []
	W0927 01:43:44.163529   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:44.163541   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:44.163588   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:44.203488   69333 cri.go:89] found id: ""
	I0927 01:43:44.203519   69333 logs.go:276] 0 containers: []
	W0927 01:43:44.203529   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:44.203535   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:44.203600   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:44.238111   69333 cri.go:89] found id: ""
	I0927 01:43:44.238141   69333 logs.go:276] 0 containers: []
	W0927 01:43:44.238148   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:44.238154   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:44.238221   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:44.272954   69333 cri.go:89] found id: ""
	I0927 01:43:44.272981   69333 logs.go:276] 0 containers: []
	W0927 01:43:44.272991   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:44.272998   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:44.273057   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:44.309700   69333 cri.go:89] found id: ""
	I0927 01:43:44.309719   69333 logs.go:276] 0 containers: []
	W0927 01:43:44.309726   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:44.309731   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:44.309776   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:44.344532   69333 cri.go:89] found id: ""
	I0927 01:43:44.344563   69333 logs.go:276] 0 containers: []
	W0927 01:43:44.344573   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:44.344580   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:44.344641   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:44.379354   69333 cri.go:89] found id: ""
	I0927 01:43:44.379380   69333 logs.go:276] 0 containers: []
	W0927 01:43:44.379391   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:44.379399   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:44.379461   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:44.415297   69333 cri.go:89] found id: ""
	I0927 01:43:44.415344   69333 logs.go:276] 0 containers: []
	W0927 01:43:44.415356   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:44.415366   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:44.415381   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:44.468570   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:44.468602   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:44.483419   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:44.483453   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:44.560718   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:44.560737   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:44.560753   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:44.641130   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:44.641173   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:47.188520   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:47.202189   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:47.202262   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:47.243051   69333 cri.go:89] found id: ""
	I0927 01:43:47.243075   69333 logs.go:276] 0 containers: []
	W0927 01:43:47.243083   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:47.243089   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:47.243155   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:47.280071   69333 cri.go:89] found id: ""
	I0927 01:43:47.280094   69333 logs.go:276] 0 containers: []
	W0927 01:43:47.280104   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:47.280111   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:47.280170   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:47.318458   69333 cri.go:89] found id: ""
	I0927 01:43:47.318482   69333 logs.go:276] 0 containers: []
	W0927 01:43:47.318492   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:47.318499   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:47.318551   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:45.023799   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:47.522945   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:45.307910   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:47.309781   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:49.807329   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:47.041371   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:49.042307   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:47.352891   69333 cri.go:89] found id: ""
	I0927 01:43:47.352916   69333 logs.go:276] 0 containers: []
	W0927 01:43:47.352926   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:47.352933   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:47.352997   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:47.387534   69333 cri.go:89] found id: ""
	I0927 01:43:47.387560   69333 logs.go:276] 0 containers: []
	W0927 01:43:47.387569   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:47.387578   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:47.387646   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:47.422221   69333 cri.go:89] found id: ""
	I0927 01:43:47.422254   69333 logs.go:276] 0 containers: []
	W0927 01:43:47.422265   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:47.422273   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:47.422330   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:47.459624   69333 cri.go:89] found id: ""
	I0927 01:43:47.459645   69333 logs.go:276] 0 containers: []
	W0927 01:43:47.459653   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:47.459659   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:47.459706   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:47.494322   69333 cri.go:89] found id: ""
	I0927 01:43:47.494347   69333 logs.go:276] 0 containers: []
	W0927 01:43:47.494355   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:47.494363   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:47.494375   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:47.508031   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:47.508056   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:47.583920   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:47.583952   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:47.583968   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:47.665533   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:47.665568   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:47.708423   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:47.708455   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:50.261602   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:50.275548   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:50.275607   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:50.311583   69333 cri.go:89] found id: ""
	I0927 01:43:50.311610   69333 logs.go:276] 0 containers: []
	W0927 01:43:50.311620   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:50.311627   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:50.311687   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:50.347686   69333 cri.go:89] found id: ""
	I0927 01:43:50.347709   69333 logs.go:276] 0 containers: []
	W0927 01:43:50.347721   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:50.347729   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:50.347778   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:50.386627   69333 cri.go:89] found id: ""
	I0927 01:43:50.386654   69333 logs.go:276] 0 containers: []
	W0927 01:43:50.386663   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:50.386669   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:50.386719   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:50.421512   69333 cri.go:89] found id: ""
	I0927 01:43:50.421538   69333 logs.go:276] 0 containers: []
	W0927 01:43:50.421547   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:50.421552   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:50.421603   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:50.461849   69333 cri.go:89] found id: ""
	I0927 01:43:50.461872   69333 logs.go:276] 0 containers: []
	W0927 01:43:50.461880   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:50.461885   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:50.461941   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:50.496517   69333 cri.go:89] found id: ""
	I0927 01:43:50.496540   69333 logs.go:276] 0 containers: []
	W0927 01:43:50.496548   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:50.496554   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:50.496600   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:50.532595   69333 cri.go:89] found id: ""
	I0927 01:43:50.532619   69333 logs.go:276] 0 containers: []
	W0927 01:43:50.532630   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:50.532638   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:50.532687   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:50.573213   69333 cri.go:89] found id: ""
	I0927 01:43:50.573241   69333 logs.go:276] 0 containers: []
	W0927 01:43:50.573252   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:50.573262   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:50.573275   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:50.625600   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:50.625633   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:50.639512   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:50.639535   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:50.708393   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:50.708415   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:50.708436   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:50.789812   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:50.789845   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:50.020837   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:52.021314   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:51.807713   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:54.308918   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:51.541348   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:53.542994   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:53.335858   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:53.349369   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:53.349441   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:53.386922   69333 cri.go:89] found id: ""
	I0927 01:43:53.386947   69333 logs.go:276] 0 containers: []
	W0927 01:43:53.386955   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:53.386961   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:53.387007   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:53.423614   69333 cri.go:89] found id: ""
	I0927 01:43:53.423640   69333 logs.go:276] 0 containers: []
	W0927 01:43:53.423651   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:53.423658   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:53.423721   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:53.463245   69333 cri.go:89] found id: ""
	I0927 01:43:53.463265   69333 logs.go:276] 0 containers: []
	W0927 01:43:53.463273   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:53.463280   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:53.463344   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:53.502093   69333 cri.go:89] found id: ""
	I0927 01:43:53.502123   69333 logs.go:276] 0 containers: []
	W0927 01:43:53.502133   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:53.502140   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:53.502196   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:53.538616   69333 cri.go:89] found id: ""
	I0927 01:43:53.538641   69333 logs.go:276] 0 containers: []
	W0927 01:43:53.538652   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:53.538659   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:53.538716   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:53.578580   69333 cri.go:89] found id: ""
	I0927 01:43:53.578609   69333 logs.go:276] 0 containers: []
	W0927 01:43:53.578617   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:53.578623   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:53.578685   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:53.615240   69333 cri.go:89] found id: ""
	I0927 01:43:53.615266   69333 logs.go:276] 0 containers: []
	W0927 01:43:53.615275   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:53.615282   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:53.615356   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:53.650987   69333 cri.go:89] found id: ""
	I0927 01:43:53.651011   69333 logs.go:276] 0 containers: []
	W0927 01:43:53.651019   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:53.651028   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:53.651038   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:53.664817   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:53.664841   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:53.737875   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:53.737894   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:53.737909   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:53.827293   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:53.827345   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:53.867157   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:53.867188   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:56.423435   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:56.437837   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:56.437912   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:56.480328   69333 cri.go:89] found id: ""
	I0927 01:43:56.480349   69333 logs.go:276] 0 containers: []
	W0927 01:43:56.480357   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:56.480364   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:56.480427   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:56.520627   69333 cri.go:89] found id: ""
	I0927 01:43:56.520651   69333 logs.go:276] 0 containers: []
	W0927 01:43:56.520660   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:56.520667   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:56.520726   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:56.561527   69333 cri.go:89] found id: ""
	I0927 01:43:56.561555   69333 logs.go:276] 0 containers: []
	W0927 01:43:56.561567   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:56.561574   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:56.561634   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:56.598751   69333 cri.go:89] found id: ""
	I0927 01:43:56.598783   69333 logs.go:276] 0 containers: []
	W0927 01:43:56.598794   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:56.598801   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:56.598861   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:56.634378   69333 cri.go:89] found id: ""
	I0927 01:43:56.634410   69333 logs.go:276] 0 containers: []
	W0927 01:43:56.634422   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:56.634429   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:56.634489   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:56.669819   69333 cri.go:89] found id: ""
	I0927 01:43:56.669852   69333 logs.go:276] 0 containers: []
	W0927 01:43:56.669863   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:56.669877   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:56.669929   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:56.703715   69333 cri.go:89] found id: ""
	I0927 01:43:56.703740   69333 logs.go:276] 0 containers: []
	W0927 01:43:56.703750   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:56.703757   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:56.703820   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:56.737208   69333 cri.go:89] found id: ""
	I0927 01:43:56.737234   69333 logs.go:276] 0 containers: []
	W0927 01:43:56.737245   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:56.737255   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:56.737269   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:56.749933   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:56.749960   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:56.822331   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:56.822353   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:56.822369   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:56.904415   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:56.904454   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:56.947108   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:56.947136   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:54.521004   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:56.521281   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:56.807935   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:58.808046   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:56.041831   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:58.042496   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:00.542924   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:59.500580   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:59.523807   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:59.523888   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:59.562931   69333 cri.go:89] found id: ""
	I0927 01:43:59.562955   69333 logs.go:276] 0 containers: []
	W0927 01:43:59.562963   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:59.562968   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:59.563013   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:59.599321   69333 cri.go:89] found id: ""
	I0927 01:43:59.599348   69333 logs.go:276] 0 containers: []
	W0927 01:43:59.599358   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:59.599363   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:59.599418   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:59.634404   69333 cri.go:89] found id: ""
	I0927 01:43:59.634431   69333 logs.go:276] 0 containers: []
	W0927 01:43:59.634441   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:59.634448   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:59.634498   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:59.672022   69333 cri.go:89] found id: ""
	I0927 01:43:59.672052   69333 logs.go:276] 0 containers: []
	W0927 01:43:59.672066   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:59.672074   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:59.672134   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:59.704617   69333 cri.go:89] found id: ""
	I0927 01:43:59.704647   69333 logs.go:276] 0 containers: []
	W0927 01:43:59.704657   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:59.704664   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:59.704712   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:59.740479   69333 cri.go:89] found id: ""
	I0927 01:43:59.740504   69333 logs.go:276] 0 containers: []
	W0927 01:43:59.740512   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:59.740517   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:59.740579   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:59.777123   69333 cri.go:89] found id: ""
	I0927 01:43:59.777155   69333 logs.go:276] 0 containers: []
	W0927 01:43:59.777166   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:59.777174   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:59.777234   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:59.817780   69333 cri.go:89] found id: ""
	I0927 01:43:59.817803   69333 logs.go:276] 0 containers: []
	W0927 01:43:59.817825   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:59.817841   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:59.817856   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:59.831252   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:59.831278   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:59.901912   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:59.901936   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:59.901949   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:59.983001   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:59.983034   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:00.030989   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:00.031020   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:59.020139   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:01.020925   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:01.306853   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:03.308075   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:03.042494   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:05.043814   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:02.583949   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:02.596723   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:02.596798   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:02.630927   69333 cri.go:89] found id: ""
	I0927 01:44:02.630953   69333 logs.go:276] 0 containers: []
	W0927 01:44:02.630962   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:02.630967   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:02.631012   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:02.664156   69333 cri.go:89] found id: ""
	I0927 01:44:02.664186   69333 logs.go:276] 0 containers: []
	W0927 01:44:02.664198   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:02.664205   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:02.664259   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:02.698823   69333 cri.go:89] found id: ""
	I0927 01:44:02.698847   69333 logs.go:276] 0 containers: []
	W0927 01:44:02.698860   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:02.698865   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:02.698913   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:02.736114   69333 cri.go:89] found id: ""
	I0927 01:44:02.736142   69333 logs.go:276] 0 containers: []
	W0927 01:44:02.736154   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:02.736161   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:02.736221   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:02.769739   69333 cri.go:89] found id: ""
	I0927 01:44:02.769763   69333 logs.go:276] 0 containers: []
	W0927 01:44:02.769771   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:02.769785   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:02.769844   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:02.804798   69333 cri.go:89] found id: ""
	I0927 01:44:02.804871   69333 logs.go:276] 0 containers: []
	W0927 01:44:02.804887   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:02.804898   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:02.804958   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:02.841197   69333 cri.go:89] found id: ""
	I0927 01:44:02.841224   69333 logs.go:276] 0 containers: []
	W0927 01:44:02.841236   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:02.841243   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:02.841301   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:02.881278   69333 cri.go:89] found id: ""
	I0927 01:44:02.881310   69333 logs.go:276] 0 containers: []
	W0927 01:44:02.881321   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:02.881331   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:02.881345   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:02.935149   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:02.935183   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:02.950245   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:02.950273   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:03.020241   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:03.020263   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:03.020277   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:03.104467   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:03.104503   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:05.643070   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:05.656656   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:05.656716   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:05.694022   69333 cri.go:89] found id: ""
	I0927 01:44:05.694045   69333 logs.go:276] 0 containers: []
	W0927 01:44:05.694053   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:05.694059   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:05.694123   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:05.728575   69333 cri.go:89] found id: ""
	I0927 01:44:05.728600   69333 logs.go:276] 0 containers: []
	W0927 01:44:05.728607   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:05.728613   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:05.728667   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:05.768546   69333 cri.go:89] found id: ""
	I0927 01:44:05.768572   69333 logs.go:276] 0 containers: []
	W0927 01:44:05.768583   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:05.768590   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:05.768652   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:05.809504   69333 cri.go:89] found id: ""
	I0927 01:44:05.809527   69333 logs.go:276] 0 containers: []
	W0927 01:44:05.809536   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:05.809543   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:05.809600   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:05.846387   69333 cri.go:89] found id: ""
	I0927 01:44:05.846415   69333 logs.go:276] 0 containers: []
	W0927 01:44:05.846422   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:05.846428   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:05.846479   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:05.879579   69333 cri.go:89] found id: ""
	I0927 01:44:05.879608   69333 logs.go:276] 0 containers: []
	W0927 01:44:05.879619   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:05.879626   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:05.879684   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:05.928932   69333 cri.go:89] found id: ""
	I0927 01:44:05.928961   69333 logs.go:276] 0 containers: []
	W0927 01:44:05.928970   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:05.928978   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:05.929037   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:05.986463   69333 cri.go:89] found id: ""
	I0927 01:44:05.986490   69333 logs.go:276] 0 containers: []
	W0927 01:44:05.986499   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:05.986507   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:05.986521   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:06.039984   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:06.040011   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:06.053025   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:06.053051   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:06.127277   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:06.127316   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:06.127330   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:06.201473   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:06.201504   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:03.520539   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:06.021584   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:05.808474   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:08.307407   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:07.542959   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:10.042223   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:08.739339   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:08.753354   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:08.753418   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:08.788513   69333 cri.go:89] found id: ""
	I0927 01:44:08.788544   69333 logs.go:276] 0 containers: []
	W0927 01:44:08.788556   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:08.788563   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:08.788648   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:08.824615   69333 cri.go:89] found id: ""
	I0927 01:44:08.824642   69333 logs.go:276] 0 containers: []
	W0927 01:44:08.824653   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:08.824661   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:08.824724   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:08.858327   69333 cri.go:89] found id: ""
	I0927 01:44:08.858354   69333 logs.go:276] 0 containers: []
	W0927 01:44:08.858365   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:08.858372   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:08.858430   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:08.896140   69333 cri.go:89] found id: ""
	I0927 01:44:08.896168   69333 logs.go:276] 0 containers: []
	W0927 01:44:08.896175   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:08.896181   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:08.896229   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:08.931525   69333 cri.go:89] found id: ""
	I0927 01:44:08.931547   69333 logs.go:276] 0 containers: []
	W0927 01:44:08.931554   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:08.931560   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:08.931618   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:08.970224   69333 cri.go:89] found id: ""
	I0927 01:44:08.970246   69333 logs.go:276] 0 containers: []
	W0927 01:44:08.970256   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:08.970263   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:08.970331   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:09.007213   69333 cri.go:89] found id: ""
	I0927 01:44:09.007240   69333 logs.go:276] 0 containers: []
	W0927 01:44:09.007248   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:09.007255   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:09.007334   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:09.043078   69333 cri.go:89] found id: ""
	I0927 01:44:09.043111   69333 logs.go:276] 0 containers: []
	W0927 01:44:09.043122   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:09.043132   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:09.043147   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:09.096768   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:09.096801   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:09.110721   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:09.110747   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:09.182966   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:09.182990   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:09.183004   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:09.259497   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:09.259541   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:11.797307   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:11.812141   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:11.812196   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:11.846429   69333 cri.go:89] found id: ""
	I0927 01:44:11.846468   69333 logs.go:276] 0 containers: []
	W0927 01:44:11.846482   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:11.846489   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:11.846598   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:11.885294   69333 cri.go:89] found id: ""
	I0927 01:44:11.885322   69333 logs.go:276] 0 containers: []
	W0927 01:44:11.885333   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:11.885339   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:11.885398   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:11.920856   69333 cri.go:89] found id: ""
	I0927 01:44:11.920884   69333 logs.go:276] 0 containers: []
	W0927 01:44:11.920892   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:11.920898   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:11.920946   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:11.964540   69333 cri.go:89] found id: ""
	I0927 01:44:11.964564   69333 logs.go:276] 0 containers: []
	W0927 01:44:11.964574   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:11.964581   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:11.964634   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:12.000596   69333 cri.go:89] found id: ""
	I0927 01:44:12.000619   69333 logs.go:276] 0 containers: []
	W0927 01:44:12.000629   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:12.000636   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:12.000697   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:12.037773   69333 cri.go:89] found id: ""
	I0927 01:44:12.037808   69333 logs.go:276] 0 containers: []
	W0927 01:44:12.037819   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:12.037831   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:12.037893   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:12.074646   69333 cri.go:89] found id: ""
	I0927 01:44:12.074676   69333 logs.go:276] 0 containers: []
	W0927 01:44:12.074687   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:12.074692   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:12.074740   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:12.111771   69333 cri.go:89] found id: ""
	I0927 01:44:12.111802   69333 logs.go:276] 0 containers: []
	W0927 01:44:12.111813   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:12.111824   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:12.111837   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:12.160938   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:12.160971   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:12.175576   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:12.175605   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:12.245227   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:12.245263   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:12.245278   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:12.325161   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:12.325194   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:08.520111   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:10.520326   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:12.520755   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:10.808039   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:12.808843   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:12.042905   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:14.542272   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:14.867795   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:14.881053   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:14.881130   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:14.915193   69333 cri.go:89] found id: ""
	I0927 01:44:14.915224   69333 logs.go:276] 0 containers: []
	W0927 01:44:14.915234   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:14.915241   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:14.915318   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:14.951758   69333 cri.go:89] found id: ""
	I0927 01:44:14.951789   69333 logs.go:276] 0 containers: []
	W0927 01:44:14.951801   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:14.951808   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:14.951860   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:14.987875   69333 cri.go:89] found id: ""
	I0927 01:44:14.987906   69333 logs.go:276] 0 containers: []
	W0927 01:44:14.987917   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:14.987924   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:14.987985   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:15.025780   69333 cri.go:89] found id: ""
	I0927 01:44:15.025810   69333 logs.go:276] 0 containers: []
	W0927 01:44:15.025820   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:15.025828   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:15.025884   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:15.062135   69333 cri.go:89] found id: ""
	I0927 01:44:15.062157   69333 logs.go:276] 0 containers: []
	W0927 01:44:15.062165   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:15.062172   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:15.062225   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:15.097090   69333 cri.go:89] found id: ""
	I0927 01:44:15.097112   69333 logs.go:276] 0 containers: []
	W0927 01:44:15.097119   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:15.097126   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:15.097170   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:15.130528   69333 cri.go:89] found id: ""
	I0927 01:44:15.130552   69333 logs.go:276] 0 containers: []
	W0927 01:44:15.130561   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:15.130569   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:15.130615   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:15.165422   69333 cri.go:89] found id: ""
	I0927 01:44:15.165450   69333 logs.go:276] 0 containers: []
	W0927 01:44:15.165457   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
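	The cri.go lines above show the log gatherer enumerating control-plane components one at a time with sudo crictl ps -a --quiet --name=<component>; an empty result (found id: "") for every component is what produces the "No container was found matching ..." warnings. A rough local equivalent of that loop, shown only as a sketch (minikube runs these commands on the guest through its ssh_runner, whereas this runs them on the current host and assumes crictl is installed):

	    // cri_list_sketch.go - illustrative only; mirrors the per-component
	    // "crictl ps -a --quiet --name=..." calls visible in the log.
	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    func main() {
	        components := []string{
	            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
	            "kube-proxy", "kube-controller-manager", "kindnet",
	            "kubernetes-dashboard",
	        }
	        for _, name := range components {
	            out, err := exec.Command("sudo", "crictl", "ps", "-a",
	                "--quiet", "--name="+name).Output()
	            if err != nil {
	                fmt.Printf("listing %s failed: %v\n", name, err)
	                continue
	            }
	            ids := strings.TrimSpace(string(out))
	            if ids == "" {
	                // Corresponds to: No container was found matching "<name>"
	                fmt.Printf("no container found matching %q\n", name)
	                continue
	            }
	            fmt.Printf("%s containers: %s\n", name, ids)
	        }
	    }

	The later "container status" step uses a shell fallback for the same reason: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a tries crictl first and only falls back to docker if that command is missing or fails.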
	I0927 01:44:15.165465   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:15.165474   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:15.214612   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:15.214651   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:15.230294   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:15.230318   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:15.303339   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:15.303362   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:15.303375   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:15.382046   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:15.382081   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:14.520833   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:17.021034   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:15.308397   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:17.808221   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:16.542334   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:18.543785   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
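	The interleaved pod_ready.go:103 lines come from three other test profiles (process IDs 69534, 68676 and 69234) that are each polling their metrics-server pod in kube-system and still seeing Ready as "False". An equivalent readiness check expressed with kubectl's jsonpath output, shown only as a sketch (it assumes the current kubeconfig context already points at the cluster in question, and is not minikube's own pod_ready.go logic):

	    // pod_ready_sketch.go - illustrative check of a pod's Ready condition.
	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    // isPodReady reports whether the named pod's Ready condition is "True".
	    func isPodReady(namespace, pod string) (bool, error) {
	        out, err := exec.Command("kubectl", "-n", namespace, "get", "pod", pod,
	            "-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	        if err != nil {
	            return false, err
	        }
	        return strings.TrimSpace(string(out)) == "True", nil
	    }

	    func main() {
	        // Pod name taken from the log above; any still-unready pod works here.
	        ready, err := isPodReady("kube-system", "metrics-server-6867b74b74-k8mdf")
	        if err != nil {
	            fmt.Println("check failed:", err)
	            return
	        }
	        fmt.Println("Ready:", ready)
	    }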
	I0927 01:44:17.923331   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:17.937693   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:17.937765   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:17.972677   69333 cri.go:89] found id: ""
	I0927 01:44:17.972699   69333 logs.go:276] 0 containers: []
	W0927 01:44:17.972707   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:17.972714   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:17.972764   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:18.004818   69333 cri.go:89] found id: ""
	I0927 01:44:18.004846   69333 logs.go:276] 0 containers: []
	W0927 01:44:18.004854   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:18.004860   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:18.004907   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:18.044693   69333 cri.go:89] found id: ""
	I0927 01:44:18.044716   69333 logs.go:276] 0 containers: []
	W0927 01:44:18.044723   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:18.044728   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:18.044772   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:18.079205   69333 cri.go:89] found id: ""
	I0927 01:44:18.079235   69333 logs.go:276] 0 containers: []
	W0927 01:44:18.079244   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:18.079249   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:18.079299   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:18.115272   69333 cri.go:89] found id: ""
	I0927 01:44:18.115322   69333 logs.go:276] 0 containers: []
	W0927 01:44:18.115335   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:18.115343   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:18.115412   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:18.150165   69333 cri.go:89] found id: ""
	I0927 01:44:18.150195   69333 logs.go:276] 0 containers: []
	W0927 01:44:18.150206   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:18.150213   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:18.150275   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:18.184971   69333 cri.go:89] found id: ""
	I0927 01:44:18.184999   69333 logs.go:276] 0 containers: []
	W0927 01:44:18.185009   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:18.185016   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:18.185083   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:18.219955   69333 cri.go:89] found id: ""
	I0927 01:44:18.219985   69333 logs.go:276] 0 containers: []
	W0927 01:44:18.219997   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:18.220008   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:18.220020   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:18.269713   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:18.269748   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:18.285224   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:18.285251   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:18.364887   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:18.364912   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:18.364927   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:18.450667   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:18.450706   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:20.991648   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:21.006472   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:21.006529   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:21.043455   69333 cri.go:89] found id: ""
	I0927 01:44:21.043476   69333 logs.go:276] 0 containers: []
	W0927 01:44:21.043486   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:21.043493   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:21.043549   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:21.080365   69333 cri.go:89] found id: ""
	I0927 01:44:21.080391   69333 logs.go:276] 0 containers: []
	W0927 01:44:21.080399   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:21.080405   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:21.080449   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:21.117576   69333 cri.go:89] found id: ""
	I0927 01:44:21.117624   69333 logs.go:276] 0 containers: []
	W0927 01:44:21.117636   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:21.117642   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:21.117703   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:21.154538   69333 cri.go:89] found id: ""
	I0927 01:44:21.154564   69333 logs.go:276] 0 containers: []
	W0927 01:44:21.154576   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:21.154584   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:21.154638   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:21.190046   69333 cri.go:89] found id: ""
	I0927 01:44:21.190070   69333 logs.go:276] 0 containers: []
	W0927 01:44:21.190080   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:21.190086   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:21.190147   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:21.226383   69333 cri.go:89] found id: ""
	I0927 01:44:21.226407   69333 logs.go:276] 0 containers: []
	W0927 01:44:21.226417   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:21.226424   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:21.226485   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:21.262090   69333 cri.go:89] found id: ""
	I0927 01:44:21.262113   69333 logs.go:276] 0 containers: []
	W0927 01:44:21.262124   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:21.262132   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:21.262188   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:21.297675   69333 cri.go:89] found id: ""
	I0927 01:44:21.297697   69333 logs.go:276] 0 containers: []
	W0927 01:44:21.297706   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:21.297716   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:21.297728   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:21.349668   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:21.349705   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:21.364608   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:21.364635   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:21.432570   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:21.432596   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:21.432612   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:21.507616   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:21.507661   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:19.520792   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:21.521341   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:20.307600   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:22.308557   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:24.807578   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:21.041736   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:23.041809   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:25.540974   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:24.054212   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:24.067954   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:24.068014   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:24.107017   69333 cri.go:89] found id: ""
	I0927 01:44:24.107045   69333 logs.go:276] 0 containers: []
	W0927 01:44:24.107056   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:24.107063   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:24.107124   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:24.144373   69333 cri.go:89] found id: ""
	I0927 01:44:24.144398   69333 logs.go:276] 0 containers: []
	W0927 01:44:24.144406   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:24.144411   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:24.144473   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:24.180010   69333 cri.go:89] found id: ""
	I0927 01:44:24.180038   69333 logs.go:276] 0 containers: []
	W0927 01:44:24.180048   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:24.180056   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:24.180118   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:24.214387   69333 cri.go:89] found id: ""
	I0927 01:44:24.214413   69333 logs.go:276] 0 containers: []
	W0927 01:44:24.214421   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:24.214426   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:24.214472   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:24.252597   69333 cri.go:89] found id: ""
	I0927 01:44:24.252623   69333 logs.go:276] 0 containers: []
	W0927 01:44:24.252631   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:24.252643   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:24.252705   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:24.292044   69333 cri.go:89] found id: ""
	I0927 01:44:24.292072   69333 logs.go:276] 0 containers: []
	W0927 01:44:24.292082   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:24.292089   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:24.292158   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:24.329899   69333 cri.go:89] found id: ""
	I0927 01:44:24.329924   69333 logs.go:276] 0 containers: []
	W0927 01:44:24.329934   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:24.329940   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:24.329998   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:24.367964   69333 cri.go:89] found id: ""
	I0927 01:44:24.367989   69333 logs.go:276] 0 containers: []
	W0927 01:44:24.368000   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:24.368010   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:24.368025   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:24.384151   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:24.384184   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:24.456916   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:24.456940   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:24.456958   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:24.539362   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:24.539399   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:24.578384   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:24.578411   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:27.132700   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:27.146218   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:27.146294   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:27.180958   69333 cri.go:89] found id: ""
	I0927 01:44:27.180984   69333 logs.go:276] 0 containers: []
	W0927 01:44:27.180992   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:27.180997   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:27.181043   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:27.215213   69333 cri.go:89] found id: ""
	I0927 01:44:27.215236   69333 logs.go:276] 0 containers: []
	W0927 01:44:27.215243   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:27.215249   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:27.215293   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:27.258192   69333 cri.go:89] found id: ""
	I0927 01:44:27.258216   69333 logs.go:276] 0 containers: []
	W0927 01:44:27.258226   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:27.258233   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:27.258289   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:27.292717   69333 cri.go:89] found id: ""
	I0927 01:44:27.292742   69333 logs.go:276] 0 containers: []
	W0927 01:44:27.292753   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:27.292760   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:27.292818   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:27.328038   69333 cri.go:89] found id: ""
	I0927 01:44:27.328066   69333 logs.go:276] 0 containers: []
	W0927 01:44:27.328076   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:27.328083   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:27.328152   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:24.021885   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:26.520726   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:27.308923   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:29.807825   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:27.542683   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:30.042293   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:27.363513   69333 cri.go:89] found id: ""
	I0927 01:44:27.363539   69333 logs.go:276] 0 containers: []
	W0927 01:44:27.363548   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:27.363553   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:27.363610   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:27.402201   69333 cri.go:89] found id: ""
	I0927 01:44:27.402223   69333 logs.go:276] 0 containers: []
	W0927 01:44:27.402231   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:27.402237   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:27.402290   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:27.436952   69333 cri.go:89] found id: ""
	I0927 01:44:27.436979   69333 logs.go:276] 0 containers: []
	W0927 01:44:27.436987   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:27.436995   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:27.437009   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:27.487908   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:27.487938   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:27.502170   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:27.502199   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:27.583909   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:27.583931   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:27.583943   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:27.660248   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:27.660286   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:30.201211   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:30.214276   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:30.214350   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:30.252445   69333 cri.go:89] found id: ""
	I0927 01:44:30.252474   69333 logs.go:276] 0 containers: []
	W0927 01:44:30.252484   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:30.252490   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:30.252538   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:30.287574   69333 cri.go:89] found id: ""
	I0927 01:44:30.287603   69333 logs.go:276] 0 containers: []
	W0927 01:44:30.287614   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:30.287621   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:30.287693   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:30.324674   69333 cri.go:89] found id: ""
	I0927 01:44:30.324699   69333 logs.go:276] 0 containers: []
	W0927 01:44:30.324711   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:30.324718   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:30.324779   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:30.360493   69333 cri.go:89] found id: ""
	I0927 01:44:30.360521   69333 logs.go:276] 0 containers: []
	W0927 01:44:30.360531   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:30.360539   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:30.360640   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:30.396219   69333 cri.go:89] found id: ""
	I0927 01:44:30.396252   69333 logs.go:276] 0 containers: []
	W0927 01:44:30.396263   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:30.396270   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:30.396328   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:30.431524   69333 cri.go:89] found id: ""
	I0927 01:44:30.431546   69333 logs.go:276] 0 containers: []
	W0927 01:44:30.431558   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:30.431564   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:30.431607   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:30.465887   69333 cri.go:89] found id: ""
	I0927 01:44:30.465915   69333 logs.go:276] 0 containers: []
	W0927 01:44:30.465926   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:30.465933   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:30.466000   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:30.501364   69333 cri.go:89] found id: ""
	I0927 01:44:30.501391   69333 logs.go:276] 0 containers: []
	W0927 01:44:30.501402   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:30.501411   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:30.501425   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:30.556344   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:30.556377   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:30.572619   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:30.572649   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:30.645996   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:30.646020   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:30.646032   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:30.737458   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:30.737531   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:28.521312   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:30.521421   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:33.020699   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:31.807949   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:33.809414   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:32.045244   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:34.542035   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:33.284306   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:33.298164   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:33.298224   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:33.334599   69333 cri.go:89] found id: ""
	I0927 01:44:33.334625   69333 logs.go:276] 0 containers: []
	W0927 01:44:33.334634   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:33.334654   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:33.334718   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:33.369006   69333 cri.go:89] found id: ""
	I0927 01:44:33.369034   69333 logs.go:276] 0 containers: []
	W0927 01:44:33.369044   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:33.369051   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:33.369119   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:33.407875   69333 cri.go:89] found id: ""
	I0927 01:44:33.407904   69333 logs.go:276] 0 containers: []
	W0927 01:44:33.407912   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:33.407918   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:33.407974   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:33.441048   69333 cri.go:89] found id: ""
	I0927 01:44:33.441083   69333 logs.go:276] 0 containers: []
	W0927 01:44:33.441094   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:33.441101   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:33.441156   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:33.478458   69333 cri.go:89] found id: ""
	I0927 01:44:33.478503   69333 logs.go:276] 0 containers: []
	W0927 01:44:33.478515   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:33.478522   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:33.478586   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:33.513756   69333 cri.go:89] found id: ""
	I0927 01:44:33.513784   69333 logs.go:276] 0 containers: []
	W0927 01:44:33.513795   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:33.513802   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:33.513862   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:33.554351   69333 cri.go:89] found id: ""
	I0927 01:44:33.554392   69333 logs.go:276] 0 containers: []
	W0927 01:44:33.554403   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:33.554410   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:33.554472   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:33.588484   69333 cri.go:89] found id: ""
	I0927 01:44:33.588512   69333 logs.go:276] 0 containers: []
	W0927 01:44:33.588533   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:33.588544   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:33.588559   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:33.665735   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:33.665775   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:33.704654   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:33.704687   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:33.755444   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:33.755475   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:33.770069   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:33.770095   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:33.841531   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
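	Each of these gather-and-check cycles for process 69333 repeats every two to three seconds (compare the timestamps of consecutive pgrep lines) and keeps repeating until the apiserver answers or the surrounding test gives up. A generic poll-until-deadline loop of that shape, offered as a sketch only (the check function, interval and timeout here are arbitrary illustrations, not minikube's real retry parameters):

	    // retry_sketch.go - generic retry loop mirroring the cadence in the log;
	    // the check function, interval and timeout are all illustrative.
	    package main

	    import (
	        "fmt"
	        "net"
	        "time"
	    )

	    // pollUntil calls check every interval until it succeeds or timeout elapses.
	    func pollUntil(check func() bool, interval, timeout time.Duration) bool {
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            if check() {
	                return true
	            }
	            time.Sleep(interval)
	        }
	        return false
	    }

	    func main() {
	        apiserverUp := func() bool {
	            conn, err := net.DialTimeout("tcp", "localhost:8443", time.Second)
	            if err != nil {
	                return false
	            }
	            conn.Close()
	            return true
	        }
	        fmt.Println("apiserver reachable:", pollUntil(apiserverUp, 3*time.Second, 2*time.Minute))
	    }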
	I0927 01:44:36.341963   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:36.355219   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:36.355294   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:36.395149   69333 cri.go:89] found id: ""
	I0927 01:44:36.395185   69333 logs.go:276] 0 containers: []
	W0927 01:44:36.395196   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:36.395203   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:36.395262   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:36.434620   69333 cri.go:89] found id: ""
	I0927 01:44:36.434649   69333 logs.go:276] 0 containers: []
	W0927 01:44:36.434661   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:36.434667   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:36.434729   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:36.468328   69333 cri.go:89] found id: ""
	I0927 01:44:36.468349   69333 logs.go:276] 0 containers: []
	W0927 01:44:36.468357   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:36.468362   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:36.468427   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:36.506386   69333 cri.go:89] found id: ""
	I0927 01:44:36.506413   69333 logs.go:276] 0 containers: []
	W0927 01:44:36.506421   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:36.506427   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:36.506482   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:36.546583   69333 cri.go:89] found id: ""
	I0927 01:44:36.546607   69333 logs.go:276] 0 containers: []
	W0927 01:44:36.546614   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:36.546620   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:36.546665   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:36.581694   69333 cri.go:89] found id: ""
	I0927 01:44:36.581721   69333 logs.go:276] 0 containers: []
	W0927 01:44:36.581730   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:36.581737   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:36.581782   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:36.617775   69333 cri.go:89] found id: ""
	I0927 01:44:36.617799   69333 logs.go:276] 0 containers: []
	W0927 01:44:36.617807   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:36.617813   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:36.617877   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:36.654443   69333 cri.go:89] found id: ""
	I0927 01:44:36.654470   69333 logs.go:276] 0 containers: []
	W0927 01:44:36.654478   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:36.654486   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:36.654496   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:36.705787   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:36.705817   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:36.720643   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:36.720677   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:36.800037   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:36.800061   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:36.800091   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:36.886845   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:36.886884   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:35.023634   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:37.520794   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:36.307516   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:38.307899   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:37.041620   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:39.044257   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:39.429349   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:39.442899   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:39.442973   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:39.481752   69333 cri.go:89] found id: ""
	I0927 01:44:39.481782   69333 logs.go:276] 0 containers: []
	W0927 01:44:39.481793   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:39.481799   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:39.481858   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:39.516074   69333 cri.go:89] found id: ""
	I0927 01:44:39.516103   69333 logs.go:276] 0 containers: []
	W0927 01:44:39.516114   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:39.516130   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:39.516188   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:39.563351   69333 cri.go:89] found id: ""
	I0927 01:44:39.563375   69333 logs.go:276] 0 containers: []
	W0927 01:44:39.563386   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:39.563392   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:39.563455   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:39.601417   69333 cri.go:89] found id: ""
	I0927 01:44:39.601445   69333 logs.go:276] 0 containers: []
	W0927 01:44:39.601455   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:39.601469   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:39.601529   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:39.634537   69333 cri.go:89] found id: ""
	I0927 01:44:39.634565   69333 logs.go:276] 0 containers: []
	W0927 01:44:39.634576   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:39.634582   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:39.634642   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:39.668910   69333 cri.go:89] found id: ""
	I0927 01:44:39.668937   69333 logs.go:276] 0 containers: []
	W0927 01:44:39.668948   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:39.668955   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:39.669013   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:39.701992   69333 cri.go:89] found id: ""
	I0927 01:44:39.702014   69333 logs.go:276] 0 containers: []
	W0927 01:44:39.702021   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:39.702027   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:39.702074   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:39.741579   69333 cri.go:89] found id: ""
	I0927 01:44:39.741601   69333 logs.go:276] 0 containers: []
	W0927 01:44:39.741610   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:39.741618   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:39.741627   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:39.806476   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:39.806510   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:39.820228   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:39.820255   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:39.893137   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:39.893167   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:39.893181   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:39.974477   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:39.974514   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:40.021226   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:42.521217   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:40.309154   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:42.808724   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:41.542308   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:44.042015   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:42.517449   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:42.532200   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:42.532266   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:42.568872   69333 cri.go:89] found id: ""
	I0927 01:44:42.568901   69333 logs.go:276] 0 containers: []
	W0927 01:44:42.568911   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:42.568919   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:42.568980   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:42.605069   69333 cri.go:89] found id: ""
	I0927 01:44:42.605220   69333 logs.go:276] 0 containers: []
	W0927 01:44:42.605251   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:42.605261   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:42.605335   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:42.641637   69333 cri.go:89] found id: ""
	I0927 01:44:42.641665   69333 logs.go:276] 0 containers: []
	W0927 01:44:42.641673   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:42.641680   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:42.641742   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:42.677333   69333 cri.go:89] found id: ""
	I0927 01:44:42.677361   69333 logs.go:276] 0 containers: []
	W0927 01:44:42.677376   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:42.677382   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:42.677439   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:42.712456   69333 cri.go:89] found id: ""
	I0927 01:44:42.712484   69333 logs.go:276] 0 containers: []
	W0927 01:44:42.712495   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:42.712501   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:42.712565   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:42.745109   69333 cri.go:89] found id: ""
	I0927 01:44:42.745140   69333 logs.go:276] 0 containers: []
	W0927 01:44:42.745150   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:42.745157   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:42.745226   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:42.779427   69333 cri.go:89] found id: ""
	I0927 01:44:42.779449   69333 logs.go:276] 0 containers: []
	W0927 01:44:42.779457   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:42.779462   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:42.779508   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:42.823920   69333 cri.go:89] found id: ""
	I0927 01:44:42.823946   69333 logs.go:276] 0 containers: []
	W0927 01:44:42.823954   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:42.823963   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:42.823972   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:42.881345   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:42.881380   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:42.896076   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:42.896100   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:42.971775   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:42.971796   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:42.971809   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:43.054461   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:43.054494   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:45.596681   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:45.610817   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:45.610882   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:45.647628   69333 cri.go:89] found id: ""
	I0927 01:44:45.647654   69333 logs.go:276] 0 containers: []
	W0927 01:44:45.647662   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:45.647668   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:45.647715   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:45.685480   69333 cri.go:89] found id: ""
	I0927 01:44:45.685507   69333 logs.go:276] 0 containers: []
	W0927 01:44:45.685514   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:45.685520   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:45.685573   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:45.721601   69333 cri.go:89] found id: ""
	I0927 01:44:45.721624   69333 logs.go:276] 0 containers: []
	W0927 01:44:45.721632   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:45.721637   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:45.721700   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:45.756763   69333 cri.go:89] found id: ""
	I0927 01:44:45.756788   69333 logs.go:276] 0 containers: []
	W0927 01:44:45.756796   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:45.756802   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:45.756858   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:45.792891   69333 cri.go:89] found id: ""
	I0927 01:44:45.792917   69333 logs.go:276] 0 containers: []
	W0927 01:44:45.792927   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:45.792934   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:45.792996   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:45.828716   69333 cri.go:89] found id: ""
	I0927 01:44:45.828739   69333 logs.go:276] 0 containers: []
	W0927 01:44:45.828747   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:45.828753   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:45.828807   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:45.868813   69333 cri.go:89] found id: ""
	I0927 01:44:45.868840   69333 logs.go:276] 0 containers: []
	W0927 01:44:45.868848   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:45.868853   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:45.868905   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:45.907281   69333 cri.go:89] found id: ""
	I0927 01:44:45.907327   69333 logs.go:276] 0 containers: []
	W0927 01:44:45.907341   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:45.907352   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:45.907371   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:45.958539   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:45.958574   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:45.972540   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:45.972567   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:46.046083   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:46.046124   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:46.046141   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:46.124313   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:46.124349   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:45.021100   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:47.021435   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:45.307916   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:47.807187   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:49.809212   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:46.042143   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:48.541984   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:50.542678   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:48.673701   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:48.687673   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:48.687744   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:48.722269   69333 cri.go:89] found id: ""
	I0927 01:44:48.722291   69333 logs.go:276] 0 containers: []
	W0927 01:44:48.722302   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:48.722308   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:48.722370   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:48.758297   69333 cri.go:89] found id: ""
	I0927 01:44:48.758318   69333 logs.go:276] 0 containers: []
	W0927 01:44:48.758326   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:48.758331   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:48.758377   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:48.792706   69333 cri.go:89] found id: ""
	I0927 01:44:48.792730   69333 logs.go:276] 0 containers: []
	W0927 01:44:48.792738   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:48.792744   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:48.792792   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:48.827015   69333 cri.go:89] found id: ""
	I0927 01:44:48.827035   69333 logs.go:276] 0 containers: []
	W0927 01:44:48.827047   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:48.827052   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:48.827095   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:48.862538   69333 cri.go:89] found id: ""
	I0927 01:44:48.862564   69333 logs.go:276] 0 containers: []
	W0927 01:44:48.862572   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:48.862577   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:48.862632   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:48.896118   69333 cri.go:89] found id: ""
	I0927 01:44:48.896144   69333 logs.go:276] 0 containers: []
	W0927 01:44:48.896154   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:48.896166   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:48.896225   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:48.932483   69333 cri.go:89] found id: ""
	I0927 01:44:48.932511   69333 logs.go:276] 0 containers: []
	W0927 01:44:48.932519   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:48.932524   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:48.932576   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:48.971864   69333 cri.go:89] found id: ""
	I0927 01:44:48.971890   69333 logs.go:276] 0 containers: []
	W0927 01:44:48.971898   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:48.971906   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:48.971919   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:49.028163   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:49.028199   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:49.042780   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:49.042805   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:49.116454   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:49.116476   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:49.116491   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:49.196048   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:49.196084   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:51.735108   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:51.749191   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:51.749258   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:51.784776   69333 cri.go:89] found id: ""
	I0927 01:44:51.784804   69333 logs.go:276] 0 containers: []
	W0927 01:44:51.784815   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:51.784823   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:51.784880   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:51.822807   69333 cri.go:89] found id: ""
	I0927 01:44:51.822836   69333 logs.go:276] 0 containers: []
	W0927 01:44:51.822847   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:51.822854   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:51.822912   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:51.858700   69333 cri.go:89] found id: ""
	I0927 01:44:51.858726   69333 logs.go:276] 0 containers: []
	W0927 01:44:51.858737   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:51.858744   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:51.858812   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:51.894945   69333 cri.go:89] found id: ""
	I0927 01:44:51.894968   69333 logs.go:276] 0 containers: []
	W0927 01:44:51.894975   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:51.894980   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:51.895025   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:51.939475   69333 cri.go:89] found id: ""
	I0927 01:44:51.939503   69333 logs.go:276] 0 containers: []
	W0927 01:44:51.939518   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:51.939524   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:51.939569   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:51.982626   69333 cri.go:89] found id: ""
	I0927 01:44:51.982654   69333 logs.go:276] 0 containers: []
	W0927 01:44:51.982665   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:51.982673   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:51.982731   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:52.050446   69333 cri.go:89] found id: ""
	I0927 01:44:52.050473   69333 logs.go:276] 0 containers: []
	W0927 01:44:52.050483   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:52.050490   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:52.050549   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:52.092637   69333 cri.go:89] found id: ""
	I0927 01:44:52.092666   69333 logs.go:276] 0 containers: []
	W0927 01:44:52.092676   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:52.092686   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:52.092700   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:52.132135   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:52.132165   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:52.186537   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:52.186572   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:52.200001   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:52.200027   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:52.282068   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:52.282093   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:52.282108   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:49.521281   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:52.021229   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:52.308560   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:54.309001   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:53.042624   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:55.043212   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:54.866565   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:54.880400   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:54.880460   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:54.918963   69333 cri.go:89] found id: ""
	I0927 01:44:54.919004   69333 logs.go:276] 0 containers: []
	W0927 01:44:54.919027   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:54.919036   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:54.919107   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:54.959918   69333 cri.go:89] found id: ""
	I0927 01:44:54.959947   69333 logs.go:276] 0 containers: []
	W0927 01:44:54.959958   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:54.959965   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:54.960026   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:55.004348   69333 cri.go:89] found id: ""
	I0927 01:44:55.004370   69333 logs.go:276] 0 containers: []
	W0927 01:44:55.004378   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:55.004392   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:55.004446   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:55.045190   69333 cri.go:89] found id: ""
	I0927 01:44:55.045213   69333 logs.go:276] 0 containers: []
	W0927 01:44:55.045220   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:55.045225   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:55.045278   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:55.087638   69333 cri.go:89] found id: ""
	I0927 01:44:55.087663   69333 logs.go:276] 0 containers: []
	W0927 01:44:55.087671   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:55.087677   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:55.087739   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:55.126899   69333 cri.go:89] found id: ""
	I0927 01:44:55.126932   69333 logs.go:276] 0 containers: []
	W0927 01:44:55.126943   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:55.126951   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:55.127012   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:55.167593   69333 cri.go:89] found id: ""
	I0927 01:44:55.167624   69333 logs.go:276] 0 containers: []
	W0927 01:44:55.167635   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:55.167643   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:55.167706   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:55.208362   69333 cri.go:89] found id: ""
	I0927 01:44:55.208388   69333 logs.go:276] 0 containers: []
	W0927 01:44:55.208399   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:55.208409   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:55.208424   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:55.247198   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:55.247221   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:55.299408   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:55.299443   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:55.315745   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:55.315775   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:55.387499   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:55.387523   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:55.387539   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:54.021502   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:56.520627   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:56.807487   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:58.807902   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:57.541517   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:59.542233   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:57.968863   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:57.987921   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:57.987988   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:58.036770   69333 cri.go:89] found id: ""
	I0927 01:44:58.036802   69333 logs.go:276] 0 containers: []
	W0927 01:44:58.036813   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:58.036824   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:58.036878   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:58.072461   69333 cri.go:89] found id: ""
	I0927 01:44:58.072484   69333 logs.go:276] 0 containers: []
	W0927 01:44:58.072492   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:58.072499   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:58.072551   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:58.107247   69333 cri.go:89] found id: ""
	I0927 01:44:58.107273   69333 logs.go:276] 0 containers: []
	W0927 01:44:58.107284   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:58.107290   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:58.107365   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:58.149050   69333 cri.go:89] found id: ""
	I0927 01:44:58.149080   69333 logs.go:276] 0 containers: []
	W0927 01:44:58.149091   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:58.149099   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:58.149162   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:58.188167   69333 cri.go:89] found id: ""
	I0927 01:44:58.188198   69333 logs.go:276] 0 containers: []
	W0927 01:44:58.188209   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:58.188217   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:58.188283   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:58.224291   69333 cri.go:89] found id: ""
	I0927 01:44:58.224319   69333 logs.go:276] 0 containers: []
	W0927 01:44:58.224329   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:58.224337   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:58.224401   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:58.258786   69333 cri.go:89] found id: ""
	I0927 01:44:58.258813   69333 logs.go:276] 0 containers: []
	W0927 01:44:58.258822   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:58.258828   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:58.258885   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:58.298310   69333 cri.go:89] found id: ""
	I0927 01:44:58.298338   69333 logs.go:276] 0 containers: []
	W0927 01:44:58.298349   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:58.298359   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:58.298373   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:58.340299   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:58.340330   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:58.395097   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:58.395130   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:58.410653   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:58.410677   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:58.479437   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:58.479459   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:58.479470   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:45:01.057473   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:45:01.071746   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:45:01.071818   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:45:01.112652   69333 cri.go:89] found id: ""
	I0927 01:45:01.112676   69333 logs.go:276] 0 containers: []
	W0927 01:45:01.112684   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:45:01.112690   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:45:01.112735   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:45:01.146071   69333 cri.go:89] found id: ""
	I0927 01:45:01.146100   69333 logs.go:276] 0 containers: []
	W0927 01:45:01.146111   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:45:01.146119   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:45:01.146188   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:45:01.188640   69333 cri.go:89] found id: ""
	I0927 01:45:01.188663   69333 logs.go:276] 0 containers: []
	W0927 01:45:01.188673   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:45:01.188679   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:45:01.188743   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:45:01.225024   69333 cri.go:89] found id: ""
	I0927 01:45:01.225050   69333 logs.go:276] 0 containers: []
	W0927 01:45:01.225060   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:45:01.225067   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:45:01.225128   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:45:01.262459   69333 cri.go:89] found id: ""
	I0927 01:45:01.262487   69333 logs.go:276] 0 containers: []
	W0927 01:45:01.262498   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:45:01.262505   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:45:01.262560   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:45:01.298567   69333 cri.go:89] found id: ""
	I0927 01:45:01.298588   69333 logs.go:276] 0 containers: []
	W0927 01:45:01.298597   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:45:01.298603   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:45:01.298647   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:45:01.335051   69333 cri.go:89] found id: ""
	I0927 01:45:01.335084   69333 logs.go:276] 0 containers: []
	W0927 01:45:01.335094   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:45:01.335100   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:45:01.335149   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:45:01.371187   69333 cri.go:89] found id: ""
	I0927 01:45:01.371217   69333 logs.go:276] 0 containers: []
	W0927 01:45:01.371227   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:45:01.371237   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:45:01.371252   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:45:01.385163   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:45:01.385189   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:45:01.457256   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:45:01.457298   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:45:01.457313   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:45:01.537788   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:45:01.537819   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:45:01.580645   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:45:01.580672   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:58.521367   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:01.020826   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:03.021213   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:00.808021   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:03.307242   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:01.542831   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:04.042010   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:04.131877   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:45:04.145175   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:45:04.145248   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:45:04.179508   69333 cri.go:89] found id: ""
	I0927 01:45:04.179535   69333 logs.go:276] 0 containers: []
	W0927 01:45:04.179545   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:45:04.179552   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:45:04.179612   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:45:04.213497   69333 cri.go:89] found id: ""
	I0927 01:45:04.213533   69333 logs.go:276] 0 containers: []
	W0927 01:45:04.213544   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:45:04.213551   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:45:04.213606   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:45:04.249708   69333 cri.go:89] found id: ""
	I0927 01:45:04.249737   69333 logs.go:276] 0 containers: []
	W0927 01:45:04.249747   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:45:04.249754   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:45:04.249824   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:45:04.288283   69333 cri.go:89] found id: ""
	I0927 01:45:04.288306   69333 logs.go:276] 0 containers: []
	W0927 01:45:04.288314   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:45:04.288319   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:45:04.288368   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:45:04.325515   69333 cri.go:89] found id: ""
	I0927 01:45:04.325539   69333 logs.go:276] 0 containers: []
	W0927 01:45:04.325549   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:45:04.325560   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:45:04.325618   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:45:04.363485   69333 cri.go:89] found id: ""
	I0927 01:45:04.363511   69333 logs.go:276] 0 containers: []
	W0927 01:45:04.363521   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:45:04.363528   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:45:04.363586   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:45:04.398834   69333 cri.go:89] found id: ""
	I0927 01:45:04.398863   69333 logs.go:276] 0 containers: []
	W0927 01:45:04.398875   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:45:04.398882   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:45:04.398948   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:45:04.433408   69333 cri.go:89] found id: ""
	I0927 01:45:04.433435   69333 logs.go:276] 0 containers: []
	W0927 01:45:04.433443   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:45:04.433451   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:45:04.433461   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:45:04.485354   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:45:04.485392   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:45:04.499007   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:45:04.499031   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:45:04.569376   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:45:04.569405   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:45:04.569420   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:45:04.646614   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:45:04.646651   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:45:07.186491   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:45:07.200510   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:45:07.200575   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:45:07.239519   69333 cri.go:89] found id: ""
	I0927 01:45:07.239542   69333 logs.go:276] 0 containers: []
	W0927 01:45:07.239553   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:45:07.239562   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:45:07.239751   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:45:07.276820   69333 cri.go:89] found id: ""
	I0927 01:45:07.276854   69333 logs.go:276] 0 containers: []
	W0927 01:45:07.276863   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:45:07.276870   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:45:07.276932   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:45:07.312580   69333 cri.go:89] found id: ""
	I0927 01:45:07.312604   69333 logs.go:276] 0 containers: []
	W0927 01:45:07.312613   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:45:07.312619   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:45:07.312676   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:45:05.520930   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:08.020001   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:05.807739   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:07.807914   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:06.042390   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:08.542149   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:10.542438   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:07.350763   69333 cri.go:89] found id: ""
	I0927 01:45:07.350788   69333 logs.go:276] 0 containers: []
	W0927 01:45:07.350799   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:45:07.350806   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:45:07.350861   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:45:07.385347   69333 cri.go:89] found id: ""
	I0927 01:45:07.385376   69333 logs.go:276] 0 containers: []
	W0927 01:45:07.385383   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:45:07.385389   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:45:07.385439   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:45:07.420665   69333 cri.go:89] found id: ""
	I0927 01:45:07.420696   69333 logs.go:276] 0 containers: []
	W0927 01:45:07.420708   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:45:07.420718   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:45:07.420768   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:45:07.453707   69333 cri.go:89] found id: ""
	I0927 01:45:07.453737   69333 logs.go:276] 0 containers: []
	W0927 01:45:07.453746   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:45:07.453752   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:45:07.453806   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:45:07.489467   69333 cri.go:89] found id: ""
	I0927 01:45:07.489497   69333 logs.go:276] 0 containers: []
	W0927 01:45:07.489508   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:45:07.489520   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:45:07.489531   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:45:07.569464   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:45:07.569496   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:45:07.609123   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:45:07.609160   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:45:07.659556   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:45:07.659590   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:45:07.673163   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:45:07.673191   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:45:07.751340   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:45:10.252511   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:45:10.266651   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:45:10.266706   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:45:10.304131   69333 cri.go:89] found id: ""
	I0927 01:45:10.304160   69333 logs.go:276] 0 containers: []
	W0927 01:45:10.304171   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:45:10.304178   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:45:10.304243   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:45:10.339267   69333 cri.go:89] found id: ""
	I0927 01:45:10.339295   69333 logs.go:276] 0 containers: []
	W0927 01:45:10.339321   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:45:10.339329   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:45:10.339397   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:45:10.376268   69333 cri.go:89] found id: ""
	I0927 01:45:10.376298   69333 logs.go:276] 0 containers: []
	W0927 01:45:10.376308   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:45:10.376319   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:45:10.376380   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:45:10.413944   69333 cri.go:89] found id: ""
	I0927 01:45:10.413970   69333 logs.go:276] 0 containers: []
	W0927 01:45:10.413978   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:45:10.413984   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:45:10.414033   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:45:10.449205   69333 cri.go:89] found id: ""
	I0927 01:45:10.449226   69333 logs.go:276] 0 containers: []
	W0927 01:45:10.449234   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:45:10.449240   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:45:10.449289   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:45:10.487927   69333 cri.go:89] found id: ""
	I0927 01:45:10.487947   69333 logs.go:276] 0 containers: []
	W0927 01:45:10.487955   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:45:10.487961   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:45:10.488018   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:45:10.525062   69333 cri.go:89] found id: ""
	I0927 01:45:10.525085   69333 logs.go:276] 0 containers: []
	W0927 01:45:10.525095   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:45:10.525102   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:45:10.525163   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:45:10.560718   69333 cri.go:89] found id: ""
	I0927 01:45:10.560768   69333 logs.go:276] 0 containers: []
	W0927 01:45:10.560779   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:45:10.560790   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:45:10.560803   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:45:10.641755   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:45:10.641781   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:45:10.641796   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:45:10.719775   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:45:10.719807   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:45:10.761952   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:45:10.761978   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:45:10.815296   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:45:10.815330   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:45:10.023849   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:12.520577   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:10.307967   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:12.807872   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:14.808602   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:13.041469   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:15.036533   69234 pod_ready.go:82] duration metric: took 4m0.000873058s for pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace to be "Ready" ...
	E0927 01:45:15.036568   69234 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace to be "Ready" (will not retry!)
	I0927 01:45:15.036588   69234 pod_ready.go:39] duration metric: took 4m6.530278971s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:45:15.036645   69234 kubeadm.go:597] duration metric: took 4m16.375010355s to restartPrimaryControlPlane
	W0927 01:45:15.036713   69234 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0927 01:45:15.036743   69234 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0927 01:45:13.330300   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:45:13.343840   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:45:13.343893   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:45:13.378904   69333 cri.go:89] found id: ""
	I0927 01:45:13.378933   69333 logs.go:276] 0 containers: []
	W0927 01:45:13.378944   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:45:13.378952   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:45:13.379010   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:45:13.417375   69333 cri.go:89] found id: ""
	I0927 01:45:13.417403   69333 logs.go:276] 0 containers: []
	W0927 01:45:13.417415   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:45:13.417422   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:45:13.417482   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:45:13.456265   69333 cri.go:89] found id: ""
	I0927 01:45:13.456291   69333 logs.go:276] 0 containers: []
	W0927 01:45:13.456302   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:45:13.456310   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:45:13.456358   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:45:13.502205   69333 cri.go:89] found id: ""
	I0927 01:45:13.502229   69333 logs.go:276] 0 containers: []
	W0927 01:45:13.502240   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:45:13.502247   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:45:13.502310   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:45:13.543617   69333 cri.go:89] found id: ""
	I0927 01:45:13.543642   69333 logs.go:276] 0 containers: []
	W0927 01:45:13.543652   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:45:13.543660   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:45:13.543723   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:45:13.580268   69333 cri.go:89] found id: ""
	I0927 01:45:13.580295   69333 logs.go:276] 0 containers: []
	W0927 01:45:13.580305   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:45:13.580313   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:45:13.580374   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:45:13.616681   69333 cri.go:89] found id: ""
	I0927 01:45:13.616705   69333 logs.go:276] 0 containers: []
	W0927 01:45:13.616713   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:45:13.616718   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:45:13.616765   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:45:13.653389   69333 cri.go:89] found id: ""
	I0927 01:45:13.653412   69333 logs.go:276] 0 containers: []
	W0927 01:45:13.653420   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:45:13.653430   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:45:13.653442   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:45:13.666511   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:45:13.666534   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:45:13.742282   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:45:13.742300   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:45:13.742311   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:45:13.825800   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:45:13.825836   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:45:13.876345   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:45:13.876376   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
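
The block above is one iteration of the probe the harness repeats while this control plane is down: pgrep for a kube-apiserver process, list each expected component container with crictl, then fall back to collecting kubelet, dmesg, CRI-O and container-status logs. Below is a minimal shell sketch of that iteration, reconstructed from the commands in the log (not minikube's actual Go implementation); the component list is copied from the lines above.

    #!/usr/bin/env bash
    # Illustrative sketch of the probe loop visible in the log above:
    # check for a running apiserver, list each expected control-plane
    # container, and fall back to log collection when nothing is found.
    set -u

    components=(kube-apiserver etcd coredns kube-scheduler kube-proxy
                kube-controller-manager kindnet kubernetes-dashboard)

    # Is an apiserver process running at all?
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null && echo "apiserver process found"

    for c in "${components[@]}"; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      if [ -z "$ids" ]; then
        echo "No container was found matching \"$c\"" >&2
      fi
    done

    # When nothing is running, gather the same logs the harness collects.
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo journalctl -u crio -n 400
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
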
	I0927 01:45:16.429245   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:45:16.443286   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:45:16.443366   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:45:16.481601   69333 cri.go:89] found id: ""
	I0927 01:45:16.481626   69333 logs.go:276] 0 containers: []
	W0927 01:45:16.481637   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:45:16.481645   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:45:16.481703   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:45:16.513626   69333 cri.go:89] found id: ""
	I0927 01:45:16.513652   69333 logs.go:276] 0 containers: []
	W0927 01:45:16.513659   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:45:16.513665   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:45:16.513710   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:45:16.552531   69333 cri.go:89] found id: ""
	I0927 01:45:16.552565   69333 logs.go:276] 0 containers: []
	W0927 01:45:16.552574   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:45:16.552580   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:45:16.552636   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:45:16.587252   69333 cri.go:89] found id: ""
	I0927 01:45:16.587282   69333 logs.go:276] 0 containers: []
	W0927 01:45:16.587294   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:45:16.587316   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:45:16.587377   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:45:16.628376   69333 cri.go:89] found id: ""
	I0927 01:45:16.628401   69333 logs.go:276] 0 containers: []
	W0927 01:45:16.628410   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:45:16.628417   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:45:16.628482   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:45:16.669603   69333 cri.go:89] found id: ""
	I0927 01:45:16.669639   69333 logs.go:276] 0 containers: []
	W0927 01:45:16.669651   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:45:16.669658   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:45:16.669731   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:45:16.705581   69333 cri.go:89] found id: ""
	I0927 01:45:16.705607   69333 logs.go:276] 0 containers: []
	W0927 01:45:16.705618   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:45:16.705626   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:45:16.705682   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:45:16.740710   69333 cri.go:89] found id: ""
	I0927 01:45:16.740735   69333 logs.go:276] 0 containers: []
	W0927 01:45:16.740743   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:45:16.740759   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:45:16.740771   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:45:16.791025   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:45:16.791060   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:45:16.805990   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:45:16.806023   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:45:16.878313   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:45:16.878331   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:45:16.878346   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:45:16.966228   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:45:16.966269   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:45:14.521852   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:16.522127   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:17.307853   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:19.308018   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:19.512044   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:45:19.526801   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:45:19.526862   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:45:19.562063   69333 cri.go:89] found id: ""
	I0927 01:45:19.562089   69333 logs.go:276] 0 containers: []
	W0927 01:45:19.562098   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:45:19.562104   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:45:19.562159   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:45:19.598600   69333 cri.go:89] found id: ""
	I0927 01:45:19.598626   69333 logs.go:276] 0 containers: []
	W0927 01:45:19.598634   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:45:19.598642   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:45:19.598712   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:45:19.632544   69333 cri.go:89] found id: ""
	I0927 01:45:19.632564   69333 logs.go:276] 0 containers: []
	W0927 01:45:19.632572   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:45:19.632577   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:45:19.632635   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:45:19.671676   69333 cri.go:89] found id: ""
	I0927 01:45:19.671703   69333 logs.go:276] 0 containers: []
	W0927 01:45:19.671713   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:45:19.671721   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:45:19.671779   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:45:19.710321   69333 cri.go:89] found id: ""
	I0927 01:45:19.710351   69333 logs.go:276] 0 containers: []
	W0927 01:45:19.710362   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:45:19.710370   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:45:19.710438   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:45:19.746252   69333 cri.go:89] found id: ""
	I0927 01:45:19.746277   69333 logs.go:276] 0 containers: []
	W0927 01:45:19.746288   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:45:19.746295   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:45:19.746354   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:45:19.783089   69333 cri.go:89] found id: ""
	I0927 01:45:19.783112   69333 logs.go:276] 0 containers: []
	W0927 01:45:19.783121   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:45:19.783126   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:45:19.783189   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:45:19.821090   69333 cri.go:89] found id: ""
	I0927 01:45:19.821117   69333 logs.go:276] 0 containers: []
	W0927 01:45:19.821126   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:45:19.821134   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:45:19.821145   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:45:19.873539   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:45:19.873575   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:45:19.888446   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:45:19.888471   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:45:19.958009   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:45:19.958034   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:45:19.958050   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:45:20.037552   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:45:20.037587   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:45:19.022216   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:21.520606   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:21.808178   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:23.808273   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:22.579288   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:45:22.592789   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:45:22.592846   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:45:22.628148   69333 cri.go:89] found id: ""
	I0927 01:45:22.628178   69333 logs.go:276] 0 containers: []
	W0927 01:45:22.628186   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:45:22.628193   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:45:22.628240   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:45:22.664162   69333 cri.go:89] found id: ""
	I0927 01:45:22.664186   69333 logs.go:276] 0 containers: []
	W0927 01:45:22.664194   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:45:22.664200   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:45:22.664253   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:45:22.702077   69333 cri.go:89] found id: ""
	I0927 01:45:22.702104   69333 logs.go:276] 0 containers: []
	W0927 01:45:22.702115   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:45:22.702123   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:45:22.702183   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:45:22.739657   69333 cri.go:89] found id: ""
	I0927 01:45:22.739690   69333 logs.go:276] 0 containers: []
	W0927 01:45:22.739700   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:45:22.739708   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:45:22.739773   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:45:22.774109   69333 cri.go:89] found id: ""
	I0927 01:45:22.774137   69333 logs.go:276] 0 containers: []
	W0927 01:45:22.774148   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:45:22.774174   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:45:22.774229   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:45:22.809648   69333 cri.go:89] found id: ""
	I0927 01:45:22.809671   69333 logs.go:276] 0 containers: []
	W0927 01:45:22.809678   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:45:22.809684   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:45:22.809729   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:45:22.842598   69333 cri.go:89] found id: ""
	I0927 01:45:22.842620   69333 logs.go:276] 0 containers: []
	W0927 01:45:22.842627   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:45:22.842632   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:45:22.842677   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:45:22.877336   69333 cri.go:89] found id: ""
	I0927 01:45:22.877364   69333 logs.go:276] 0 containers: []
	W0927 01:45:22.877374   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:45:22.877382   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:45:22.877393   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:45:22.930364   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:45:22.930395   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:45:22.944174   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:45:22.944200   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:45:23.025495   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:45:23.025520   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:45:23.025534   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:45:23.101813   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:45:23.101850   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:45:25.644577   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:45:25.657820   69333 kubeadm.go:597] duration metric: took 4m3.277962916s to restartPrimaryControlPlane
	W0927 01:45:25.657898   69333 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0927 01:45:25.657929   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0927 01:45:26.111439   69333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 01:45:26.128279   69333 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 01:45:26.138354   69333 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 01:45:26.148116   69333 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 01:45:26.148132   69333 kubeadm.go:157] found existing configuration files:
	
	I0927 01:45:26.148170   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 01:45:26.157965   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 01:45:26.158012   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 01:45:26.168349   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 01:45:26.177624   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 01:45:26.177692   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 01:45:26.187584   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 01:45:26.196800   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 01:45:26.196856   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 01:45:26.205894   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 01:45:26.215316   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 01:45:26.215365   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
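
The grep/rm sequence above is the stale-kubeconfig cleanup before re-running kubeadm: each file under /etc/kubernetes is checked for the expected control-plane endpoint and removed when it does not contain it (here every file is missing, so each grep exits with status 2 and the rm is a no-op). A minimal sketch of the equivalent shell steps, assuming the same endpoint and file list:

    # Sketch only (not minikube's Go code): drop any kubeconfig that does not
    # point at the expected control-plane endpoint before kubeadm regenerates it.
    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      path="/etc/kubernetes/$f"
      if ! sudo grep -q "$endpoint" "$path" 2>/dev/null; then
        # Missing file or wrong endpoint: remove it so kubeadm writes a fresh one.
        sudo rm -f "$path"
      fi
    done
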
	I0927 01:45:26.224989   69333 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0927 01:45:26.299149   69333 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0927 01:45:26.299261   69333 kubeadm.go:310] [preflight] Running pre-flight checks
	I0927 01:45:26.451113   69333 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0927 01:45:26.451282   69333 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0927 01:45:26.451457   69333 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0927 01:45:26.637960   69333 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0927 01:45:26.640682   69333 out.go:235]   - Generating certificates and keys ...
	I0927 01:45:26.640782   69333 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0927 01:45:26.640865   69333 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0927 01:45:26.640972   69333 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0927 01:45:26.641099   69333 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0927 01:45:26.641233   69333 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0927 01:45:26.641317   69333 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0927 01:45:26.641425   69333 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0927 01:45:26.641525   69333 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0927 01:45:26.641633   69333 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0927 01:45:26.641901   69333 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0927 01:45:26.642000   69333 kubeadm.go:310] [certs] Using the existing "sa" key
	I0927 01:45:26.642080   69333 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0927 01:45:26.782585   69333 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0927 01:45:27.008743   69333 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0927 01:45:27.103701   69333 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0927 01:45:27.217999   69333 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0927 01:45:27.238810   69333 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0927 01:45:27.240191   69333 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0927 01:45:27.240240   69333 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0927 01:45:27.375215   69333 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0927 01:45:23.521301   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:26.020002   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:28.021215   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:26.306744   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:28.308577   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:27.376992   69333 out.go:235]   - Booting up control plane ...
	I0927 01:45:27.377123   69333 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0927 01:45:27.386897   69333 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0927 01:45:27.387959   69333 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0927 01:45:27.388954   69333 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0927 01:45:27.392182   69333 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0927 01:45:30.520717   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:33.019981   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:30.808251   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:33.307139   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:35.020640   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:37.520220   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:35.307871   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:37.808604   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:41.262067   69234 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.225299595s)
	I0927 01:45:41.262142   69234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 01:45:41.294256   69234 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 01:45:41.304403   69234 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 01:45:41.314288   69234 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 01:45:41.314310   69234 kubeadm.go:157] found existing configuration files:
	
	I0927 01:45:41.314357   69234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 01:45:41.323280   69234 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 01:45:41.323335   69234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 01:45:41.332637   69234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 01:45:41.341492   69234 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 01:45:41.341552   69234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 01:45:41.352259   69234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 01:45:41.361190   69234 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 01:45:41.361244   69234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 01:45:41.370863   69234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 01:45:41.379674   69234 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 01:45:41.379735   69234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 01:45:41.389169   69234 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0927 01:45:41.434391   69234 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0927 01:45:41.434565   69234 kubeadm.go:310] [preflight] Running pre-flight checks
	I0927 01:45:41.537712   69234 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0927 01:45:41.537813   69234 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0927 01:45:41.537951   69234 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0927 01:45:41.546906   69234 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0927 01:45:41.548799   69234 out.go:235]   - Generating certificates and keys ...
	I0927 01:45:41.548882   69234 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0927 01:45:41.548959   69234 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0927 01:45:41.549049   69234 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0927 01:45:41.549133   69234 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0927 01:45:41.549239   69234 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0927 01:45:41.549328   69234 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0927 01:45:41.549433   69234 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0927 01:45:41.549531   69234 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0927 01:45:41.549619   69234 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0927 01:45:41.549691   69234 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0927 01:45:41.549741   69234 kubeadm.go:310] [certs] Using the existing "sa" key
	I0927 01:45:41.549813   69234 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0927 01:45:41.594579   69234 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0927 01:45:41.703970   69234 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0927 01:45:41.813013   69234 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0927 01:45:41.875564   69234 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0927 01:45:42.025627   69234 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0927 01:45:42.026325   69234 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0927 01:45:42.028784   69234 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0927 01:45:39.521118   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:42.020563   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:40.307764   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:42.307974   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:44.808238   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:42.030464   69234 out.go:235]   - Booting up control plane ...
	I0927 01:45:42.030566   69234 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0927 01:45:42.030674   69234 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0927 01:45:42.031152   69234 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0927 01:45:42.050207   69234 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0927 01:45:42.058709   69234 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0927 01:45:42.058766   69234 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0927 01:45:42.192498   69234 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0927 01:45:42.192628   69234 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0927 01:45:42.694670   69234 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.189114ms
	I0927 01:45:42.694812   69234 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0927 01:45:48.195975   69234 kubeadm.go:310] [api-check] The API server is healthy after 5.501110293s
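
The two wait phases above poll local health endpoints: the kubelet healthz on 127.0.0.1:10248 (healthy after ~502ms) and then the API server health check (healthy after ~5.5s). Equivalent manual probes, assuming the default kubelet healthz port and the API server listening on 8443 as in this profile:

    # Manual versions of the two health checks kubeadm reports above
    # (ports assumed from the log; -k skips TLS verification for the self-signed cert).
    curl -sf http://127.0.0.1:10248/healthz && echo "kubelet healthy"
    curl -skf https://127.0.0.1:8443/healthz && echo "API server healthy"
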
	I0927 01:45:48.210406   69234 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0927 01:45:48.231678   69234 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0927 01:45:48.257669   69234 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0927 01:45:48.257859   69234 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-245911 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0927 01:45:48.271429   69234 kubeadm.go:310] [bootstrap-token] Using token: bqds0t.3lt1vhl3zjbrkom6
	I0927 01:45:44.021019   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:46.520158   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:48.272667   69234 out.go:235]   - Configuring RBAC rules ...
	I0927 01:45:48.272775   69234 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0927 01:45:48.278773   69234 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0927 01:45:48.290868   69234 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0927 01:45:48.297879   69234 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0927 01:45:48.302011   69234 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0927 01:45:48.306217   69234 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0927 01:45:48.604161   69234 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0927 01:45:49.041505   69234 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0927 01:45:49.604127   69234 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0927 01:45:49.604867   69234 kubeadm.go:310] 
	I0927 01:45:49.604981   69234 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0927 01:45:49.605008   69234 kubeadm.go:310] 
	I0927 01:45:49.605136   69234 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0927 01:45:49.605147   69234 kubeadm.go:310] 
	I0927 01:45:49.605188   69234 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0927 01:45:49.605266   69234 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0927 01:45:49.605363   69234 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0927 01:45:49.605373   69234 kubeadm.go:310] 
	I0927 01:45:49.605446   69234 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0927 01:45:49.605455   69234 kubeadm.go:310] 
	I0927 01:45:49.605524   69234 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0927 01:45:49.605537   69234 kubeadm.go:310] 
	I0927 01:45:49.605612   69234 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0927 01:45:49.605725   69234 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0927 01:45:49.605826   69234 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0927 01:45:49.605836   69234 kubeadm.go:310] 
	I0927 01:45:49.605913   69234 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0927 01:45:49.606010   69234 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0927 01:45:49.606032   69234 kubeadm.go:310] 
	I0927 01:45:49.606130   69234 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token bqds0t.3lt1vhl3zjbrkom6 \
	I0927 01:45:49.606252   69234 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e8f2b64245b2533f2f6907f851e4fd61c8df67374e05cb5be25be73a61920f8e \
	I0927 01:45:49.606276   69234 kubeadm.go:310] 	--control-plane 
	I0927 01:45:49.606282   69234 kubeadm.go:310] 
	I0927 01:45:49.606404   69234 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0927 01:45:49.606421   69234 kubeadm.go:310] 
	I0927 01:45:49.606546   69234 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token bqds0t.3lt1vhl3zjbrkom6 \
	I0927 01:45:49.606692   69234 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e8f2b64245b2533f2f6907f851e4fd61c8df67374e05cb5be25be73a61920f8e 
	I0927 01:45:49.607952   69234 kubeadm.go:310] W0927 01:45:41.410128    2534 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 01:45:49.608322   69234 kubeadm.go:310] W0927 01:45:41.412009    2534 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 01:45:49.608494   69234 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0927 01:45:49.608518   69234 cni.go:84] Creating CNI manager for ""
	I0927 01:45:49.608527   69234 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:45:49.610175   69234 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0927 01:45:47.307006   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:49.307374   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:49.611562   69234 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0927 01:45:49.622683   69234 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
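
The two lines above write the bridge CNI configuration to /etc/cni/net.d/1-k8s.conflist (496 bytes in this run). The exact contents are generated by minikube and are not shown in the log; the snippet below is only a generic example of a bridge conflist in the standard CNI format, with an assumed pod subnet:

    # Illustrative only: a generic bridge CNI conflist in the standard format.
    # The subnet and names here are assumptions, not the bytes minikube wrote.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
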
	I0927 01:45:49.642326   69234 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0927 01:45:49.642366   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:45:49.642393   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-245911 minikube.k8s.io/updated_at=2024_09_27T01_45_49_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625 minikube.k8s.io/name=embed-certs-245911 minikube.k8s.io/primary=true
	I0927 01:45:49.677602   69234 ops.go:34] apiserver oom_adj: -16
	I0927 01:45:49.854320   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:45:50.355392   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:45:48.520718   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:50.520908   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:53.020638   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:50.854364   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:45:51.355074   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:45:51.855077   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:45:52.354509   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:45:52.855229   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:45:53.355204   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:45:53.854829   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:45:54.066909   69234 kubeadm.go:1113] duration metric: took 4.424595735s to wait for elevateKubeSystemPrivileges
	I0927 01:45:54.066954   69234 kubeadm.go:394] duration metric: took 4m55.454404762s to StartCluster
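
The half-second-spaced "kubectl get sa default" runs above are the wait for kube-system privileges to be elevated: the harness polls until the default ServiceAccount exists (about 4.4s here). A hedged sketch of that wait loop, using the binary and kubeconfig paths that appear in the log:

    # Sketch of the wait loop visible above: poll until the "default"
    # ServiceAccount is created, then proceed (simple-poll behavior assumed).
    KUBECTL=/var/lib/minikube/binaries/v1.31.1/kubectl
    KUBECONFIG=/var/lib/minikube/kubeconfig

    until sudo "$KUBECTL" get sa default --kubeconfig="$KUBECONFIG" >/dev/null 2>&1; do
      sleep 0.5   # the log shows retries roughly every 500ms
    done
    echo "default ServiceAccount present"
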
	I0927 01:45:54.066978   69234 settings.go:142] acquiring lock: {Name:mk5dca3ab86dd3a71947d9d84c3d32131258c6f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:45:54.067071   69234 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 01:45:54.069732   69234 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/kubeconfig: {Name:mke01ed683bdb96463571316956510763878395f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:45:54.070048   69234 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.158 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 01:45:54.070126   69234 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0927 01:45:54.070235   69234 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-245911"
	I0927 01:45:54.070257   69234 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-245911"
	I0927 01:45:54.070261   69234 addons.go:69] Setting default-storageclass=true in profile "embed-certs-245911"
	I0927 01:45:54.070270   69234 config.go:182] Loaded profile config "embed-certs-245911": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 01:45:54.070270   69234 addons.go:69] Setting metrics-server=true in profile "embed-certs-245911"
	I0927 01:45:54.070286   69234 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-245911"
	I0927 01:45:54.070296   69234 addons.go:234] Setting addon metrics-server=true in "embed-certs-245911"
	W0927 01:45:54.070305   69234 addons.go:243] addon metrics-server should already be in state true
	W0927 01:45:54.070266   69234 addons.go:243] addon storage-provisioner should already be in state true
	I0927 01:45:54.070339   69234 host.go:66] Checking if "embed-certs-245911" exists ...
	I0927 01:45:54.070339   69234 host.go:66] Checking if "embed-certs-245911" exists ...
	I0927 01:45:54.070750   69234 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:45:54.070790   69234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:45:54.070753   69234 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:45:54.070850   69234 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:45:54.070889   69234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:45:54.070936   69234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:45:54.071693   69234 out.go:177] * Verifying Kubernetes components...
	I0927 01:45:54.073034   69234 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:45:54.087559   69234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38159
	I0927 01:45:54.087567   69234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46827
	I0927 01:45:54.088061   69234 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:45:54.088074   69234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37787
	I0927 01:45:54.088183   69234 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:45:54.088412   69234 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:45:54.088551   69234 main.go:141] libmachine: Using API Version  1
	I0927 01:45:54.088573   69234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:45:54.088635   69234 main.go:141] libmachine: Using API Version  1
	I0927 01:45:54.088655   69234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:45:54.088852   69234 main.go:141] libmachine: Using API Version  1
	I0927 01:45:54.088874   69234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:45:54.088929   69234 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:45:54.089023   69234 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:45:54.089131   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetState
	I0927 01:45:54.089193   69234 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:45:54.089585   69234 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:45:54.089610   69234 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:45:54.089627   69234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:45:54.089639   69234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:45:54.092683   69234 addons.go:234] Setting addon default-storageclass=true in "embed-certs-245911"
	W0927 01:45:54.092705   69234 addons.go:243] addon default-storageclass should already be in state true
	I0927 01:45:54.092729   69234 host.go:66] Checking if "embed-certs-245911" exists ...
	I0927 01:45:54.093065   69234 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:45:54.093102   69234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:45:54.106496   69234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40273
	I0927 01:45:54.106952   69234 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:45:54.107486   69234 main.go:141] libmachine: Using API Version  1
	I0927 01:45:54.107513   69234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:45:54.108098   69234 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:45:54.108297   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetState
	I0927 01:45:54.109993   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	I0927 01:45:54.110532   69234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35519
	I0927 01:45:54.111066   69234 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:45:54.111688   69234 main.go:141] libmachine: Using API Version  1
	I0927 01:45:54.111708   69234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:45:54.111909   69234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35983
	I0927 01:45:54.112156   69234 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:45:54.112338   69234 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:45:54.112740   69234 main.go:141] libmachine: Using API Version  1
	I0927 01:45:54.112751   69234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:45:54.112832   69234 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:45:54.112953   69234 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:45:54.112987   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetState
	I0927 01:45:54.113345   69234 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:45:54.113372   69234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:45:54.114353   69234 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 01:45:54.114372   69234 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0927 01:45:54.114392   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:45:54.114596   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	I0927 01:45:54.116175   69234 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0927 01:45:51.806801   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:53.808476   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:54.117315   69234 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0927 01:45:54.117326   69234 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0927 01:45:54.117341   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:45:54.120242   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:45:54.120881   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:45:54.120903   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:45:54.121161   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:45:54.121224   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:45:54.121452   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:45:54.121658   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:45:54.121747   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:45:54.121944   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:45:54.121960   69234 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/embed-certs-245911/id_rsa Username:docker}
	I0927 01:45:54.121677   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:45:54.122386   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:45:54.122518   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:45:54.122695   69234 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/embed-certs-245911/id_rsa Username:docker}
	I0927 01:45:54.135920   69234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37351
	I0927 01:45:54.136247   69234 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:45:54.136682   69234 main.go:141] libmachine: Using API Version  1
	I0927 01:45:54.136696   69234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:45:54.136971   69234 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:45:54.137163   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetState
	I0927 01:45:54.138640   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	I0927 01:45:54.138903   69234 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0927 01:45:54.138919   69234 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0927 01:45:54.138936   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:45:54.141420   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:45:54.141786   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:45:54.141803   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:45:54.141966   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:45:54.142132   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:45:54.142235   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:45:54.142308   69234 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/embed-certs-245911/id_rsa Username:docker}
	I0927 01:45:54.325790   69234 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 01:45:54.375616   69234 node_ready.go:35] waiting up to 6m0s for node "embed-certs-245911" to be "Ready" ...
	I0927 01:45:54.386626   69234 node_ready.go:49] node "embed-certs-245911" has status "Ready":"True"
	I0927 01:45:54.386646   69234 node_ready.go:38] duration metric: took 10.995073ms for node "embed-certs-245911" to be "Ready" ...
	I0927 01:45:54.386654   69234 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:45:54.394605   69234 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-t4mxw" in "kube-system" namespace to be "Ready" ...
	I0927 01:45:54.458245   69234 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 01:45:54.501624   69234 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0927 01:45:54.501655   69234 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0927 01:45:54.508690   69234 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0927 01:45:54.548168   69234 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0927 01:45:54.548194   69234 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0927 01:45:54.615565   69234 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 01:45:54.615591   69234 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0927 01:45:54.655649   69234 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 01:45:55.488749   69234 main.go:141] libmachine: Making call to close driver server
	I0927 01:45:55.488849   69234 main.go:141] libmachine: (embed-certs-245911) Calling .Close
	I0927 01:45:55.488803   69234 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.030519069s)
	I0927 01:45:55.488934   69234 main.go:141] libmachine: Making call to close driver server
	I0927 01:45:55.488942   69234 main.go:141] libmachine: (embed-certs-245911) Calling .Close
	I0927 01:45:55.489266   69234 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:45:55.489282   69234 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:45:55.489290   69234 main.go:141] libmachine: Making call to close driver server
	I0927 01:45:55.489298   69234 main.go:141] libmachine: (embed-certs-245911) Calling .Close
	I0927 01:45:55.489377   69234 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:45:55.489393   69234 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:45:55.489401   69234 main.go:141] libmachine: Making call to close driver server
	I0927 01:45:55.489409   69234 main.go:141] libmachine: (embed-certs-245911) Calling .Close
	I0927 01:45:55.489511   69234 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:45:55.489528   69234 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:45:55.489540   69234 main.go:141] libmachine: (embed-certs-245911) DBG | Closing plugin on server side
	I0927 01:45:55.491047   69234 main.go:141] libmachine: (embed-certs-245911) DBG | Closing plugin on server side
	I0927 01:45:55.491082   69234 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:45:55.491093   69234 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:45:55.535220   69234 main.go:141] libmachine: Making call to close driver server
	I0927 01:45:55.535240   69234 main.go:141] libmachine: (embed-certs-245911) Calling .Close
	I0927 01:45:55.535604   69234 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:45:55.535625   69234 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:45:55.627642   69234 main.go:141] libmachine: Making call to close driver server
	I0927 01:45:55.627663   69234 main.go:141] libmachine: (embed-certs-245911) Calling .Close
	I0927 01:45:55.628020   69234 main.go:141] libmachine: (embed-certs-245911) DBG | Closing plugin on server side
	I0927 01:45:55.628025   69234 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:45:55.628047   69234 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:45:55.628055   69234 main.go:141] libmachine: Making call to close driver server
	I0927 01:45:55.628062   69234 main.go:141] libmachine: (embed-certs-245911) Calling .Close
	I0927 01:45:55.628294   69234 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:45:55.628311   69234 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:45:55.628322   69234 addons.go:475] Verifying addon metrics-server=true in "embed-certs-245911"
	I0927 01:45:55.629802   69234 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0927 01:45:55.022054   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:57.520749   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:56.307903   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:58.807972   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:55.631245   69234 addons.go:510] duration metric: took 1.561128577s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0927 01:45:56.401813   69234 pod_ready.go:103] pod "coredns-7c65d6cfc9-t4mxw" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:58.900688   69234 pod_ready.go:103] pod "coredns-7c65d6cfc9-t4mxw" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:59.521353   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:00.014813   69534 pod_ready.go:82] duration metric: took 4m0.000584515s for pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace to be "Ready" ...
	E0927 01:46:00.014858   69534 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace to be "Ready" (will not retry!)
	I0927 01:46:00.014878   69534 pod_ready.go:39] duration metric: took 4m13.043107791s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:46:00.014903   69534 kubeadm.go:597] duration metric: took 4m20.409702758s to restartPrimaryControlPlane
	W0927 01:46:00.014956   69534 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0927 01:46:00.014980   69534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0927 01:46:00.808408   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:02.808672   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:00.901714   69234 pod_ready.go:103] pod "coredns-7c65d6cfc9-t4mxw" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:02.902242   69234 pod_ready.go:103] pod "coredns-7c65d6cfc9-t4mxw" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:03.401910   69234 pod_ready.go:93] pod "coredns-7c65d6cfc9-t4mxw" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:03.401936   69234 pod_ready.go:82] duration metric: took 9.007296678s for pod "coredns-7c65d6cfc9-t4mxw" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:03.401948   69234 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-zp5f2" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:03.908874   69234 pod_ready.go:93] pod "coredns-7c65d6cfc9-zp5f2" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:03.908896   69234 pod_ready.go:82] duration metric: took 506.941437ms for pod "coredns-7c65d6cfc9-zp5f2" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:03.908918   69234 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:03.914117   69234 pod_ready.go:93] pod "etcd-embed-certs-245911" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:03.914135   69234 pod_ready.go:82] duration metric: took 5.210078ms for pod "etcd-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:03.914142   69234 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:03.918778   69234 pod_ready.go:93] pod "kube-apiserver-embed-certs-245911" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:03.918801   69234 pod_ready.go:82] duration metric: took 4.651828ms for pod "kube-apiserver-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:03.918812   69234 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:03.923979   69234 pod_ready.go:93] pod "kube-controller-manager-embed-certs-245911" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:03.923996   69234 pod_ready.go:82] duration metric: took 5.176348ms for pod "kube-controller-manager-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:03.924004   69234 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5l299" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:04.199586   69234 pod_ready.go:93] pod "kube-proxy-5l299" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:04.199612   69234 pod_ready.go:82] duration metric: took 275.601068ms for pod "kube-proxy-5l299" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:04.199621   69234 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:04.598852   69234 pod_ready.go:93] pod "kube-scheduler-embed-certs-245911" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:04.598880   69234 pod_ready.go:82] duration metric: took 399.251298ms for pod "kube-scheduler-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:04.598890   69234 pod_ready.go:39] duration metric: took 10.212226661s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:46:04.598905   69234 api_server.go:52] waiting for apiserver process to appear ...
	I0927 01:46:04.598962   69234 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:46:04.615194   69234 api_server.go:72] duration metric: took 10.545103977s to wait for apiserver process to appear ...
	I0927 01:46:04.615225   69234 api_server.go:88] waiting for apiserver healthz status ...
	I0927 01:46:04.615248   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:46:04.621164   69234 api_server.go:279] https://192.168.39.158:8443/healthz returned 200:
	ok
	I0927 01:46:04.622001   69234 api_server.go:141] control plane version: v1.31.1
	I0927 01:46:04.622022   69234 api_server.go:131] duration metric: took 6.789717ms to wait for apiserver health ...
	I0927 01:46:04.622032   69234 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 01:46:04.802641   69234 system_pods.go:59] 9 kube-system pods found
	I0927 01:46:04.802674   69234 system_pods.go:61] "coredns-7c65d6cfc9-t4mxw" [b3f9faa4-be80-40bf-9080-363fcbf3f084] Running
	I0927 01:46:04.802681   69234 system_pods.go:61] "coredns-7c65d6cfc9-zp5f2" [0829b4a4-1686-4f22-8368-65e3897604b0] Running
	I0927 01:46:04.802687   69234 system_pods.go:61] "etcd-embed-certs-245911" [8b1eb68b-4d88-4af3-a5df-3a6490d9d376] Running
	I0927 01:46:04.802693   69234 system_pods.go:61] "kube-apiserver-embed-certs-245911" [05ddc1b7-f7a9-4201-8d2e-2eb57d4e6731] Running
	I0927 01:46:04.802699   69234 system_pods.go:61] "kube-controller-manager-embed-certs-245911" [71c7cdfd-5e67-4876-9c00-31fff46c2b37] Running
	I0927 01:46:04.802703   69234 system_pods.go:61] "kube-proxy-5l299" [768ae3f5-2ebd-4db7-aa36-81c4f033d685] Running
	I0927 01:46:04.802708   69234 system_pods.go:61] "kube-scheduler-embed-certs-245911" [4111a186-de42-4004-bcdc-3e445142fca0] Running
	I0927 01:46:04.802717   69234 system_pods.go:61] "metrics-server-6867b74b74-k28wz" [1d369542-c088-4099-aa6f-9d3158f78f25] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 01:46:04.802722   69234 system_pods.go:61] "storage-provisioner" [0c48d125-370c-44a1-9ede-536881b40d57] Running
	I0927 01:46:04.802735   69234 system_pods.go:74] duration metric: took 180.694209ms to wait for pod list to return data ...
	I0927 01:46:04.802747   69234 default_sa.go:34] waiting for default service account to be created ...
	I0927 01:46:04.999578   69234 default_sa.go:45] found service account: "default"
	I0927 01:46:04.999603   69234 default_sa.go:55] duration metric: took 196.845725ms for default service account to be created ...
	I0927 01:46:04.999612   69234 system_pods.go:116] waiting for k8s-apps to be running ...
	I0927 01:46:05.201201   69234 system_pods.go:86] 9 kube-system pods found
	I0927 01:46:05.201228   69234 system_pods.go:89] "coredns-7c65d6cfc9-t4mxw" [b3f9faa4-be80-40bf-9080-363fcbf3f084] Running
	I0927 01:46:05.201233   69234 system_pods.go:89] "coredns-7c65d6cfc9-zp5f2" [0829b4a4-1686-4f22-8368-65e3897604b0] Running
	I0927 01:46:05.201237   69234 system_pods.go:89] "etcd-embed-certs-245911" [8b1eb68b-4d88-4af3-a5df-3a6490d9d376] Running
	I0927 01:46:05.201241   69234 system_pods.go:89] "kube-apiserver-embed-certs-245911" [05ddc1b7-f7a9-4201-8d2e-2eb57d4e6731] Running
	I0927 01:46:05.201244   69234 system_pods.go:89] "kube-controller-manager-embed-certs-245911" [71c7cdfd-5e67-4876-9c00-31fff46c2b37] Running
	I0927 01:46:05.201248   69234 system_pods.go:89] "kube-proxy-5l299" [768ae3f5-2ebd-4db7-aa36-81c4f033d685] Running
	I0927 01:46:05.201251   69234 system_pods.go:89] "kube-scheduler-embed-certs-245911" [4111a186-de42-4004-bcdc-3e445142fca0] Running
	I0927 01:46:05.201256   69234 system_pods.go:89] "metrics-server-6867b74b74-k28wz" [1d369542-c088-4099-aa6f-9d3158f78f25] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 01:46:05.201260   69234 system_pods.go:89] "storage-provisioner" [0c48d125-370c-44a1-9ede-536881b40d57] Running
	I0927 01:46:05.201268   69234 system_pods.go:126] duration metric: took 201.651734ms to wait for k8s-apps to be running ...
	I0927 01:46:05.201275   69234 system_svc.go:44] waiting for kubelet service to be running ....
	I0927 01:46:05.201315   69234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 01:46:05.216216   69234 system_svc.go:56] duration metric: took 14.930697ms WaitForService to wait for kubelet
	I0927 01:46:05.216248   69234 kubeadm.go:582] duration metric: took 11.146166369s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 01:46:05.216271   69234 node_conditions.go:102] verifying NodePressure condition ...
	I0927 01:46:05.400667   69234 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 01:46:05.400695   69234 node_conditions.go:123] node cpu capacity is 2
	I0927 01:46:05.400708   69234 node_conditions.go:105] duration metric: took 184.432904ms to run NodePressure ...
	I0927 01:46:05.400719   69234 start.go:241] waiting for startup goroutines ...
	I0927 01:46:05.400729   69234 start.go:246] waiting for cluster config update ...
	I0927 01:46:05.400743   69234 start.go:255] writing updated cluster config ...
	I0927 01:46:05.401134   69234 ssh_runner.go:195] Run: rm -f paused
	I0927 01:46:05.452606   69234 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0927 01:46:05.454631   69234 out.go:177] * Done! kubectl is now configured to use "embed-certs-245911" cluster and "default" namespace by default
	I0927 01:46:05.307371   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:07.807981   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:07.393548   69333 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0927 01:46:07.394304   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:46:07.394505   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:46:10.307311   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:12.308085   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:14.308664   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:12.395176   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:46:12.395434   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:46:16.807116   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:18.807652   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:21.307348   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:23.807597   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:26.304067   69534 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.289064717s)
	I0927 01:46:26.304150   69534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 01:46:26.341383   69534 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 01:46:26.365985   69534 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 01:46:26.382056   69534 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 01:46:26.382082   69534 kubeadm.go:157] found existing configuration files:
	
	I0927 01:46:26.382133   69534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0927 01:46:26.405820   69534 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 01:46:26.405881   69534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 01:46:26.416355   69534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0927 01:46:26.426710   69534 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 01:46:26.426759   69534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 01:46:26.438110   69534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0927 01:46:26.448631   69534 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 01:46:26.448691   69534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 01:46:26.458453   69534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0927 01:46:26.467677   69534 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 01:46:26.467724   69534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 01:46:26.478333   69534 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0927 01:46:26.528377   69534 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0927 01:46:26.528432   69534 kubeadm.go:310] [preflight] Running pre-flight checks
	I0927 01:46:26.653799   69534 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0927 01:46:26.653904   69534 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0927 01:46:26.654029   69534 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0927 01:46:26.666791   69534 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0927 01:46:22.395858   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:46:22.396073   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:46:26.668660   69534 out.go:235]   - Generating certificates and keys ...
	I0927 01:46:26.668739   69534 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0927 01:46:26.668803   69534 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0927 01:46:26.668918   69534 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0927 01:46:26.669012   69534 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0927 01:46:26.669103   69534 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0927 01:46:26.669178   69534 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0927 01:46:26.669308   69534 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0927 01:46:26.669628   69534 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0927 01:46:26.669868   69534 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0927 01:46:26.670086   69534 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0927 01:46:26.670284   69534 kubeadm.go:310] [certs] Using the existing "sa" key
	I0927 01:46:26.670395   69534 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0927 01:46:26.885345   69534 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0927 01:46:27.061416   69534 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0927 01:46:27.347409   69534 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0927 01:46:27.477340   69534 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0927 01:46:27.607326   69534 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0927 01:46:27.607882   69534 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0927 01:46:27.612459   69534 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0927 01:46:27.614167   69534 out.go:235]   - Booting up control plane ...
	I0927 01:46:27.614285   69534 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0927 01:46:27.614388   69534 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0927 01:46:27.614482   69534 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0927 01:46:27.635734   69534 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0927 01:46:27.642550   69534 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0927 01:46:27.642634   69534 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0927 01:46:27.778616   69534 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0927 01:46:27.778763   69534 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0927 01:46:28.280057   69534 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.328597ms
	I0927 01:46:28.280185   69534 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0927 01:46:25.808311   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:28.307033   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:33.781107   69534 kubeadm.go:310] [api-check] The API server is healthy after 5.501552407s
	I0927 01:46:33.796672   69534 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0927 01:46:33.809900   69534 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0927 01:46:33.845968   69534 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0927 01:46:33.846194   69534 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-368295 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0927 01:46:33.862294   69534 kubeadm.go:310] [bootstrap-token] Using token: qmzafx.lhyo0l65zryygr2x
	I0927 01:46:30.308436   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:32.809032   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:32.809057   68676 pod_ready.go:82] duration metric: took 4m0.007962887s for pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace to be "Ready" ...
	E0927 01:46:32.809066   68676 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0927 01:46:32.809075   68676 pod_ready.go:39] duration metric: took 4m5.043455674s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:46:32.809088   68676 api_server.go:52] waiting for apiserver process to appear ...
	I0927 01:46:32.809115   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:46:32.809175   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:46:32.871610   68676 cri.go:89] found id: "d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef"
	I0927 01:46:32.871629   68676 cri.go:89] found id: ""
	I0927 01:46:32.871636   68676 logs.go:276] 1 containers: [d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef]
	I0927 01:46:32.871682   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:32.878223   68676 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:46:32.878296   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:46:32.925139   68676 cri.go:89] found id: "703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0"
	I0927 01:46:32.925173   68676 cri.go:89] found id: ""
	I0927 01:46:32.925182   68676 logs.go:276] 1 containers: [703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0]
	I0927 01:46:32.925238   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:32.929961   68676 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:46:32.930023   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:46:32.969777   68676 cri.go:89] found id: "5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0"
	I0927 01:46:32.969799   68676 cri.go:89] found id: ""
	I0927 01:46:32.969807   68676 logs.go:276] 1 containers: [5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0]
	I0927 01:46:32.969854   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:32.979003   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:46:32.979088   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:46:33.029458   68676 cri.go:89] found id: "22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05"
	I0927 01:46:33.029532   68676 cri.go:89] found id: ""
	I0927 01:46:33.029546   68676 logs.go:276] 1 containers: [22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05]
	I0927 01:46:33.029609   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:33.036703   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:46:33.036777   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:46:33.085041   68676 cri.go:89] found id: "d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f"
	I0927 01:46:33.085058   68676 cri.go:89] found id: ""
	I0927 01:46:33.085065   68676 logs.go:276] 1 containers: [d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f]
	I0927 01:46:33.085125   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:33.090305   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:46:33.090372   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:46:33.136837   68676 cri.go:89] found id: "56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647"
	I0927 01:46:33.136857   68676 cri.go:89] found id: ""
	I0927 01:46:33.136865   68676 logs.go:276] 1 containers: [56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647]
	I0927 01:46:33.136913   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:33.141483   68676 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:46:33.141543   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:46:33.182913   68676 cri.go:89] found id: ""
	I0927 01:46:33.182939   68676 logs.go:276] 0 containers: []
	W0927 01:46:33.182950   68676 logs.go:278] No container was found matching "kindnet"
	I0927 01:46:33.182956   68676 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0927 01:46:33.183002   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0927 01:46:33.237031   68676 cri.go:89] found id: "8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f"
	I0927 01:46:33.237055   68676 cri.go:89] found id: "074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c"
	I0927 01:46:33.237061   68676 cri.go:89] found id: ""
	I0927 01:46:33.237070   68676 logs.go:276] 2 containers: [8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f 074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c]
	I0927 01:46:33.237121   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:33.241969   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:33.246733   68676 logs.go:123] Gathering logs for kube-apiserver [d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef] ...
	I0927 01:46:33.246760   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef"
	I0927 01:46:33.294096   68676 logs.go:123] Gathering logs for kube-controller-manager [56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647] ...
	I0927 01:46:33.294128   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647"
	I0927 01:46:33.357981   68676 logs.go:123] Gathering logs for storage-provisioner [074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c] ...
	I0927 01:46:33.358029   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c"
	I0927 01:46:33.397465   68676 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:46:33.397500   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:46:33.922831   68676 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:46:33.922869   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 01:46:34.067117   68676 logs.go:123] Gathering logs for dmesg ...
	I0927 01:46:34.067152   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:46:34.082191   68676 logs.go:123] Gathering logs for etcd [703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0] ...
	I0927 01:46:34.082218   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0"
	I0927 01:46:34.126416   68676 logs.go:123] Gathering logs for coredns [5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0] ...
	I0927 01:46:34.126454   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0"
	I0927 01:46:34.166714   68676 logs.go:123] Gathering logs for kube-scheduler [22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05] ...
	I0927 01:46:34.166744   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05"
	I0927 01:46:34.206601   68676 logs.go:123] Gathering logs for kube-proxy [d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f] ...
	I0927 01:46:34.206642   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f"
	I0927 01:46:34.254352   68676 logs.go:123] Gathering logs for storage-provisioner [8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f] ...
	I0927 01:46:34.254383   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f"
	I0927 01:46:34.293318   68676 logs.go:123] Gathering logs for container status ...
	I0927 01:46:34.293347   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:46:34.340365   68676 logs.go:123] Gathering logs for kubelet ...
	I0927 01:46:34.340398   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:46:33.863782   69534 out.go:235]   - Configuring RBAC rules ...
	I0927 01:46:33.863922   69534 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0927 01:46:33.871841   69534 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0927 01:46:33.880047   69534 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0927 01:46:33.884688   69534 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0927 01:46:33.892057   69534 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0927 01:46:33.895787   69534 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0927 01:46:34.190553   69534 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0927 01:46:34.619922   69534 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0927 01:46:35.188452   69534 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0927 01:46:35.189552   69534 kubeadm.go:310] 
	I0927 01:46:35.189661   69534 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0927 01:46:35.189683   69534 kubeadm.go:310] 
	I0927 01:46:35.189791   69534 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0927 01:46:35.189806   69534 kubeadm.go:310] 
	I0927 01:46:35.189845   69534 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0927 01:46:35.189925   69534 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0927 01:46:35.190002   69534 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0927 01:46:35.190016   69534 kubeadm.go:310] 
	I0927 01:46:35.190095   69534 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0927 01:46:35.190104   69534 kubeadm.go:310] 
	I0927 01:46:35.190181   69534 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0927 01:46:35.190193   69534 kubeadm.go:310] 
	I0927 01:46:35.190264   69534 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0927 01:46:35.190387   69534 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0927 01:46:35.190484   69534 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0927 01:46:35.190498   69534 kubeadm.go:310] 
	I0927 01:46:35.190593   69534 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0927 01:46:35.190681   69534 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0927 01:46:35.190691   69534 kubeadm.go:310] 
	I0927 01:46:35.190793   69534 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token qmzafx.lhyo0l65zryygr2x \
	I0927 01:46:35.190948   69534 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e8f2b64245b2533f2f6907f851e4fd61c8df67374e05cb5be25be73a61920f8e \
	I0927 01:46:35.191002   69534 kubeadm.go:310] 	--control-plane 
	I0927 01:46:35.191021   69534 kubeadm.go:310] 
	I0927 01:46:35.191134   69534 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0927 01:46:35.191155   69534 kubeadm.go:310] 
	I0927 01:46:35.191281   69534 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token qmzafx.lhyo0l65zryygr2x \
	I0927 01:46:35.191427   69534 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e8f2b64245b2533f2f6907f851e4fd61c8df67374e05cb5be25be73a61920f8e 
	I0927 01:46:35.192564   69534 kubeadm.go:310] W0927 01:46:26.480521    2541 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 01:46:35.192905   69534 kubeadm.go:310] W0927 01:46:26.481198    2541 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 01:46:35.193078   69534 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0927 01:46:35.193093   69534 cni.go:84] Creating CNI manager for ""
	I0927 01:46:35.193102   69534 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:46:35.194656   69534 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0927 01:46:35.195835   69534 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0927 01:46:35.207162   69534 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0927 01:46:35.225999   69534 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0927 01:46:35.226096   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-368295 minikube.k8s.io/updated_at=2024_09_27T01_46_35_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625 minikube.k8s.io/name=default-k8s-diff-port-368295 minikube.k8s.io/primary=true
	I0927 01:46:35.226096   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:46:35.258203   69534 ops.go:34] apiserver oom_adj: -16
	I0927 01:46:35.425367   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:46:35.926435   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:46:36.425611   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:46:36.925505   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:46:37.426329   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:46:37.926184   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:46:38.425745   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:46:38.925572   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:46:39.425831   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:46:39.508783   69534 kubeadm.go:1113] duration metric: took 4.282764601s to wait for elevateKubeSystemPrivileges
	I0927 01:46:39.508817   69534 kubeadm.go:394] duration metric: took 4m59.95903234s to StartCluster
	I0927 01:46:39.508838   69534 settings.go:142] acquiring lock: {Name:mk5dca3ab86dd3a71947d9d84c3d32131258c6f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:46:39.508930   69534 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 01:46:39.510771   69534 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/kubeconfig: {Name:mke01ed683bdb96463571316956510763878395f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:46:39.511005   69534 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.83 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 01:46:39.511071   69534 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0927 01:46:39.511194   69534 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-368295"
	I0927 01:46:39.511214   69534 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-368295"
	I0927 01:46:39.511230   69534 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-368295"
	I0927 01:46:39.511261   69534 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-368295"
	W0927 01:46:39.511276   69534 addons.go:243] addon metrics-server should already be in state true
	I0927 01:46:39.511325   69534 host.go:66] Checking if "default-k8s-diff-port-368295" exists ...
	I0927 01:46:39.511243   69534 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-368295"
	I0927 01:46:39.511225   69534 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-368295"
	W0927 01:46:39.511515   69534 addons.go:243] addon storage-provisioner should already be in state true
	I0927 01:46:39.511538   69534 host.go:66] Checking if "default-k8s-diff-port-368295" exists ...
	I0927 01:46:39.511223   69534 config.go:182] Loaded profile config "default-k8s-diff-port-368295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 01:46:39.511772   69534 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:46:39.511818   69534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:46:39.511844   69534 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:46:39.511772   69534 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:46:39.511877   69534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:46:39.511905   69534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:46:39.513051   69534 out.go:177] * Verifying Kubernetes components...
	I0927 01:46:39.514530   69534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:46:39.528031   69534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32777
	I0927 01:46:39.528033   69534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43693
	I0927 01:46:39.528446   69534 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:46:39.528603   69534 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:46:39.528997   69534 main.go:141] libmachine: Using API Version  1
	I0927 01:46:39.529022   69534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:46:39.529085   69534 main.go:141] libmachine: Using API Version  1
	I0927 01:46:39.529101   69534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:46:39.529210   69534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37121
	I0927 01:46:39.529421   69534 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:46:39.529721   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetState
	I0927 01:46:39.529743   69534 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:46:39.529724   69534 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:46:39.530304   69534 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:46:39.530358   69534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:46:39.530308   69534 main.go:141] libmachine: Using API Version  1
	I0927 01:46:39.530423   69534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:46:39.530762   69534 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:46:39.531337   69534 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:46:39.531389   69534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:46:39.533286   69534 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-368295"
	W0927 01:46:39.533306   69534 addons.go:243] addon default-storageclass should already be in state true
	I0927 01:46:39.533333   69534 host.go:66] Checking if "default-k8s-diff-port-368295" exists ...
	I0927 01:46:39.533656   69534 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:46:39.533692   69534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:46:39.546657   69534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44507
	I0927 01:46:39.546881   69534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42459
	I0927 01:46:39.547298   69534 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:46:39.547327   69534 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:46:39.547842   69534 main.go:141] libmachine: Using API Version  1
	I0927 01:46:39.547860   69534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:46:39.547860   69534 main.go:141] libmachine: Using API Version  1
	I0927 01:46:39.547876   69534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:46:39.548220   69534 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:46:39.548239   69534 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:46:39.548435   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetState
	I0927 01:46:39.548481   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetState
	I0927 01:46:39.550160   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:46:39.550384   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:46:39.550445   69534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41657
	I0927 01:46:39.550744   69534 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:46:39.551173   69534 main.go:141] libmachine: Using API Version  1
	I0927 01:46:39.551195   69534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:46:39.551525   69534 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:46:39.552620   69534 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:46:39.552652   69534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:46:39.552838   69534 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:46:39.552916   69534 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0927 01:46:36.914500   68676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:46:36.932340   68676 api_server.go:72] duration metric: took 4m14.883408931s to wait for apiserver process to appear ...
	I0927 01:46:36.932368   68676 api_server.go:88] waiting for apiserver healthz status ...
	I0927 01:46:36.932407   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:46:36.932465   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:46:36.967757   68676 cri.go:89] found id: "d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef"
	I0927 01:46:36.967780   68676 cri.go:89] found id: ""
	I0927 01:46:36.967787   68676 logs.go:276] 1 containers: [d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef]
	I0927 01:46:36.967832   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:36.972025   68676 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:46:36.972105   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:46:37.018403   68676 cri.go:89] found id: "703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0"
	I0927 01:46:37.018431   68676 cri.go:89] found id: ""
	I0927 01:46:37.018448   68676 logs.go:276] 1 containers: [703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0]
	I0927 01:46:37.018515   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:37.022868   68676 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:46:37.022925   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:46:37.062443   68676 cri.go:89] found id: "5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0"
	I0927 01:46:37.062466   68676 cri.go:89] found id: ""
	I0927 01:46:37.062474   68676 logs.go:276] 1 containers: [5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0]
	I0927 01:46:37.062534   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:37.066617   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:46:37.066674   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:46:37.101462   68676 cri.go:89] found id: "22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05"
	I0927 01:46:37.101489   68676 cri.go:89] found id: ""
	I0927 01:46:37.101500   68676 logs.go:276] 1 containers: [22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05]
	I0927 01:46:37.101557   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:37.105564   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:46:37.105620   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:46:37.143692   68676 cri.go:89] found id: "d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f"
	I0927 01:46:37.143719   68676 cri.go:89] found id: ""
	I0927 01:46:37.143729   68676 logs.go:276] 1 containers: [d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f]
	I0927 01:46:37.143775   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:37.148405   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:46:37.148484   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:46:37.184914   68676 cri.go:89] found id: "56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647"
	I0927 01:46:37.184943   68676 cri.go:89] found id: ""
	I0927 01:46:37.184954   68676 logs.go:276] 1 containers: [56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647]
	I0927 01:46:37.185013   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:37.189486   68676 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:46:37.189553   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:46:37.235389   68676 cri.go:89] found id: ""
	I0927 01:46:37.235416   68676 logs.go:276] 0 containers: []
	W0927 01:46:37.235424   68676 logs.go:278] No container was found matching "kindnet"
	I0927 01:46:37.235429   68676 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0927 01:46:37.235480   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0927 01:46:37.276239   68676 cri.go:89] found id: "8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f"
	I0927 01:46:37.276266   68676 cri.go:89] found id: "074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c"
	I0927 01:46:37.276272   68676 cri.go:89] found id: ""
	I0927 01:46:37.276282   68676 logs.go:276] 2 containers: [8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f 074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c]
	I0927 01:46:37.276338   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:37.280381   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:37.284423   68676 logs.go:123] Gathering logs for coredns [5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0] ...
	I0927 01:46:37.284440   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0"
	I0927 01:46:37.319790   68676 logs.go:123] Gathering logs for kube-scheduler [22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05] ...
	I0927 01:46:37.319816   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05"
	I0927 01:46:37.358818   68676 logs.go:123] Gathering logs for kube-proxy [d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f] ...
	I0927 01:46:37.358843   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f"
	I0927 01:46:37.398137   68676 logs.go:123] Gathering logs for kube-controller-manager [56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647] ...
	I0927 01:46:37.398168   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647"
	I0927 01:46:37.458672   68676 logs.go:123] Gathering logs for dmesg ...
	I0927 01:46:37.458720   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:46:37.476148   68676 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:46:37.476184   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 01:46:37.604190   68676 logs.go:123] Gathering logs for kube-apiserver [d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef] ...
	I0927 01:46:37.604223   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef"
	I0927 01:46:37.652633   68676 logs.go:123] Gathering logs for etcd [703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0] ...
	I0927 01:46:37.652671   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0"
	I0927 01:46:37.701240   68676 logs.go:123] Gathering logs for storage-provisioner [8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f] ...
	I0927 01:46:37.701273   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f"
	I0927 01:46:37.739555   68676 logs.go:123] Gathering logs for storage-provisioner [074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c] ...
	I0927 01:46:37.739583   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c"
	I0927 01:46:37.781721   68676 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:46:37.781750   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:46:38.209361   68676 logs.go:123] Gathering logs for container status ...
	I0927 01:46:38.209399   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:46:38.261628   68676 logs.go:123] Gathering logs for kubelet ...
	I0927 01:46:38.261658   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
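	The block above shows minikube's log-gathering pattern on this node: each control-plane component is first located with crictl by name, its last 400 log lines are pulled, and CRI-O, kubelet, and kernel logs are collected alongside. A minimal sketch of the same sequence run by hand over SSH into the node, using only the commands that appear in the log (the container ID placeholder below is illustrative and comes from the preceding crictl ps call):
	
	    sudo crictl ps -a --quiet --name=kube-apiserver                                   # resolve the component's container ID
	    sudo /usr/bin/crictl logs --tail 400 <CONTAINER_ID>                               # last 400 lines of that container
	    sudo journalctl -u crio -n 400                                                    # CRI-O runtime logs
	    sudo journalctl -u kubelet -n 400                                                 # kubelet logs
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400           # recent kernel warnings/errors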
	I0927 01:46:39.554328   69534 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 01:46:39.554342   69534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0927 01:46:39.554362   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:46:39.554446   69534 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0927 01:46:39.554456   69534 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0927 01:46:39.554469   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:46:39.557886   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:46:39.557982   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:46:39.558093   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:46:39.558121   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:46:39.558269   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:46:39.558350   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:46:39.558369   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:46:39.558466   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:46:39.558620   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:46:39.558690   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:46:39.558740   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:46:39.558797   69534 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/default-k8s-diff-port-368295/id_rsa Username:docker}
	I0927 01:46:39.559026   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:46:39.559136   69534 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/default-k8s-diff-port-368295/id_rsa Username:docker}
	I0927 01:46:39.569570   69534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33177
	I0927 01:46:39.569981   69534 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:46:39.570364   69534 main.go:141] libmachine: Using API Version  1
	I0927 01:46:39.570383   69534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:46:39.570746   69534 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:46:39.570890   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetState
	I0927 01:46:39.572537   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:46:39.572779   69534 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0927 01:46:39.572795   69534 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0927 01:46:39.572815   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:46:39.575104   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:46:39.575384   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:46:39.575435   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:46:39.575595   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:46:39.575751   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:46:39.575844   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:46:39.575960   69534 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/default-k8s-diff-port-368295/id_rsa Username:docker}
	I0927 01:46:39.784965   69534 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 01:46:39.820986   69534 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-368295" to be "Ready" ...
	I0927 01:46:39.829323   69534 node_ready.go:49] node "default-k8s-diff-port-368295" has status "Ready":"True"
	I0927 01:46:39.829346   69534 node_ready.go:38] duration metric: took 8.333848ms for node "default-k8s-diff-port-368295" to be "Ready" ...
	I0927 01:46:39.829358   69534 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:46:39.836143   69534 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-4d7pk" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:39.940697   69534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0927 01:46:39.955239   69534 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0927 01:46:39.955264   69534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0927 01:46:40.076199   69534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 01:46:40.080720   69534 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0927 01:46:40.080746   69534 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0927 01:46:40.182698   69534 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 01:46:40.182720   69534 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0927 01:46:40.219231   69534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 01:46:40.431480   69534 main.go:141] libmachine: Making call to close driver server
	I0927 01:46:40.431505   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .Close
	I0927 01:46:40.431859   69534 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:46:40.431875   69534 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:46:40.431875   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | Closing plugin on server side
	I0927 01:46:40.431889   69534 main.go:141] libmachine: Making call to close driver server
	I0927 01:46:40.431898   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .Close
	I0927 01:46:40.432126   69534 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:46:40.432146   69534 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:46:40.432189   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | Closing plugin on server side
	I0927 01:46:40.442440   69534 main.go:141] libmachine: Making call to close driver server
	I0927 01:46:40.442468   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .Close
	I0927 01:46:40.442761   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | Closing plugin on server side
	I0927 01:46:40.442785   69534 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:46:40.442815   69534 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:46:41.044597   69534 main.go:141] libmachine: Making call to close driver server
	I0927 01:46:41.044627   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .Close
	I0927 01:46:41.044964   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | Closing plugin on server side
	I0927 01:46:41.045013   69534 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:46:41.045021   69534 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:46:41.045033   69534 main.go:141] libmachine: Making call to close driver server
	I0927 01:46:41.045041   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .Close
	I0927 01:46:41.045254   69534 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:46:41.045267   69534 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:46:41.427791   69534 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.208520131s)
	I0927 01:46:41.427843   69534 main.go:141] libmachine: Making call to close driver server
	I0927 01:46:41.427859   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .Close
	I0927 01:46:41.428175   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | Closing plugin on server side
	I0927 01:46:41.428184   69534 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:46:41.428196   69534 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:46:41.428205   69534 main.go:141] libmachine: Making call to close driver server
	I0927 01:46:41.428213   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .Close
	I0927 01:46:41.428477   69534 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:46:41.428490   69534 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:46:41.428500   69534 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-368295"
	I0927 01:46:41.430399   69534 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0927 01:46:41.431795   69534 addons.go:510] duration metric: took 1.920729429s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0927 01:46:41.844911   69534 pod_ready.go:103] pod "coredns-7c65d6cfc9-4d7pk" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:40.832698   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:46:40.838244   68676 api_server.go:279] https://192.168.50.246:8443/healthz returned 200:
	ok
	I0927 01:46:40.839252   68676 api_server.go:141] control plane version: v1.31.1
	I0927 01:46:40.839270   68676 api_server.go:131] duration metric: took 3.906895557s to wait for apiserver health ...
	I0927 01:46:40.839277   68676 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 01:46:40.839312   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:46:40.839373   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:46:40.879726   68676 cri.go:89] found id: "d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef"
	I0927 01:46:40.879753   68676 cri.go:89] found id: ""
	I0927 01:46:40.879763   68676 logs.go:276] 1 containers: [d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef]
	I0927 01:46:40.879822   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:40.884233   68676 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:46:40.884301   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:46:40.936189   68676 cri.go:89] found id: "703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0"
	I0927 01:46:40.936216   68676 cri.go:89] found id: ""
	I0927 01:46:40.936226   68676 logs.go:276] 1 containers: [703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0]
	I0927 01:46:40.936289   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:40.940805   68676 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:46:40.940885   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:46:40.978662   68676 cri.go:89] found id: "5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0"
	I0927 01:46:40.978683   68676 cri.go:89] found id: ""
	I0927 01:46:40.978693   68676 logs.go:276] 1 containers: [5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0]
	I0927 01:46:40.978757   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:40.983357   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:46:40.983428   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:46:41.027134   68676 cri.go:89] found id: "22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05"
	I0927 01:46:41.027160   68676 cri.go:89] found id: ""
	I0927 01:46:41.027170   68676 logs.go:276] 1 containers: [22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05]
	I0927 01:46:41.027229   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:41.031909   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:46:41.031986   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:46:41.077539   68676 cri.go:89] found id: "d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f"
	I0927 01:46:41.077568   68676 cri.go:89] found id: ""
	I0927 01:46:41.077577   68676 logs.go:276] 1 containers: [d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f]
	I0927 01:46:41.077638   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:41.082237   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:46:41.082314   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:46:41.122413   68676 cri.go:89] found id: "56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647"
	I0927 01:46:41.122437   68676 cri.go:89] found id: ""
	I0927 01:46:41.122446   68676 logs.go:276] 1 containers: [56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647]
	I0927 01:46:41.122501   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:41.127807   68676 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:46:41.127872   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:46:41.174287   68676 cri.go:89] found id: ""
	I0927 01:46:41.174320   68676 logs.go:276] 0 containers: []
	W0927 01:46:41.174331   68676 logs.go:278] No container was found matching "kindnet"
	I0927 01:46:41.174339   68676 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0927 01:46:41.174397   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0927 01:46:41.213192   68676 cri.go:89] found id: "8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f"
	I0927 01:46:41.213219   68676 cri.go:89] found id: "074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c"
	I0927 01:46:41.213225   68676 cri.go:89] found id: ""
	I0927 01:46:41.213234   68676 logs.go:276] 2 containers: [8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f 074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c]
	I0927 01:46:41.213298   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:41.218168   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:41.227165   68676 logs.go:123] Gathering logs for storage-provisioner [8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f] ...
	I0927 01:46:41.227194   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f"
	I0927 01:46:41.269538   68676 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:46:41.269571   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:46:41.691900   68676 logs.go:123] Gathering logs for dmesg ...
	I0927 01:46:41.691943   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:46:41.709639   68676 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:46:41.709682   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 01:46:41.829334   68676 logs.go:123] Gathering logs for etcd [703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0] ...
	I0927 01:46:41.829366   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0"
	I0927 01:46:41.886517   68676 logs.go:123] Gathering logs for kube-scheduler [22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05] ...
	I0927 01:46:41.886552   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05"
	I0927 01:46:41.933012   68676 logs.go:123] Gathering logs for kube-proxy [d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f] ...
	I0927 01:46:41.933035   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f"
	I0927 01:46:41.973881   68676 logs.go:123] Gathering logs for kube-controller-manager [56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647] ...
	I0927 01:46:41.973921   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647"
	I0927 01:46:42.032592   68676 logs.go:123] Gathering logs for container status ...
	I0927 01:46:42.032628   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:46:42.087817   68676 logs.go:123] Gathering logs for kubelet ...
	I0927 01:46:42.087856   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:46:42.162770   68676 logs.go:123] Gathering logs for kube-apiserver [d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef] ...
	I0927 01:46:42.162808   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef"
	I0927 01:46:42.213367   68676 logs.go:123] Gathering logs for coredns [5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0] ...
	I0927 01:46:42.213399   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0"
	I0927 01:46:42.254937   68676 logs.go:123] Gathering logs for storage-provisioner [074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c] ...
	I0927 01:46:42.254963   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c"
	I0927 01:46:44.804112   68676 system_pods.go:59] 8 kube-system pods found
	I0927 01:46:44.804146   68676 system_pods.go:61] "coredns-7c65d6cfc9-7q54t" [f320e945-a1d6-4109-a0cc-5bd4e3c1bfba] Running
	I0927 01:46:44.804153   68676 system_pods.go:61] "etcd-no-preload-521072" [6c63ce89-47bf-4d67-b5db-273a046c4b51] Running
	I0927 01:46:44.804158   68676 system_pods.go:61] "kube-apiserver-no-preload-521072" [e4804d4b-0532-46c7-8579-a829a6c5254c] Running
	I0927 01:46:44.804162   68676 system_pods.go:61] "kube-controller-manager-no-preload-521072" [5029e53b-ae24-41fb-aa58-14faf0440adb] Running
	I0927 01:46:44.804167   68676 system_pods.go:61] "kube-proxy-wkcb8" [ea79339c-b2f0-4cb8-ab57-4f13f689f504] Running
	I0927 01:46:44.804171   68676 system_pods.go:61] "kube-scheduler-no-preload-521072" [b70fd9f0-c131-4c13-b53f-46c650a5dcf8] Running
	I0927 01:46:44.804180   68676 system_pods.go:61] "metrics-server-6867b74b74-cc9pp" [a840ca52-d2b8-47a5-b379-30504658e0d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 01:46:44.804186   68676 system_pods.go:61] "storage-provisioner" [b4595dc3-c439-4615-95b7-2009476c779c] Running
	I0927 01:46:44.804196   68676 system_pods.go:74] duration metric: took 3.964911623s to wait for pod list to return data ...
	I0927 01:46:44.804208   68676 default_sa.go:34] waiting for default service account to be created ...
	I0927 01:46:44.807883   68676 default_sa.go:45] found service account: "default"
	I0927 01:46:44.807907   68676 default_sa.go:55] duration metric: took 3.689984ms for default service account to be created ...
	I0927 01:46:44.807917   68676 system_pods.go:116] waiting for k8s-apps to be running ...
	I0927 01:46:44.812135   68676 system_pods.go:86] 8 kube-system pods found
	I0927 01:46:44.812161   68676 system_pods.go:89] "coredns-7c65d6cfc9-7q54t" [f320e945-a1d6-4109-a0cc-5bd4e3c1bfba] Running
	I0927 01:46:44.812167   68676 system_pods.go:89] "etcd-no-preload-521072" [6c63ce89-47bf-4d67-b5db-273a046c4b51] Running
	I0927 01:46:44.812174   68676 system_pods.go:89] "kube-apiserver-no-preload-521072" [e4804d4b-0532-46c7-8579-a829a6c5254c] Running
	I0927 01:46:44.812178   68676 system_pods.go:89] "kube-controller-manager-no-preload-521072" [5029e53b-ae24-41fb-aa58-14faf0440adb] Running
	I0927 01:46:44.812185   68676 system_pods.go:89] "kube-proxy-wkcb8" [ea79339c-b2f0-4cb8-ab57-4f13f689f504] Running
	I0927 01:46:44.812190   68676 system_pods.go:89] "kube-scheduler-no-preload-521072" [b70fd9f0-c131-4c13-b53f-46c650a5dcf8] Running
	I0927 01:46:44.812200   68676 system_pods.go:89] "metrics-server-6867b74b74-cc9pp" [a840ca52-d2b8-47a5-b379-30504658e0d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 01:46:44.812209   68676 system_pods.go:89] "storage-provisioner" [b4595dc3-c439-4615-95b7-2009476c779c] Running
	I0927 01:46:44.812222   68676 system_pods.go:126] duration metric: took 4.297317ms to wait for k8s-apps to be running ...
	I0927 01:46:44.812234   68676 system_svc.go:44] waiting for kubelet service to be running ....
	I0927 01:46:44.812282   68676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 01:46:44.827911   68676 system_svc.go:56] duration metric: took 15.668154ms WaitForService to wait for kubelet
	I0927 01:46:44.827946   68676 kubeadm.go:582] duration metric: took 4m22.779012486s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 01:46:44.827964   68676 node_conditions.go:102] verifying NodePressure condition ...
	I0927 01:46:44.830688   68676 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 01:46:44.830707   68676 node_conditions.go:123] node cpu capacity is 2
	I0927 01:46:44.830716   68676 node_conditions.go:105] duration metric: took 2.747178ms to run NodePressure ...
	I0927 01:46:44.830725   68676 start.go:241] waiting for startup goroutines ...
	I0927 01:46:44.830732   68676 start.go:246] waiting for cluster config update ...
	I0927 01:46:44.830742   68676 start.go:255] writing updated cluster config ...
	I0927 01:46:44.830990   68676 ssh_runner.go:195] Run: rm -f paused
	I0927 01:46:44.881491   68676 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0927 01:46:44.884307   68676 out.go:177] * Done! kubectl is now configured to use "no-preload-521072" cluster and "default" namespace by default
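	The health probe logged above (api_server.go:253/279) is a plain HTTPS GET against the apiserver's /healthz endpoint, which returns the literal string "ok" once the control plane is serving. A rough manual equivalent against the endpoint shown in the log (the -k flag, which skips certificate verification, is an assumption for brevity and not part of the original run):
	
	    curl -sk https://192.168.50.246:8443/healthz
	    # expected output when healthy: ok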
	I0927 01:46:42.397038   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:46:42.397331   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:46:43.845539   69534 pod_ready.go:103] pod "coredns-7c65d6cfc9-4d7pk" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:46.343584   69534 pod_ready.go:103] pod "coredns-7c65d6cfc9-4d7pk" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:48.842505   69534 pod_ready.go:93] pod "coredns-7c65d6cfc9-4d7pk" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:48.842527   69534 pod_ready.go:82] duration metric: took 9.006354643s for pod "coredns-7c65d6cfc9-4d7pk" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:48.842537   69534 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qkbzv" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:48.846753   69534 pod_ready.go:93] pod "coredns-7c65d6cfc9-qkbzv" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:48.846771   69534 pod_ready.go:82] duration metric: took 4.228349ms for pod "coredns-7c65d6cfc9-qkbzv" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:48.846780   69534 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:48.851234   69534 pod_ready.go:93] pod "etcd-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:48.851256   69534 pod_ready.go:82] duration metric: took 4.468727ms for pod "etcd-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:48.851265   69534 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:48.855648   69534 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:48.855669   69534 pod_ready.go:82] duration metric: took 4.398439ms for pod "kube-apiserver-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:48.855678   69534 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:48.860882   69534 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:48.860898   69534 pod_ready.go:82] duration metric: took 5.214278ms for pod "kube-controller-manager-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:48.860906   69534 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kqjdq" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:49.241149   69534 pod_ready.go:93] pod "kube-proxy-kqjdq" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:49.241180   69534 pod_ready.go:82] duration metric: took 380.266777ms for pod "kube-proxy-kqjdq" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:49.241192   69534 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:49.642403   69534 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:49.642437   69534 pod_ready.go:82] duration metric: took 401.235952ms for pod "kube-scheduler-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:49.642448   69534 pod_ready.go:39] duration metric: took 9.813073515s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:46:49.642465   69534 api_server.go:52] waiting for apiserver process to appear ...
	I0927 01:46:49.642518   69534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:46:49.658847   69534 api_server.go:72] duration metric: took 10.147811957s to wait for apiserver process to appear ...
	I0927 01:46:49.658877   69534 api_server.go:88] waiting for apiserver healthz status ...
	I0927 01:46:49.658898   69534 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8444/healthz ...
	I0927 01:46:49.665899   69534 api_server.go:279] https://192.168.61.83:8444/healthz returned 200:
	ok
	I0927 01:46:49.666844   69534 api_server.go:141] control plane version: v1.31.1
	I0927 01:46:49.666867   69534 api_server.go:131] duration metric: took 7.982491ms to wait for apiserver health ...
	I0927 01:46:49.666876   69534 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 01:46:49.843377   69534 system_pods.go:59] 9 kube-system pods found
	I0927 01:46:49.843402   69534 system_pods.go:61] "coredns-7c65d6cfc9-4d7pk" [c84ab26c-2e13-437c-b059-43c8ca1d90c8] Running
	I0927 01:46:49.843408   69534 system_pods.go:61] "coredns-7c65d6cfc9-qkbzv" [e2725448-3f80-45d8-8bd8-49dcf8878f7e] Running
	I0927 01:46:49.843413   69534 system_pods.go:61] "etcd-default-k8s-diff-port-368295" [cf24c93c-bcff-4ffc-b7b2-8e69c070cf92] Running
	I0927 01:46:49.843417   69534 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-368295" [7cb4e15c-d20c-4f93-bf12-d2407edcc877] Running
	I0927 01:46:49.843420   69534 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-368295" [52bc69db-f7b9-40a2-9930-1b3bd321fecf] Running
	I0927 01:46:49.843425   69534 system_pods.go:61] "kube-proxy-kqjdq" [91b96945-0ffe-404f-a0d5-f8729d4248ce] Running
	I0927 01:46:49.843429   69534 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-368295" [bc16cdb1-6e5c-4d19-ab43-cd378a65184d] Running
	I0927 01:46:49.843437   69534 system_pods.go:61] "metrics-server-6867b74b74-d85zg" [579ae063-049c-423c-8f91-91fb4b32e4c3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 01:46:49.843443   69534 system_pods.go:61] "storage-provisioner" [aaa7a054-2eee-45ee-a9bc-c305e53e1273] Running
	I0927 01:46:49.843454   69534 system_pods.go:74] duration metric: took 176.572041ms to wait for pod list to return data ...
	I0927 01:46:49.843466   69534 default_sa.go:34] waiting for default service account to be created ...
	I0927 01:46:50.041031   69534 default_sa.go:45] found service account: "default"
	I0927 01:46:50.041053   69534 default_sa.go:55] duration metric: took 197.577565ms for default service account to be created ...
	I0927 01:46:50.041062   69534 system_pods.go:116] waiting for k8s-apps to be running ...
	I0927 01:46:50.243807   69534 system_pods.go:86] 9 kube-system pods found
	I0927 01:46:50.243834   69534 system_pods.go:89] "coredns-7c65d6cfc9-4d7pk" [c84ab26c-2e13-437c-b059-43c8ca1d90c8] Running
	I0927 01:46:50.243840   69534 system_pods.go:89] "coredns-7c65d6cfc9-qkbzv" [e2725448-3f80-45d8-8bd8-49dcf8878f7e] Running
	I0927 01:46:50.243845   69534 system_pods.go:89] "etcd-default-k8s-diff-port-368295" [cf24c93c-bcff-4ffc-b7b2-8e69c070cf92] Running
	I0927 01:46:50.243849   69534 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-368295" [7cb4e15c-d20c-4f93-bf12-d2407edcc877] Running
	I0927 01:46:50.243853   69534 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-368295" [52bc69db-f7b9-40a2-9930-1b3bd321fecf] Running
	I0927 01:46:50.243856   69534 system_pods.go:89] "kube-proxy-kqjdq" [91b96945-0ffe-404f-a0d5-f8729d4248ce] Running
	I0927 01:46:50.243860   69534 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-368295" [bc16cdb1-6e5c-4d19-ab43-cd378a65184d] Running
	I0927 01:46:50.243866   69534 system_pods.go:89] "metrics-server-6867b74b74-d85zg" [579ae063-049c-423c-8f91-91fb4b32e4c3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 01:46:50.243869   69534 system_pods.go:89] "storage-provisioner" [aaa7a054-2eee-45ee-a9bc-c305e53e1273] Running
	I0927 01:46:50.243879   69534 system_pods.go:126] duration metric: took 202.812704ms to wait for k8s-apps to be running ...
	I0927 01:46:50.243888   69534 system_svc.go:44] waiting for kubelet service to be running ....
	I0927 01:46:50.243931   69534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 01:46:50.260175   69534 system_svc.go:56] duration metric: took 16.279433ms WaitForService to wait for kubelet
	I0927 01:46:50.260203   69534 kubeadm.go:582] duration metric: took 10.749173466s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 01:46:50.260220   69534 node_conditions.go:102] verifying NodePressure condition ...
	I0927 01:46:50.441020   69534 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 01:46:50.441044   69534 node_conditions.go:123] node cpu capacity is 2
	I0927 01:46:50.441052   69534 node_conditions.go:105] duration metric: took 180.827321ms to run NodePressure ...
	I0927 01:46:50.441062   69534 start.go:241] waiting for startup goroutines ...
	I0927 01:46:50.441081   69534 start.go:246] waiting for cluster config update ...
	I0927 01:46:50.441091   69534 start.go:255] writing updated cluster config ...
	I0927 01:46:50.441338   69534 ssh_runner.go:195] Run: rm -f paused
	I0927 01:46:50.492229   69534 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0927 01:46:50.494198   69534 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-368295" cluster and "default" namespace by default
	I0927 01:47:22.398756   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:47:22.399035   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:47:22.399051   69333 kubeadm.go:310] 
	I0927 01:47:22.399125   69333 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0927 01:47:22.399167   69333 kubeadm.go:310] 		timed out waiting for the condition
	I0927 01:47:22.399176   69333 kubeadm.go:310] 
	I0927 01:47:22.399242   69333 kubeadm.go:310] 	This error is likely caused by:
	I0927 01:47:22.399326   69333 kubeadm.go:310] 		- The kubelet is not running
	I0927 01:47:22.399452   69333 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0927 01:47:22.399464   69333 kubeadm.go:310] 
	I0927 01:47:22.399627   69333 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0927 01:47:22.399702   69333 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0927 01:47:22.399750   69333 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0927 01:47:22.399763   69333 kubeadm.go:310] 
	I0927 01:47:22.399908   69333 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0927 01:47:22.400001   69333 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0927 01:47:22.400014   69333 kubeadm.go:310] 
	I0927 01:47:22.400109   69333 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0927 01:47:22.400218   69333 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0927 01:47:22.400331   69333 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0927 01:47:22.400406   69333 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0927 01:47:22.400414   69333 kubeadm.go:310] 
	I0927 01:47:22.401157   69333 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0927 01:47:22.401273   69333 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0927 01:47:22.401342   69333 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0927 01:47:22.401458   69333 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
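	The failure above is the v1.20.0 kubeadm init (process 69333) timing out because the kubelet never answered on localhost:10248. A minimal follow-up, using the exact troubleshooting commands kubeadm suggests and run inside the affected VM (reaching the VM via minikube ssh with the matching profile name is an assumption; the profile is not named in this excerpt):
	
	    sudo systemctl status kubelet
	    sudo journalctl -xeu kubelet
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID    # CONTAINERID taken from the ps output above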
	
	I0927 01:47:22.401498   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0927 01:47:22.863316   69333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 01:47:22.878664   69333 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 01:47:22.889118   69333 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 01:47:22.889135   69333 kubeadm.go:157] found existing configuration files:
	
	I0927 01:47:22.889173   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 01:47:22.898966   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 01:47:22.899035   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 01:47:22.911280   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 01:47:22.920628   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 01:47:22.920677   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 01:47:22.929860   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 01:47:22.938794   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 01:47:22.938839   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 01:47:22.947982   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 01:47:22.956785   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 01:47:22.956837   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 01:47:22.966186   69333 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0927 01:47:23.039915   69333 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0927 01:47:23.040017   69333 kubeadm.go:310] [preflight] Running pre-flight checks
	I0927 01:47:23.189097   69333 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0927 01:47:23.189274   69333 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0927 01:47:23.189395   69333 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0927 01:47:23.400731   69333 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0927 01:47:23.402659   69333 out.go:235]   - Generating certificates and keys ...
	I0927 01:47:23.402776   69333 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0927 01:47:23.402855   69333 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0927 01:47:23.402959   69333 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0927 01:47:23.403040   69333 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0927 01:47:23.403162   69333 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0927 01:47:23.403349   69333 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0927 01:47:23.403935   69333 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0927 01:47:23.404260   69333 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0927 01:47:23.404563   69333 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0927 01:47:23.404896   69333 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0927 01:47:23.405050   69333 kubeadm.go:310] [certs] Using the existing "sa" key
	I0927 01:47:23.405121   69333 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0927 01:47:23.466908   69333 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0927 01:47:23.717009   69333 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0927 01:47:23.766225   69333 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0927 01:47:23.961488   69333 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0927 01:47:23.987846   69333 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0927 01:47:23.988724   69333 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0927 01:47:23.988790   69333 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0927 01:47:24.130550   69333 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0927 01:47:24.132276   69333 out.go:235]   - Booting up control plane ...
	I0927 01:47:24.132386   69333 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0927 01:47:24.146415   69333 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0927 01:47:24.147664   69333 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0927 01:47:24.148443   69333 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0927 01:47:24.151623   69333 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0927 01:48:04.153587   69333 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0927 01:48:04.153934   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:48:04.154129   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:48:09.154634   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:48:09.154883   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:48:19.155638   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:48:19.155844   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:48:39.156224   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:48:39.156429   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:49:19.155507   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:49:19.155754   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:49:19.155779   69333 kubeadm.go:310] 
	I0927 01:49:19.155872   69333 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0927 01:49:19.155947   69333 kubeadm.go:310] 		timed out waiting for the condition
	I0927 01:49:19.155958   69333 kubeadm.go:310] 
	I0927 01:49:19.156026   69333 kubeadm.go:310] 	This error is likely caused by:
	I0927 01:49:19.156077   69333 kubeadm.go:310] 		- The kubelet is not running
	I0927 01:49:19.156230   69333 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0927 01:49:19.156242   69333 kubeadm.go:310] 
	I0927 01:49:19.156379   69333 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0927 01:49:19.156434   69333 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0927 01:49:19.156486   69333 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0927 01:49:19.156506   69333 kubeadm.go:310] 
	I0927 01:49:19.156628   69333 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0927 01:49:19.156756   69333 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0927 01:49:19.156775   69333 kubeadm.go:310] 
	I0927 01:49:19.156925   69333 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0927 01:49:19.157022   69333 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0927 01:49:19.157112   69333 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0927 01:49:19.157191   69333 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0927 01:49:19.157202   69333 kubeadm.go:310] 
	I0927 01:49:19.158023   69333 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0927 01:49:19.158149   69333 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0927 01:49:19.158277   69333 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0927 01:49:19.158357   69333 kubeadm.go:394] duration metric: took 7m56.829434682s to StartCluster
	I0927 01:49:19.158404   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:49:19.158477   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:49:19.200705   69333 cri.go:89] found id: ""
	I0927 01:49:19.200729   69333 logs.go:276] 0 containers: []
	W0927 01:49:19.200736   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:49:19.200742   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:49:19.200791   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:49:19.240252   69333 cri.go:89] found id: ""
	I0927 01:49:19.240274   69333 logs.go:276] 0 containers: []
	W0927 01:49:19.240285   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:49:19.240292   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:49:19.240352   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:49:19.275802   69333 cri.go:89] found id: ""
	I0927 01:49:19.275826   69333 logs.go:276] 0 containers: []
	W0927 01:49:19.275834   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:49:19.275840   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:49:19.275894   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:49:19.309317   69333 cri.go:89] found id: ""
	I0927 01:49:19.309342   69333 logs.go:276] 0 containers: []
	W0927 01:49:19.309350   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:49:19.309357   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:49:19.309414   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:49:19.344778   69333 cri.go:89] found id: ""
	I0927 01:49:19.344806   69333 logs.go:276] 0 containers: []
	W0927 01:49:19.344817   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:49:19.344823   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:49:19.344882   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:49:19.379394   69333 cri.go:89] found id: ""
	I0927 01:49:19.379426   69333 logs.go:276] 0 containers: []
	W0927 01:49:19.379438   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:49:19.379445   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:49:19.379502   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:49:19.415349   69333 cri.go:89] found id: ""
	I0927 01:49:19.415376   69333 logs.go:276] 0 containers: []
	W0927 01:49:19.415384   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:49:19.415390   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:49:19.415438   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:49:19.453357   69333 cri.go:89] found id: ""
	I0927 01:49:19.453381   69333 logs.go:276] 0 containers: []
	W0927 01:49:19.453389   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:49:19.453397   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:49:19.453409   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:49:19.530384   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:49:19.530405   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:49:19.530423   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:49:19.643418   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:49:19.643453   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:49:19.688825   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:49:19.688861   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:49:19.745945   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:49:19.745983   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0927 01:49:19.762685   69333 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0927 01:49:19.762739   69333 out.go:270] * 
	W0927 01:49:19.762791   69333 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0927 01:49:19.762804   69333 out.go:270] * 
	W0927 01:49:19.763605   69333 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0927 01:49:19.767393   69333 out.go:201] 
	W0927 01:49:19.768622   69333 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0927 01:49:19.768671   69333 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0927 01:49:19.768690   69333 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0927 01:49:19.771036   69333 out.go:201] 
	
	
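	For reference, the recovery steps that kubeadm and minikube themselves suggest in the output above can be run directly on the node. A minimal sketch using only the commands quoted in the log (the single assumption is substituting the failing container's ID for CONTAINERID, as the messages instruct):
	
		# Check whether the kubelet is running and why it may have exited
		systemctl status kubelet
		journalctl -xeu kubelet
	
		# List control-plane containers under CRI-O and inspect a failing one
		crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
	
		# Suggestion from the minikube output: retry the start with an explicit
		# cgroup driver (other start flags for the profile left unchanged)
		minikube start --extra-config=kubelet.cgroup-driver=systemd
	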
	==> CRI-O <==
	Sep 27 01:55:07 embed-certs-245911 crio[720]: time="2024-09-27 01:55:07.561909860Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402107561860657,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f567aa1f-b9aa-4f36-898e-1751976287b4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 01:55:07 embed-certs-245911 crio[720]: time="2024-09-27 01:55:07.562909772Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aec5309b-fe79-4b60-99ba-06e3c991e93f name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:55:07 embed-certs-245911 crio[720]: time="2024-09-27 01:55:07.562981936Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aec5309b-fe79-4b60-99ba-06e3c991e93f name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:55:07 embed-certs-245911 crio[720]: time="2024-09-27 01:55:07.563248189Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ef3e7f4404a3bb5acd2a338f08e2d0ed91b3e70f1c11bfe6552bce8d73f93484,PodSandboxId:c7c1cbc5465bde39a0e13976fff50b102221c9e421bc2a7d170b15ceb86d5a24,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727401555904498416,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c48d125-370c-44a1-9ede-536881b40d57,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:448ea5668bbfde4c622ff366c9a4d879ed6fd522a860d8af0b9a0b81d0684ad7,PodSandboxId:e011a1aa370b8ae5fc35367eb3d4d947d070a20bb903edda1609fa74e0eb4c3d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727401555171115945,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-t4mxw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3f9faa4-be80-40bf-9080-363fcbf3f084,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86de3893b7ac9b71b09711ecfeea13b6d675dca6289b919969de5863d2baaa81,PodSandboxId:7729e650d0540b2d4b96d124755e986a31f237f764377fdcb746d60d4e8a7044,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727401555000829817,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zp5f2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
829b4a4-1686-4f22-8368-65e3897604b0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb008875dc5bc9086f76dae6a7603b058093c268af6fcf781aa93354d58a1164,PodSandboxId:f1895faff7bcd580df61623c23992b57646fef48d3f42d6903a7b92cec910e3a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1727401553830730903,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5l299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 768ae3f5-2ebd-4db7-aa36-81c4f033d685,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b98cc8aef9ef9d1c5f966eaf9e96f7ac4dc44aeec30da310fc896f89109af031,PodSandboxId:516a881b4869ebd8057279ab3fa16696c248c545290ab82b3a17ac04ff25b036,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727401543255398383,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-245911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd6a56e9e78ce33082942c2c1324708a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eac68ab94f64bf4f7a61126f5c8ce7bc91c26bbe5cafd0b4a840af679634ef90,PodSandboxId:c4155021ba88d90e786ef042b2b1a165c27679f97053e165a756962af193e463,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727401543262017623,Labels:map[string]string{io.kubernetes.container.n
ame: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-245911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 796dc376a570d5cfc3042ada17f81999,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a167cf0b875d40ea961bba6c611a013cee45acf40487664e093f1547e58157c8,PodSandboxId:4b00ff761c9ab853fe8d78a46d260f175c66a3d3762a0862958bd74a86c99336,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727401543253941935,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-245911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1e3ed7727ff9ba05d6eacb60c9f5ea6,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd11dd4f21927105875d289cc53f834657d3675d809f31b5575db66681be1a7b,PodSandboxId:c89eb9ddb497f61bc8fc4315545b5c1409a54a7f104f0b3533a7e449f34f4bc0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727401543226505209,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-245911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566665b3d67646253c5c4233f0432cee,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:616b6473dde776c7a9297e486ba7905ee3e75b966feb8ba98ca7279d8d74b53d,PodSandboxId:37281392c0f0c6b827059cc365c4a21d5287ae69c95a895ecf4e043d61e23dc4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727401261835523688,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-245911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 796dc376a570d5cfc3042ada17f81999,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=aec5309b-fe79-4b60-99ba-06e3c991e93f name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:55:07 embed-certs-245911 crio[720]: time="2024-09-27 01:55:07.613718100Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ac99918a-16e6-4117-9971-ed5359d74163 name=/runtime.v1.RuntimeService/Version
	Sep 27 01:55:07 embed-certs-245911 crio[720]: time="2024-09-27 01:55:07.614171680Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ac99918a-16e6-4117-9971-ed5359d74163 name=/runtime.v1.RuntimeService/Version
	Sep 27 01:55:07 embed-certs-245911 crio[720]: time="2024-09-27 01:55:07.615608175Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=280bbf24-6074-463b-88bd-9b93be792f25 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 01:55:07 embed-certs-245911 crio[720]: time="2024-09-27 01:55:07.616146113Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402107616117998,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=280bbf24-6074-463b-88bd-9b93be792f25 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 01:55:07 embed-certs-245911 crio[720]: time="2024-09-27 01:55:07.616880644Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=44e89fc7-f327-4aae-b55e-fc0fcba5fbf3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:55:07 embed-certs-245911 crio[720]: time="2024-09-27 01:55:07.616954662Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=44e89fc7-f327-4aae-b55e-fc0fcba5fbf3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:55:07 embed-certs-245911 crio[720]: time="2024-09-27 01:55:07.617208770Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ef3e7f4404a3bb5acd2a338f08e2d0ed91b3e70f1c11bfe6552bce8d73f93484,PodSandboxId:c7c1cbc5465bde39a0e13976fff50b102221c9e421bc2a7d170b15ceb86d5a24,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727401555904498416,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c48d125-370c-44a1-9ede-536881b40d57,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:448ea5668bbfde4c622ff366c9a4d879ed6fd522a860d8af0b9a0b81d0684ad7,PodSandboxId:e011a1aa370b8ae5fc35367eb3d4d947d070a20bb903edda1609fa74e0eb4c3d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727401555171115945,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-t4mxw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3f9faa4-be80-40bf-9080-363fcbf3f084,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86de3893b7ac9b71b09711ecfeea13b6d675dca6289b919969de5863d2baaa81,PodSandboxId:7729e650d0540b2d4b96d124755e986a31f237f764377fdcb746d60d4e8a7044,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727401555000829817,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zp5f2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
829b4a4-1686-4f22-8368-65e3897604b0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb008875dc5bc9086f76dae6a7603b058093c268af6fcf781aa93354d58a1164,PodSandboxId:f1895faff7bcd580df61623c23992b57646fef48d3f42d6903a7b92cec910e3a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1727401553830730903,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5l299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 768ae3f5-2ebd-4db7-aa36-81c4f033d685,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b98cc8aef9ef9d1c5f966eaf9e96f7ac4dc44aeec30da310fc896f89109af031,PodSandboxId:516a881b4869ebd8057279ab3fa16696c248c545290ab82b3a17ac04ff25b036,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727401543255398383,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-245911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd6a56e9e78ce33082942c2c1324708a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eac68ab94f64bf4f7a61126f5c8ce7bc91c26bbe5cafd0b4a840af679634ef90,PodSandboxId:c4155021ba88d90e786ef042b2b1a165c27679f97053e165a756962af193e463,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727401543262017623,Labels:map[string]string{io.kubernetes.container.n
ame: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-245911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 796dc376a570d5cfc3042ada17f81999,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a167cf0b875d40ea961bba6c611a013cee45acf40487664e093f1547e58157c8,PodSandboxId:4b00ff761c9ab853fe8d78a46d260f175c66a3d3762a0862958bd74a86c99336,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727401543253941935,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-245911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1e3ed7727ff9ba05d6eacb60c9f5ea6,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd11dd4f21927105875d289cc53f834657d3675d809f31b5575db66681be1a7b,PodSandboxId:c89eb9ddb497f61bc8fc4315545b5c1409a54a7f104f0b3533a7e449f34f4bc0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727401543226505209,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-245911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566665b3d67646253c5c4233f0432cee,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:616b6473dde776c7a9297e486ba7905ee3e75b966feb8ba98ca7279d8d74b53d,PodSandboxId:37281392c0f0c6b827059cc365c4a21d5287ae69c95a895ecf4e043d61e23dc4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727401261835523688,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-245911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 796dc376a570d5cfc3042ada17f81999,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=44e89fc7-f327-4aae-b55e-fc0fcba5fbf3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:55:07 embed-certs-245911 crio[720]: time="2024-09-27 01:55:07.656921752Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=11ae42a5-79cf-4f9c-b98f-bd886d0853b6 name=/runtime.v1.RuntimeService/Version
	Sep 27 01:55:07 embed-certs-245911 crio[720]: time="2024-09-27 01:55:07.656994203Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=11ae42a5-79cf-4f9c-b98f-bd886d0853b6 name=/runtime.v1.RuntimeService/Version
	Sep 27 01:55:07 embed-certs-245911 crio[720]: time="2024-09-27 01:55:07.657981612Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3e5027cc-c888-49a2-9560-c48efb1bffe3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 01:55:07 embed-certs-245911 crio[720]: time="2024-09-27 01:55:07.658464360Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402107658440980,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3e5027cc-c888-49a2-9560-c48efb1bffe3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 01:55:07 embed-certs-245911 crio[720]: time="2024-09-27 01:55:07.659437654Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0e56a626-c4fb-452d-a067-ff49b55f54c6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:55:07 embed-certs-245911 crio[720]: time="2024-09-27 01:55:07.659547362Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0e56a626-c4fb-452d-a067-ff49b55f54c6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:55:07 embed-certs-245911 crio[720]: time="2024-09-27 01:55:07.660420929Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ef3e7f4404a3bb5acd2a338f08e2d0ed91b3e70f1c11bfe6552bce8d73f93484,PodSandboxId:c7c1cbc5465bde39a0e13976fff50b102221c9e421bc2a7d170b15ceb86d5a24,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727401555904498416,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c48d125-370c-44a1-9ede-536881b40d57,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:448ea5668bbfde4c622ff366c9a4d879ed6fd522a860d8af0b9a0b81d0684ad7,PodSandboxId:e011a1aa370b8ae5fc35367eb3d4d947d070a20bb903edda1609fa74e0eb4c3d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727401555171115945,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-t4mxw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3f9faa4-be80-40bf-9080-363fcbf3f084,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86de3893b7ac9b71b09711ecfeea13b6d675dca6289b919969de5863d2baaa81,PodSandboxId:7729e650d0540b2d4b96d124755e986a31f237f764377fdcb746d60d4e8a7044,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727401555000829817,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zp5f2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
829b4a4-1686-4f22-8368-65e3897604b0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb008875dc5bc9086f76dae6a7603b058093c268af6fcf781aa93354d58a1164,PodSandboxId:f1895faff7bcd580df61623c23992b57646fef48d3f42d6903a7b92cec910e3a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1727401553830730903,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5l299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 768ae3f5-2ebd-4db7-aa36-81c4f033d685,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b98cc8aef9ef9d1c5f966eaf9e96f7ac4dc44aeec30da310fc896f89109af031,PodSandboxId:516a881b4869ebd8057279ab3fa16696c248c545290ab82b3a17ac04ff25b036,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727401543255398383,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-245911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd6a56e9e78ce33082942c2c1324708a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eac68ab94f64bf4f7a61126f5c8ce7bc91c26bbe5cafd0b4a840af679634ef90,PodSandboxId:c4155021ba88d90e786ef042b2b1a165c27679f97053e165a756962af193e463,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727401543262017623,Labels:map[string]string{io.kubernetes.container.n
ame: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-245911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 796dc376a570d5cfc3042ada17f81999,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a167cf0b875d40ea961bba6c611a013cee45acf40487664e093f1547e58157c8,PodSandboxId:4b00ff761c9ab853fe8d78a46d260f175c66a3d3762a0862958bd74a86c99336,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727401543253941935,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-245911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1e3ed7727ff9ba05d6eacb60c9f5ea6,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd11dd4f21927105875d289cc53f834657d3675d809f31b5575db66681be1a7b,PodSandboxId:c89eb9ddb497f61bc8fc4315545b5c1409a54a7f104f0b3533a7e449f34f4bc0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727401543226505209,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-245911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566665b3d67646253c5c4233f0432cee,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:616b6473dde776c7a9297e486ba7905ee3e75b966feb8ba98ca7279d8d74b53d,PodSandboxId:37281392c0f0c6b827059cc365c4a21d5287ae69c95a895ecf4e043d61e23dc4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727401261835523688,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-245911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 796dc376a570d5cfc3042ada17f81999,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0e56a626-c4fb-452d-a067-ff49b55f54c6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:55:07 embed-certs-245911 crio[720]: time="2024-09-27 01:55:07.694599325Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4204bc06-850b-4d62-9d38-68fa68146164 name=/runtime.v1.RuntimeService/Version
	Sep 27 01:55:07 embed-certs-245911 crio[720]: time="2024-09-27 01:55:07.694668797Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4204bc06-850b-4d62-9d38-68fa68146164 name=/runtime.v1.RuntimeService/Version
	Sep 27 01:55:07 embed-certs-245911 crio[720]: time="2024-09-27 01:55:07.696149587Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=991ef0e9-3157-40b9-b33e-90d7fb205942 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 01:55:07 embed-certs-245911 crio[720]: time="2024-09-27 01:55:07.696612215Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402107696588883,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=991ef0e9-3157-40b9-b33e-90d7fb205942 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 01:55:07 embed-certs-245911 crio[720]: time="2024-09-27 01:55:07.697084336Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=83b3d7cc-4633-4053-b27e-7984687d4801 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:55:07 embed-certs-245911 crio[720]: time="2024-09-27 01:55:07.697136938Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=83b3d7cc-4633-4053-b27e-7984687d4801 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:55:07 embed-certs-245911 crio[720]: time="2024-09-27 01:55:07.697447432Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ef3e7f4404a3bb5acd2a338f08e2d0ed91b3e70f1c11bfe6552bce8d73f93484,PodSandboxId:c7c1cbc5465bde39a0e13976fff50b102221c9e421bc2a7d170b15ceb86d5a24,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727401555904498416,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c48d125-370c-44a1-9ede-536881b40d57,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:448ea5668bbfde4c622ff366c9a4d879ed6fd522a860d8af0b9a0b81d0684ad7,PodSandboxId:e011a1aa370b8ae5fc35367eb3d4d947d070a20bb903edda1609fa74e0eb4c3d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727401555171115945,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-t4mxw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3f9faa4-be80-40bf-9080-363fcbf3f084,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86de3893b7ac9b71b09711ecfeea13b6d675dca6289b919969de5863d2baaa81,PodSandboxId:7729e650d0540b2d4b96d124755e986a31f237f764377fdcb746d60d4e8a7044,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727401555000829817,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zp5f2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
829b4a4-1686-4f22-8368-65e3897604b0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb008875dc5bc9086f76dae6a7603b058093c268af6fcf781aa93354d58a1164,PodSandboxId:f1895faff7bcd580df61623c23992b57646fef48d3f42d6903a7b92cec910e3a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1727401553830730903,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5l299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 768ae3f5-2ebd-4db7-aa36-81c4f033d685,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b98cc8aef9ef9d1c5f966eaf9e96f7ac4dc44aeec30da310fc896f89109af031,PodSandboxId:516a881b4869ebd8057279ab3fa16696c248c545290ab82b3a17ac04ff25b036,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727401543255398383,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-245911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd6a56e9e78ce33082942c2c1324708a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eac68ab94f64bf4f7a61126f5c8ce7bc91c26bbe5cafd0b4a840af679634ef90,PodSandboxId:c4155021ba88d90e786ef042b2b1a165c27679f97053e165a756962af193e463,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727401543262017623,Labels:map[string]string{io.kubernetes.container.n
ame: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-245911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 796dc376a570d5cfc3042ada17f81999,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a167cf0b875d40ea961bba6c611a013cee45acf40487664e093f1547e58157c8,PodSandboxId:4b00ff761c9ab853fe8d78a46d260f175c66a3d3762a0862958bd74a86c99336,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727401543253941935,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-245911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1e3ed7727ff9ba05d6eacb60c9f5ea6,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd11dd4f21927105875d289cc53f834657d3675d809f31b5575db66681be1a7b,PodSandboxId:c89eb9ddb497f61bc8fc4315545b5c1409a54a7f104f0b3533a7e449f34f4bc0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727401543226505209,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-245911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566665b3d67646253c5c4233f0432cee,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:616b6473dde776c7a9297e486ba7905ee3e75b966feb8ba98ca7279d8d74b53d,PodSandboxId:37281392c0f0c6b827059cc365c4a21d5287ae69c95a895ecf4e043d61e23dc4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727401261835523688,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-245911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 796dc376a570d5cfc3042ada17f81999,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=83b3d7cc-4633-4053-b27e-7984687d4801 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ef3e7f4404a3b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   c7c1cbc5465bd       storage-provisioner
	448ea5668bbfd       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   e011a1aa370b8       coredns-7c65d6cfc9-t4mxw
	86de3893b7ac9       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   7729e650d0540       coredns-7c65d6cfc9-zp5f2
	fb008875dc5bc       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   9 minutes ago       Running             kube-proxy                0                   f1895faff7bcd       kube-proxy-5l299
	eac68ab94f64b       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   9 minutes ago       Running             kube-apiserver            2                   c4155021ba88d       kube-apiserver-embed-certs-245911
	b98cc8aef9ef9       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   516a881b4869e       etcd-embed-certs-245911
	a167cf0b875d4       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   9 minutes ago       Running             kube-controller-manager   2                   4b00ff761c9ab       kube-controller-manager-embed-certs-245911
	fd11dd4f21927       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   9 minutes ago       Running             kube-scheduler            2                   c89eb9ddb497f       kube-scheduler-embed-certs-245911
	616b6473dde77       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   14 minutes ago      Exited              kube-apiserver            1                   37281392c0f0c       kube-apiserver-embed-certs-245911
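
The container status table above is the CRI-level view of the same containers that appear in the ListContainers responses earlier in this log. For anyone re-checking a run like this by hand, a minimal sketch (assuming SSH access to the embed-certs-245911 guest and that crictl is available there, as it normally is on minikube's CRI-O images):

  # list all containers, including exited ones, matching the table above
  minikube -p embed-certs-245911 ssh -- sudo crictl ps -a
  # fetch the logs of one container by (abbreviated) ID
  minikube -p embed-certs-245911 ssh -- sudo crictl logs 616b6473dde77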
	
	
	==> coredns [448ea5668bbfde4c622ff366c9a4d879ed6fd522a860d8af0b9a0b81d0684ad7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [86de3893b7ac9b71b09711ecfeea13b6d675dca6289b919969de5863d2baaa81] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-245911
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-245911
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=embed-certs-245911
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_27T01_45_49_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 01:45:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-245911
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 01:54:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 01:51:05 +0000   Fri, 27 Sep 2024 01:45:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 01:51:05 +0000   Fri, 27 Sep 2024 01:45:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 01:51:05 +0000   Fri, 27 Sep 2024 01:45:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 01:51:05 +0000   Fri, 27 Sep 2024 01:45:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.158
	  Hostname:    embed-certs-245911
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7110e728e2604f3689de21f5a2c2cd24
	  System UUID:                7110e728-e260-4f36-89de-21f5a2c2cd24
	  Boot ID:                    f8d88b27-0ecd-4578-9907-8f602caafdb0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-t4mxw                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m15s
	  kube-system                 coredns-7c65d6cfc9-zp5f2                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m15s
	  kube-system                 etcd-embed-certs-245911                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m21s
	  kube-system                 kube-apiserver-embed-certs-245911             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-controller-manager-embed-certs-245911    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-proxy-5l299                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 kube-scheduler-embed-certs-245911             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 metrics-server-6867b74b74-k28wz               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m13s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m13s  kube-proxy       
	  Normal  Starting                 9m20s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m19s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m19s  kubelet          Node embed-certs-245911 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m19s  kubelet          Node embed-certs-245911 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m19s  kubelet          Node embed-certs-245911 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m16s  node-controller  Node embed-certs-245911 event: Registered Node embed-certs-245911 in Controller
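
This is the control-plane node for the profile; note that metrics-server-6867b74b74-k28wz is scheduled here while, per the apiserver log further down, its aggregated API stays unreachable throughout these logs. A quick way to reproduce this view and see the effect on the Metrics API (a sketch, assuming the kubeconfig context created for the profile is named embed-certs-245911):

  kubectl --context embed-certs-245911 describe node embed-certs-245911
  # expected to fail while v1beta1.metrics.k8s.io is unavailable
  kubectl --context embed-certs-245911 top node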
	
	
	==> dmesg <==
	[  +0.040104] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.779433] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.425135] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.575703] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.513015] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.062208] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065732] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +0.182922] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +0.135628] systemd-fstab-generator[681]: Ignoring "noauto" option for root device
	[  +0.297034] systemd-fstab-generator[710]: Ignoring "noauto" option for root device
	[  +4.108190] systemd-fstab-generator[801]: Ignoring "noauto" option for root device
	[  +1.836552] systemd-fstab-generator[924]: Ignoring "noauto" option for root device
	[  +0.059844] kauditd_printk_skb: 158 callbacks suppressed
	[Sep27 01:41] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.112314] kauditd_printk_skb: 50 callbacks suppressed
	[  +6.141705] kauditd_printk_skb: 30 callbacks suppressed
	[Sep27 01:45] kauditd_printk_skb: 7 callbacks suppressed
	[  +0.864571] systemd-fstab-generator[2561]: Ignoring "noauto" option for root device
	[  +4.546286] kauditd_printk_skb: 54 callbacks suppressed
	[  +2.022854] systemd-fstab-generator[2884]: Ignoring "noauto" option for root device
	[  +5.109759] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.416966] systemd-fstab-generator[3069]: Ignoring "noauto" option for root device
	[Sep27 01:46] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [b98cc8aef9ef9d1c5f966eaf9e96f7ac4dc44aeec30da310fc896f89109af031] <==
	{"level":"info","ts":"2024-09-27T01:45:43.734932Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"632f2ed81879f448","local-member-id":"c2e3bdcd19c3f485","added-peer-id":"c2e3bdcd19c3f485","added-peer-peer-urls":["https://192.168.39.158:2380"]}
	{"level":"info","ts":"2024-09-27T01:45:43.734364Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.158:2380"}
	{"level":"info","ts":"2024-09-27T01:45:43.737389Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.158:2380"}
	{"level":"info","ts":"2024-09-27T01:45:43.735503Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"c2e3bdcd19c3f485","initial-advertise-peer-urls":["https://192.168.39.158:2380"],"listen-peer-urls":["https://192.168.39.158:2380"],"advertise-client-urls":["https://192.168.39.158:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.158:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-27T01:45:43.735520Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-27T01:45:43.971410Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2e3bdcd19c3f485 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-27T01:45:43.971514Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2e3bdcd19c3f485 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-27T01:45:43.971564Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2e3bdcd19c3f485 received MsgPreVoteResp from c2e3bdcd19c3f485 at term 1"}
	{"level":"info","ts":"2024-09-27T01:45:43.971600Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2e3bdcd19c3f485 became candidate at term 2"}
	{"level":"info","ts":"2024-09-27T01:45:43.971624Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2e3bdcd19c3f485 received MsgVoteResp from c2e3bdcd19c3f485 at term 2"}
	{"level":"info","ts":"2024-09-27T01:45:43.971651Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2e3bdcd19c3f485 became leader at term 2"}
	{"level":"info","ts":"2024-09-27T01:45:43.971676Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c2e3bdcd19c3f485 elected leader c2e3bdcd19c3f485 at term 2"}
	{"level":"info","ts":"2024-09-27T01:45:43.975528Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"c2e3bdcd19c3f485","local-member-attributes":"{Name:embed-certs-245911 ClientURLs:[https://192.168.39.158:2379]}","request-path":"/0/members/c2e3bdcd19c3f485/attributes","cluster-id":"632f2ed81879f448","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-27T01:45:43.975703Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-27T01:45:43.976053Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T01:45:43.977398Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-27T01:45:43.978113Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-27T01:45:43.982225Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-27T01:45:43.982799Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-27T01:45:43.990258Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.158:2379"}
	{"level":"info","ts":"2024-09-27T01:45:43.978569Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-27T01:45:43.990411Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-27T01:45:44.005506Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"632f2ed81879f448","local-member-id":"c2e3bdcd19c3f485","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T01:45:44.005623Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T01:45:44.005711Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 01:55:08 up 14 min,  0 users,  load average: 0.43, 0.23, 0.12
	Linux embed-certs-245911 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [616b6473dde776c7a9297e486ba7905ee3e75b966feb8ba98ca7279d8d74b53d] <==
	W0927 01:45:37.967836       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:45:37.967836       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:45:38.010150       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:45:38.011671       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:45:38.012956       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:45:38.137677       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:45:38.165869       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:45:38.175641       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:45:38.218837       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:45:38.220108       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:45:38.221671       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:45:38.256106       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:45:38.302213       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:45:38.364769       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:45:40.414078       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:45:40.609656       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:45:40.658270       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:45:40.725974       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:45:40.788893       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:45:40.841199       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:45:40.983276       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:45:41.029570       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:45:41.047466       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:45:41.128567       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:45:41.164496       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [eac68ab94f64bf4f7a61126f5c8ce7bc91c26bbe5cafd0b4a840af679634ef90] <==
	W0927 01:50:47.098634       1 handler_proxy.go:99] no RequestInfo found in the context
	E0927 01:50:47.098747       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0927 01:50:47.099648       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0927 01:50:47.100748       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0927 01:51:47.100705       1 handler_proxy.go:99] no RequestInfo found in the context
	E0927 01:51:47.100775       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0927 01:51:47.100822       1 handler_proxy.go:99] no RequestInfo found in the context
	E0927 01:51:47.100914       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0927 01:51:47.102112       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0927 01:51:47.102172       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0927 01:53:47.102767       1 handler_proxy.go:99] no RequestInfo found in the context
	W0927 01:53:47.102809       1 handler_proxy.go:99] no RequestInfo found in the context
	E0927 01:53:47.103138       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0927 01:53:47.103240       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0927 01:53:47.104426       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0927 01:53:47.104481       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
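
The repeated 503s above mean the aggregation layer cannot fetch the OpenAPI spec from the metrics-server backing service, so clients of the Metrics API (for example kubectl top) fail during this window. A minimal check of the aggregated API's registration status (a sketch; same assumption about the context name as above):

  # Available is typically False with a discovery-check failure while this persists
  kubectl --context embed-certs-245911 get apiservice v1beta1.metrics.k8s.io
  kubectl --context embed-certs-245911 -n kube-system get pods -o wide | grep metrics-server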
	
	
	==> kube-controller-manager [a167cf0b875d40ea961bba6c611a013cee45acf40487664e093f1547e58157c8] <==
	E0927 01:49:53.083429       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 01:49:53.521823       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0927 01:50:23.089687       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 01:50:23.529785       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0927 01:50:53.097840       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 01:50:53.537857       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0927 01:51:05.499705       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-245911"
	E0927 01:51:23.106672       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 01:51:23.546823       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0927 01:51:45.976599       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="316.663µs"
	E0927 01:51:53.114175       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 01:51:53.555092       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0927 01:51:58.977825       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="139.548µs"
	E0927 01:52:23.120918       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 01:52:23.563589       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0927 01:52:53.128739       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 01:52:53.574546       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0927 01:53:23.134965       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 01:53:23.582260       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0927 01:53:53.141903       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 01:53:53.591022       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0927 01:54:23.149756       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 01:54:23.599383       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0927 01:54:53.155946       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 01:54:53.606942       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [fb008875dc5bc9086f76dae6a7603b058093c268af6fcf781aa93354d58a1164] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0927 01:45:54.310256       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0927 01:45:54.325397       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.158"]
	E0927 01:45:54.325689       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0927 01:45:54.435731       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0927 01:45:54.435769       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0927 01:45:54.435792       1 server_linux.go:169] "Using iptables Proxier"
	I0927 01:45:54.439450       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0927 01:45:54.439811       1 server.go:483] "Version info" version="v1.31.1"
	I0927 01:45:54.439844       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 01:45:54.443057       1 config.go:199] "Starting service config controller"
	I0927 01:45:54.443114       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0927 01:45:54.443151       1 config.go:105] "Starting endpoint slice config controller"
	I0927 01:45:54.443155       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0927 01:45:54.444729       1 config.go:328] "Starting node config controller"
	I0927 01:45:54.444764       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0927 01:45:54.543579       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0927 01:45:54.543667       1 shared_informer.go:320] Caches are synced for service config
	I0927 01:45:54.545410       1 shared_informer.go:320] Caches are synced for node config
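
The "Operation not supported" nftables errors at the top of this section are kube-proxy's cleanup pass failing, most likely because the guest kernel lacks nf_tables support; it then falls back to the iptables Proxier, as the lines above show. To confirm which rules were actually programmed, one might inspect the nat table on the VM (a sketch; assumes iptables-save is present in the minikube guest image):

  minikube -p embed-certs-245911 ssh -- sudo iptables-save -t nat | grep -c KUBE-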
	
	
	==> kube-scheduler [fd11dd4f21927105875d289cc53f834657d3675d809f31b5575db66681be1a7b] <==
	W0927 01:45:47.077962       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0927 01:45:47.078049       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 01:45:47.081310       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0927 01:45:47.081386       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 01:45:47.108256       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0927 01:45:47.108306       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 01:45:47.127942       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0927 01:45:47.128100       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 01:45:47.145761       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0927 01:45:47.146317       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 01:45:47.166809       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0927 01:45:47.166905       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0927 01:45:47.222598       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0927 01:45:47.222691       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0927 01:45:47.342037       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0927 01:45:47.342301       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 01:45:47.410640       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0927 01:45:47.411116       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 01:45:47.438059       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0927 01:45:47.438125       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 01:45:47.446396       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0927 01:45:47.446450       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 01:45:47.533232       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0927 01:45:47.533688       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0927 01:45:50.220173       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 27 01:53:49 embed-certs-245911 kubelet[2891]: E0927 01:53:49.145374    2891 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402029145059288,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:53:59 embed-certs-245911 kubelet[2891]: E0927 01:53:59.147235    2891 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402039146492878,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:53:59 embed-certs-245911 kubelet[2891]: E0927 01:53:59.147404    2891 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402039146492878,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:54:02 embed-certs-245911 kubelet[2891]: E0927 01:54:02.961785    2891 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-k28wz" podUID="1d369542-c088-4099-aa6f-9d3158f78f25"
	Sep 27 01:54:09 embed-certs-245911 kubelet[2891]: E0927 01:54:09.148919    2891 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402049148525912,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:54:09 embed-certs-245911 kubelet[2891]: E0927 01:54:09.148979    2891 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402049148525912,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:54:17 embed-certs-245911 kubelet[2891]: E0927 01:54:17.960795    2891 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-k28wz" podUID="1d369542-c088-4099-aa6f-9d3158f78f25"
	Sep 27 01:54:19 embed-certs-245911 kubelet[2891]: E0927 01:54:19.151198    2891 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402059149982050,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:54:19 embed-certs-245911 kubelet[2891]: E0927 01:54:19.151229    2891 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402059149982050,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:54:29 embed-certs-245911 kubelet[2891]: E0927 01:54:29.152545    2891 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402069152218313,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:54:29 embed-certs-245911 kubelet[2891]: E0927 01:54:29.152578    2891 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402069152218313,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:54:29 embed-certs-245911 kubelet[2891]: E0927 01:54:29.960984    2891 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-k28wz" podUID="1d369542-c088-4099-aa6f-9d3158f78f25"
	Sep 27 01:54:39 embed-certs-245911 kubelet[2891]: E0927 01:54:39.155082    2891 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402079154286043,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:54:39 embed-certs-245911 kubelet[2891]: E0927 01:54:39.155449    2891 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402079154286043,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:54:44 embed-certs-245911 kubelet[2891]: E0927 01:54:44.961687    2891 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-k28wz" podUID="1d369542-c088-4099-aa6f-9d3158f78f25"
	Sep 27 01:54:48 embed-certs-245911 kubelet[2891]: E0927 01:54:48.986313    2891 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 27 01:54:48 embed-certs-245911 kubelet[2891]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 27 01:54:48 embed-certs-245911 kubelet[2891]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 27 01:54:48 embed-certs-245911 kubelet[2891]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 27 01:54:48 embed-certs-245911 kubelet[2891]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 27 01:54:49 embed-certs-245911 kubelet[2891]: E0927 01:54:49.158111    2891 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402089157476131,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:54:49 embed-certs-245911 kubelet[2891]: E0927 01:54:49.158146    2891 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402089157476131,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:54:56 embed-certs-245911 kubelet[2891]: E0927 01:54:56.961536    2891 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-k28wz" podUID="1d369542-c088-4099-aa6f-9d3158f78f25"
	Sep 27 01:54:59 embed-certs-245911 kubelet[2891]: E0927 01:54:59.159887    2891 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402099159089273,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:54:59 embed-certs-245911 kubelet[2891]: E0927 01:54:59.160679    2891 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402099159089273,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [ef3e7f4404a3bb5acd2a338f08e2d0ed91b3e70f1c11bfe6552bce8d73f93484] <==
	I0927 01:45:56.003382       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0927 01:45:56.016609       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0927 01:45:56.016684       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0927 01:45:56.091633       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0927 01:45:56.093756       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-245911_c80d10c5-20f6-40f5-bd39-048655b6a15e!
	I0927 01:45:56.103058       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"30015e7b-faab-4daf-b5dd-99a7fbb5b2f6", APIVersion:"v1", ResourceVersion:"390", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-245911_c80d10c5-20f6-40f5-bd39-048655b6a15e became leader
	I0927 01:45:56.201836       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-245911_c80d10c5-20f6-40f5-bd39-048655b6a15e!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-245911 -n embed-certs-245911
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-245911 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-k28wz
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-245911 describe pod metrics-server-6867b74b74-k28wz
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-245911 describe pod metrics-server-6867b74b74-k28wz: exit status 1 (66.697626ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-k28wz" not found

** /stderr **
helpers_test.go:279: kubectl --context embed-certs-245911 describe pod metrics-server-6867b74b74-k28wz: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.34s)
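Note: the UserAppExistsAfterStop failures in this report (this block and the no-preload block below) all time out after waiting 9m0s for a pod labeled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace. As a rough manual approximation of that check, not the harness's actual code path, one could run the following against the profile named in the logs above (label, namespace, and timeout mirror the test output):

	kubectl --context embed-certs-245911 -n kubernetes-dashboard wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m

A non-zero exit from that command reproduces the same symptom the test reports: no dashboard pod becomes Ready after the stop/start cycle.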

x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.38s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-521072 -n no-preload-521072
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-09-27 01:55:45.426294001 +0000 UTC m=+6061.611902256
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-521072 -n no-preload-521072
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-521072 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-521072 logs -n 25: (2.227773366s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p NoKubernetes-719096 sudo                            | NoKubernetes-719096          | jenkins | v1.34.0 | 27 Sep 24 01:32 UTC |                     |
	|         | systemctl is-active --quiet                            |                              |         |         |                     |                     |
	|         | service kubelet                                        |                              |         |         |                     |                     |
	| stop    | -p NoKubernetes-719096                                 | NoKubernetes-719096          | jenkins | v1.34.0 | 27 Sep 24 01:32 UTC | 27 Sep 24 01:32 UTC |
	| start   | -p NoKubernetes-719096                                 | NoKubernetes-719096          | jenkins | v1.34.0 | 27 Sep 24 01:32 UTC | 27 Sep 24 01:33 UTC |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| ssh     | -p NoKubernetes-719096 sudo                            | NoKubernetes-719096          | jenkins | v1.34.0 | 27 Sep 24 01:33 UTC |                     |
	|         | systemctl is-active --quiet                            |                              |         |         |                     |                     |
	|         | service kubelet                                        |                              |         |         |                     |                     |
	| delete  | -p NoKubernetes-719096                                 | NoKubernetes-719096          | jenkins | v1.34.0 | 27 Sep 24 01:33 UTC | 27 Sep 24 01:33 UTC |
	| start   | -p embed-certs-245911                                  | embed-certs-245911           | jenkins | v1.34.0 | 27 Sep 24 01:33 UTC | 27 Sep 24 01:34 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-521072             | no-preload-521072            | jenkins | v1.34.0 | 27 Sep 24 01:33 UTC | 27 Sep 24 01:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-521072                                   | no-preload-521072            | jenkins | v1.34.0 | 27 Sep 24 01:33 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-595331                              | cert-expiration-595331       | jenkins | v1.34.0 | 27 Sep 24 01:33 UTC | 27 Sep 24 01:33 UTC |
	| delete  | -p                                                     | disable-driver-mounts-630210 | jenkins | v1.34.0 | 27 Sep 24 01:33 UTC | 27 Sep 24 01:33 UTC |
	|         | disable-driver-mounts-630210                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-368295 | jenkins | v1.34.0 | 27 Sep 24 01:33 UTC | 27 Sep 24 01:35 UTC |
	|         | default-k8s-diff-port-368295                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-245911            | embed-certs-245911           | jenkins | v1.34.0 | 27 Sep 24 01:34 UTC | 27 Sep 24 01:34 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-245911                                  | embed-certs-245911           | jenkins | v1.34.0 | 27 Sep 24 01:34 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-368295  | default-k8s-diff-port-368295 | jenkins | v1.34.0 | 27 Sep 24 01:35 UTC | 27 Sep 24 01:35 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-368295 | jenkins | v1.34.0 | 27 Sep 24 01:35 UTC |                     |
	|         | default-k8s-diff-port-368295                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-521072                  | no-preload-521072            | jenkins | v1.34.0 | 27 Sep 24 01:35 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-612261        | old-k8s-version-612261       | jenkins | v1.34.0 | 27 Sep 24 01:35 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p no-preload-521072                                   | no-preload-521072            | jenkins | v1.34.0 | 27 Sep 24 01:35 UTC | 27 Sep 24 01:46 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-245911                 | embed-certs-245911           | jenkins | v1.34.0 | 27 Sep 24 01:37 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-612261                              | old-k8s-version-612261       | jenkins | v1.34.0 | 27 Sep 24 01:37 UTC | 27 Sep 24 01:37 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-245911                                  | embed-certs-245911           | jenkins | v1.34.0 | 27 Sep 24 01:37 UTC | 27 Sep 24 01:46 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-612261             | old-k8s-version-612261       | jenkins | v1.34.0 | 27 Sep 24 01:37 UTC | 27 Sep 24 01:37 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-612261                              | old-k8s-version-612261       | jenkins | v1.34.0 | 27 Sep 24 01:37 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-368295       | default-k8s-diff-port-368295 | jenkins | v1.34.0 | 27 Sep 24 01:37 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-368295 | jenkins | v1.34.0 | 27 Sep 24 01:37 UTC | 27 Sep 24 01:46 UTC |
	|         | default-k8s-diff-port-368295                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 01:37:48
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 01:37:48.335921   69534 out.go:345] Setting OutFile to fd 1 ...
	I0927 01:37:48.336188   69534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 01:37:48.336196   69534 out.go:358] Setting ErrFile to fd 2...
	I0927 01:37:48.336201   69534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 01:37:48.336368   69534 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-14935/.minikube/bin
	I0927 01:37:48.336901   69534 out.go:352] Setting JSON to false
	I0927 01:37:48.337754   69534 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8413,"bootTime":1727392655,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 01:37:48.337841   69534 start.go:139] virtualization: kvm guest
	I0927 01:37:48.340035   69534 out.go:177] * [default-k8s-diff-port-368295] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0927 01:37:48.341151   69534 out.go:177]   - MINIKUBE_LOCATION=19711
	I0927 01:37:48.341211   69534 notify.go:220] Checking for updates...
	I0927 01:37:48.343607   69534 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 01:37:48.344933   69534 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 01:37:48.346113   69534 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-14935/.minikube
	I0927 01:37:48.347142   69534 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0927 01:37:48.348261   69534 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 01:37:48.349842   69534 config.go:182] Loaded profile config "default-k8s-diff-port-368295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 01:37:48.350212   69534 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:37:48.350278   69534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:37:48.365272   69534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44347
	I0927 01:37:48.365662   69534 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:37:48.366137   69534 main.go:141] libmachine: Using API Version  1
	I0927 01:37:48.366162   69534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:37:48.366548   69534 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:37:48.366713   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:37:48.366938   69534 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 01:37:48.367236   69534 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:37:48.367265   69534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:37:48.381678   69534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39857
	I0927 01:37:48.382169   69534 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:37:48.382627   69534 main.go:141] libmachine: Using API Version  1
	I0927 01:37:48.382650   69534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:37:48.382911   69534 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:37:48.383023   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:37:48.415092   69534 out.go:177] * Using the kvm2 driver based on existing profile
	I0927 01:37:48.416340   69534 start.go:297] selected driver: kvm2
	I0927 01:37:48.416354   69534 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-368295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-368295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.83 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks
:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 01:37:48.416459   69534 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 01:37:48.417093   69534 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 01:37:48.417164   69534 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19711-14935/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0927 01:37:48.432138   69534 install.go:137] /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I0927 01:37:48.432534   69534 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 01:37:48.432563   69534 cni.go:84] Creating CNI manager for ""
	I0927 01:37:48.432604   69534 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:37:48.432635   69534 start.go:340] cluster config:
	{Name:default-k8s-diff-port-368295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-368295 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.83 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-h
ost Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 01:37:48.432737   69534 iso.go:125] acquiring lock: {Name:mkc202a14fbe20838e31e7efc444c4f65351f9ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 01:37:48.435057   69534 out.go:177] * Starting "default-k8s-diff-port-368295" primary control-plane node in "default-k8s-diff-port-368295" cluster
	I0927 01:37:48.436502   69534 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 01:37:48.436543   69534 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0927 01:37:48.436557   69534 cache.go:56] Caching tarball of preloaded images
	I0927 01:37:48.436624   69534 preload.go:172] Found /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0927 01:37:48.436634   69534 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0927 01:37:48.436718   69534 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/config.json ...
	I0927 01:37:48.436885   69534 start.go:360] acquireMachinesLock for default-k8s-diff-port-368295: {Name:mkef866a3f2eb57063758ef06140cca3a2e40b18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 01:37:50.823565   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:37:53.895575   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:37:59.975554   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:03.047567   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:09.127558   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:12.199592   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:18.279516   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:21.351643   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:27.435515   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:30.503604   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:36.583590   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:39.655593   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:45.735581   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:48.807587   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:54.887542   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:57.959601   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:04.039570   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:07.111555   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:13.191559   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:16.263625   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:22.343607   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:25.415561   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:31.495531   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:34.567598   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:40.647577   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:43.719602   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:49.799620   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:52.871596   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:58.951600   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:40:02.023635   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:40:08.103596   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:40:11.175614   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:40:17.255583   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:40:20.327522   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:40:26.407598   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:40:29.479580   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:40:32.484148   69234 start.go:364] duration metric: took 3m6.827897292s to acquireMachinesLock for "embed-certs-245911"
	I0927 01:40:32.484202   69234 start.go:96] Skipping create...Using existing machine configuration
	I0927 01:40:32.484210   69234 fix.go:54] fixHost starting: 
	I0927 01:40:32.484708   69234 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:40:32.484758   69234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:40:32.500356   69234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41925
	I0927 01:40:32.500869   69234 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:40:32.501356   69234 main.go:141] libmachine: Using API Version  1
	I0927 01:40:32.501376   69234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:40:32.501678   69234 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:40:32.501872   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	I0927 01:40:32.502014   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetState
	I0927 01:40:32.503863   69234 fix.go:112] recreateIfNeeded on embed-certs-245911: state=Stopped err=<nil>
	I0927 01:40:32.503884   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	W0927 01:40:32.504047   69234 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 01:40:32.506829   69234 out.go:177] * Restarting existing kvm2 VM for "embed-certs-245911" ...
	I0927 01:40:32.481407   68676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 01:40:32.481445   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetMachineName
	I0927 01:40:32.481786   68676 buildroot.go:166] provisioning hostname "no-preload-521072"
	I0927 01:40:32.481815   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetMachineName
	I0927 01:40:32.482031   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:40:32.483999   68676 machine.go:96] duration metric: took 4m37.428764548s to provisionDockerMachine
	I0927 01:40:32.484048   68676 fix.go:56] duration metric: took 4m37.449461246s for fixHost
	I0927 01:40:32.484055   68676 start.go:83] releasing machines lock for "no-preload-521072", held for 4m37.449534693s
	W0927 01:40:32.484075   68676 start.go:714] error starting host: provision: host is not running
	W0927 01:40:32.484176   68676 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0927 01:40:32.484183   68676 start.go:729] Will try again in 5 seconds ...
	I0927 01:40:32.508417   69234 main.go:141] libmachine: (embed-certs-245911) Calling .Start
	I0927 01:40:32.508598   69234 main.go:141] libmachine: (embed-certs-245911) Ensuring networks are active...
	I0927 01:40:32.509477   69234 main.go:141] libmachine: (embed-certs-245911) Ensuring network default is active
	I0927 01:40:32.509830   69234 main.go:141] libmachine: (embed-certs-245911) Ensuring network mk-embed-certs-245911 is active
	I0927 01:40:32.510208   69234 main.go:141] libmachine: (embed-certs-245911) Getting domain xml...
	I0927 01:40:32.510838   69234 main.go:141] libmachine: (embed-certs-245911) Creating domain...
	I0927 01:40:33.718381   69234 main.go:141] libmachine: (embed-certs-245911) Waiting to get IP...
	I0927 01:40:33.719223   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:33.719554   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:33.719611   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:33.719550   70125 retry.go:31] will retry after 265.21442ms: waiting for machine to come up
	I0927 01:40:33.986199   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:33.986700   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:33.986728   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:33.986658   70125 retry.go:31] will retry after 308.926274ms: waiting for machine to come up
	I0927 01:40:34.297317   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:34.297734   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:34.297755   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:34.297697   70125 retry.go:31] will retry after 466.52815ms: waiting for machine to come up
	I0927 01:40:34.765171   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:34.765616   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:34.765643   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:34.765570   70125 retry.go:31] will retry after 510.417499ms: waiting for machine to come up
	I0927 01:40:35.277175   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:35.277547   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:35.277576   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:35.277488   70125 retry.go:31] will retry after 522.865286ms: waiting for machine to come up
	I0927 01:40:37.485696   68676 start.go:360] acquireMachinesLock for no-preload-521072: {Name:mkef866a3f2eb57063758ef06140cca3a2e40b18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 01:40:35.802177   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:35.802620   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:35.802646   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:35.802584   70125 retry.go:31] will retry after 611.490499ms: waiting for machine to come up
	I0927 01:40:36.415249   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:36.415733   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:36.415793   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:36.415709   70125 retry.go:31] will retry after 744.420766ms: waiting for machine to come up
	I0927 01:40:37.161647   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:37.162076   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:37.162112   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:37.162022   70125 retry.go:31] will retry after 1.464523837s: waiting for machine to come up
	I0927 01:40:38.627935   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:38.628275   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:38.628302   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:38.628237   70125 retry.go:31] will retry after 1.840524237s: waiting for machine to come up
	I0927 01:40:40.471433   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:40.471823   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:40.471851   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:40.471781   70125 retry.go:31] will retry after 1.9424331s: waiting for machine to come up
	I0927 01:40:42.416527   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:42.416978   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:42.417007   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:42.416935   70125 retry.go:31] will retry after 2.553410529s: waiting for machine to come up
	I0927 01:40:44.973083   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:44.973446   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:44.973465   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:44.973402   70125 retry.go:31] will retry after 3.286267983s: waiting for machine to come up
	I0927 01:40:48.260792   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:48.261216   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:48.261241   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:48.261179   70125 retry.go:31] will retry after 3.302667041s: waiting for machine to come up
	I0927 01:40:52.800240   69333 start.go:364] duration metric: took 3m25.347970249s to acquireMachinesLock for "old-k8s-version-612261"
	I0927 01:40:52.800310   69333 start.go:96] Skipping create...Using existing machine configuration
	I0927 01:40:52.800317   69333 fix.go:54] fixHost starting: 
	I0927 01:40:52.800742   69333 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:40:52.800800   69333 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:40:52.818217   69333 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45095
	I0927 01:40:52.818644   69333 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:40:52.819065   69333 main.go:141] libmachine: Using API Version  1
	I0927 01:40:52.819086   69333 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:40:52.819408   69333 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:40:52.819544   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	I0927 01:40:52.819646   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetState
	I0927 01:40:52.820921   69333 fix.go:112] recreateIfNeeded on old-k8s-version-612261: state=Stopped err=<nil>
	I0927 01:40:52.820956   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	W0927 01:40:52.821110   69333 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 01:40:52.823209   69333 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-612261" ...
	I0927 01:40:51.567691   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.568205   69234 main.go:141] libmachine: (embed-certs-245911) Found IP for machine: 192.168.39.158
	I0927 01:40:51.568241   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has current primary IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.568250   69234 main.go:141] libmachine: (embed-certs-245911) Reserving static IP address...
	I0927 01:40:51.568731   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "embed-certs-245911", mac: "52:54:00:bd:42:a3", ip: "192.168.39.158"} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:51.568764   69234 main.go:141] libmachine: (embed-certs-245911) DBG | skip adding static IP to network mk-embed-certs-245911 - found existing host DHCP lease matching {name: "embed-certs-245911", mac: "52:54:00:bd:42:a3", ip: "192.168.39.158"}
	I0927 01:40:51.568781   69234 main.go:141] libmachine: (embed-certs-245911) Reserved static IP address: 192.168.39.158
	I0927 01:40:51.568798   69234 main.go:141] libmachine: (embed-certs-245911) Waiting for SSH to be available...
	I0927 01:40:51.568806   69234 main.go:141] libmachine: (embed-certs-245911) DBG | Getting to WaitForSSH function...
	I0927 01:40:51.570819   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.571139   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:51.571167   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.571321   69234 main.go:141] libmachine: (embed-certs-245911) DBG | Using SSH client type: external
	I0927 01:40:51.571370   69234 main.go:141] libmachine: (embed-certs-245911) DBG | Using SSH private key: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/embed-certs-245911/id_rsa (-rw-------)
	I0927 01:40:51.571401   69234 main.go:141] libmachine: (embed-certs-245911) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.158 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19711-14935/.minikube/machines/embed-certs-245911/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 01:40:51.571414   69234 main.go:141] libmachine: (embed-certs-245911) DBG | About to run SSH command:
	I0927 01:40:51.571422   69234 main.go:141] libmachine: (embed-certs-245911) DBG | exit 0
	I0927 01:40:51.691525   69234 main.go:141] libmachine: (embed-certs-245911) DBG | SSH cmd err, output: <nil>: 
	I0927 01:40:51.691953   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetConfigRaw
	I0927 01:40:51.692573   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetIP
	I0927 01:40:51.695121   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.695541   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:51.695572   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.695871   69234 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/embed-certs-245911/config.json ...
	I0927 01:40:51.696087   69234 machine.go:93] provisionDockerMachine start ...
	I0927 01:40:51.696109   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	I0927 01:40:51.696312   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:40:51.698740   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.699086   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:51.699112   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.699229   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:40:51.699415   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:51.699552   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:51.699679   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:40:51.699810   69234 main.go:141] libmachine: Using SSH client type: native
	I0927 01:40:51.699998   69234 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0927 01:40:51.700011   69234 main.go:141] libmachine: About to run SSH command:
	hostname
	I0927 01:40:51.799534   69234 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0927 01:40:51.799559   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetMachineName
	I0927 01:40:51.799764   69234 buildroot.go:166] provisioning hostname "embed-certs-245911"
	I0927 01:40:51.799792   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetMachineName
	I0927 01:40:51.799987   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:40:51.802464   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.802819   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:51.802844   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.802960   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:40:51.803131   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:51.803290   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:51.803502   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:40:51.803672   69234 main.go:141] libmachine: Using SSH client type: native
	I0927 01:40:51.803868   69234 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0927 01:40:51.803888   69234 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-245911 && echo "embed-certs-245911" | sudo tee /etc/hostname
	I0927 01:40:51.917988   69234 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-245911
	
	I0927 01:40:51.918019   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:40:51.920484   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.920800   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:51.920831   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.921041   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:40:51.921224   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:51.921383   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:51.921511   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:40:51.921693   69234 main.go:141] libmachine: Using SSH client type: native
	I0927 01:40:51.921883   69234 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0927 01:40:51.921901   69234 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-245911' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-245911/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-245911' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 01:40:52.028582   69234 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 01:40:52.028609   69234 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19711-14935/.minikube CaCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19711-14935/.minikube}
	I0927 01:40:52.028672   69234 buildroot.go:174] setting up certificates
	I0927 01:40:52.028686   69234 provision.go:84] configureAuth start
	I0927 01:40:52.028704   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetMachineName
	I0927 01:40:52.029001   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetIP
	I0927 01:40:52.031742   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.032088   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:52.032117   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.032273   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:40:52.034392   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.034733   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:52.034754   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.034905   69234 provision.go:143] copyHostCerts
	I0927 01:40:52.034956   69234 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem, removing ...
	I0927 01:40:52.034969   69234 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem
	I0927 01:40:52.035042   69234 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem (1078 bytes)
	I0927 01:40:52.035172   69234 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem, removing ...
	I0927 01:40:52.035185   69234 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem
	I0927 01:40:52.035224   69234 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem (1123 bytes)
	I0927 01:40:52.035319   69234 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem, removing ...
	I0927 01:40:52.035329   69234 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem
	I0927 01:40:52.035363   69234 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem (1675 bytes)
	I0927 01:40:52.035433   69234 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem org=jenkins.embed-certs-245911 san=[127.0.0.1 192.168.39.158 embed-certs-245911 localhost minikube]
	I0927 01:40:52.206591   69234 provision.go:177] copyRemoteCerts
	I0927 01:40:52.206657   69234 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 01:40:52.206724   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:40:52.209445   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.209770   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:52.209792   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.209995   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:40:52.210234   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:52.210416   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:40:52.210578   69234 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/embed-certs-245911/id_rsa Username:docker}
	I0927 01:40:52.290176   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0927 01:40:52.313645   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0927 01:40:52.336446   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0927 01:40:52.359182   69234 provision.go:87] duration metric: took 330.481958ms to configureAuth
	I0927 01:40:52.359214   69234 buildroot.go:189] setting minikube options for container-runtime
	I0927 01:40:52.359464   69234 config.go:182] Loaded profile config "embed-certs-245911": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 01:40:52.359551   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:40:52.362163   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.362488   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:52.362513   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.362670   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:40:52.362826   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:52.362976   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:52.363133   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:40:52.363334   69234 main.go:141] libmachine: Using SSH client type: native
	I0927 01:40:52.363532   69234 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0927 01:40:52.363553   69234 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 01:40:52.574326   69234 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 01:40:52.574354   69234 machine.go:96] duration metric: took 878.253718ms to provisionDockerMachine
	I0927 01:40:52.574368   69234 start.go:293] postStartSetup for "embed-certs-245911" (driver="kvm2")
	I0927 01:40:52.574381   69234 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 01:40:52.574398   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	I0927 01:40:52.574688   69234 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 01:40:52.574714   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:40:52.577727   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.578035   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:52.578060   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.578227   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:40:52.578411   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:52.578555   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:40:52.578735   69234 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/embed-certs-245911/id_rsa Username:docker}
	I0927 01:40:52.658636   69234 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 01:40:52.663048   69234 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 01:40:52.663077   69234 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/addons for local assets ...
	I0927 01:40:52.663147   69234 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/files for local assets ...
	I0927 01:40:52.663223   69234 filesync.go:149] local asset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> 221382.pem in /etc/ssl/certs
	I0927 01:40:52.663322   69234 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 01:40:52.673347   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /etc/ssl/certs/221382.pem (1708 bytes)
	I0927 01:40:52.697092   69234 start.go:296] duration metric: took 122.71069ms for postStartSetup
	I0927 01:40:52.697126   69234 fix.go:56] duration metric: took 20.212915975s for fixHost
	I0927 01:40:52.697145   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:40:52.699817   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.700173   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:52.700202   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.700364   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:40:52.700558   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:52.700735   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:52.700921   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:40:52.701097   69234 main.go:141] libmachine: Using SSH client type: native
	I0927 01:40:52.701269   69234 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0927 01:40:52.701285   69234 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 01:40:52.800080   69234 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727401252.775762391
	
	I0927 01:40:52.800102   69234 fix.go:216] guest clock: 1727401252.775762391
	I0927 01:40:52.800111   69234 fix.go:229] Guest: 2024-09-27 01:40:52.775762391 +0000 UTC Remote: 2024-09-27 01:40:52.697129165 +0000 UTC m=+207.179045808 (delta=78.633226ms)
	I0927 01:40:52.800145   69234 fix.go:200] guest clock delta is within tolerance: 78.633226ms
	I0927 01:40:52.800152   69234 start.go:83] releasing machines lock for "embed-certs-245911", held for 20.315972034s
	I0927 01:40:52.800183   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	I0927 01:40:52.800495   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetIP
	I0927 01:40:52.803196   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.803657   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:52.803700   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.803874   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	I0927 01:40:52.804419   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	I0927 01:40:52.804610   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	I0927 01:40:52.804731   69234 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 01:40:52.804771   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:40:52.804813   69234 ssh_runner.go:195] Run: cat /version.json
	I0927 01:40:52.804837   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:40:52.807320   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.807346   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.807680   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:52.807731   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:52.807759   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.807807   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.807916   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:40:52.808070   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:40:52.808150   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:52.808262   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:52.808331   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:40:52.808384   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:40:52.808468   69234 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/embed-certs-245911/id_rsa Username:docker}
	I0927 01:40:52.808522   69234 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/embed-certs-245911/id_rsa Username:docker}
	I0927 01:40:52.908963   69234 ssh_runner.go:195] Run: systemctl --version
	I0927 01:40:52.915158   69234 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 01:40:53.067605   69234 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 01:40:53.074171   69234 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 01:40:53.074241   69234 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 01:40:53.091718   69234 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0927 01:40:53.091742   69234 start.go:495] detecting cgroup driver to use...
	I0927 01:40:53.091813   69234 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 01:40:53.108730   69234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 01:40:53.122920   69234 docker.go:217] disabling cri-docker service (if available) ...
	I0927 01:40:53.122984   69234 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 01:40:53.137487   69234 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 01:40:53.152420   69234 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 01:40:53.269491   69234 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 01:40:53.417893   69234 docker.go:233] disabling docker service ...
	I0927 01:40:53.417951   69234 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 01:40:53.442201   69234 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 01:40:53.459920   69234 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 01:40:53.589768   69234 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 01:40:53.719203   69234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 01:40:53.733145   69234 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 01:40:53.751853   69234 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0927 01:40:53.751919   69234 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:40:53.763230   69234 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 01:40:53.763294   69234 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:40:53.774864   69234 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:40:53.786149   69234 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:40:53.797167   69234 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 01:40:53.808495   69234 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:40:53.819285   69234 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:40:53.838497   69234 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:40:53.850490   69234 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 01:40:53.860309   69234 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0927 01:40:53.860377   69234 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0927 01:40:53.875533   69234 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 01:40:53.885752   69234 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:40:54.014352   69234 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0927 01:40:54.107866   69234 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 01:40:54.107926   69234 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 01:40:54.113206   69234 start.go:563] Will wait 60s for crictl version
	I0927 01:40:54.113256   69234 ssh_runner.go:195] Run: which crictl
	I0927 01:40:54.117229   69234 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 01:40:54.156365   69234 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0927 01:40:54.156459   69234 ssh_runner.go:195] Run: crio --version
	I0927 01:40:54.183974   69234 ssh_runner.go:195] Run: crio --version
	I0927 01:40:54.214440   69234 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0927 01:40:54.215714   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetIP
	I0927 01:40:54.218624   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:54.218975   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:54.219013   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:54.219180   69234 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0927 01:40:54.223450   69234 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 01:40:54.236761   69234 kubeadm.go:883] updating cluster {Name:embed-certs-245911 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-245911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.158 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 01:40:54.236923   69234 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 01:40:54.236989   69234 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 01:40:54.276635   69234 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0927 01:40:54.276708   69234 ssh_runner.go:195] Run: which lz4
	I0927 01:40:54.281055   69234 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0927 01:40:54.285439   69234 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0927 01:40:54.285472   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0927 01:40:52.824650   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .Start
	I0927 01:40:52.824802   69333 main.go:141] libmachine: (old-k8s-version-612261) Ensuring networks are active...
	I0927 01:40:52.825590   69333 main.go:141] libmachine: (old-k8s-version-612261) Ensuring network default is active
	I0927 01:40:52.825908   69333 main.go:141] libmachine: (old-k8s-version-612261) Ensuring network mk-old-k8s-version-612261 is active
	I0927 01:40:52.826326   69333 main.go:141] libmachine: (old-k8s-version-612261) Getting domain xml...
	I0927 01:40:52.827108   69333 main.go:141] libmachine: (old-k8s-version-612261) Creating domain...
	I0927 01:40:54.071322   69333 main.go:141] libmachine: (old-k8s-version-612261) Waiting to get IP...
	I0927 01:40:54.072357   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:40:54.072756   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:40:54.072821   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:40:54.072738   70279 retry.go:31] will retry after 264.648837ms: waiting for machine to come up
	I0927 01:40:54.339366   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:40:54.339799   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:40:54.339827   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:40:54.339731   70279 retry.go:31] will retry after 343.432635ms: waiting for machine to come up
	I0927 01:40:54.685260   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:40:54.685746   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:40:54.685780   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:40:54.685714   70279 retry.go:31] will retry after 455.276623ms: waiting for machine to come up
	I0927 01:40:55.142206   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:40:55.142679   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:40:55.142708   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:40:55.142637   70279 retry.go:31] will retry after 419.074502ms: waiting for machine to come up
	I0927 01:40:55.563324   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:40:55.565342   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:40:55.565368   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:40:55.565287   70279 retry.go:31] will retry after 587.161471ms: waiting for machine to come up
	I0927 01:40:56.154584   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:40:56.155182   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:40:56.155220   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:40:56.155109   70279 retry.go:31] will retry after 782.426926ms: waiting for machine to come up
	I0927 01:40:56.938784   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:40:56.939201   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:40:56.939228   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:40:56.939132   70279 retry.go:31] will retry after 781.231902ms: waiting for machine to come up
	I0927 01:40:55.723619   69234 crio.go:462] duration metric: took 1.442589436s to copy over tarball
	I0927 01:40:55.723705   69234 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0927 01:40:57.775673   69234 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.051936146s)
	I0927 01:40:57.775697   69234 crio.go:469] duration metric: took 2.052045538s to extract the tarball
	I0927 01:40:57.775704   69234 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0927 01:40:57.812769   69234 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 01:40:57.853219   69234 crio.go:514] all images are preloaded for cri-o runtime.
	I0927 01:40:57.853240   69234 cache_images.go:84] Images are preloaded, skipping loading
	I0927 01:40:57.853248   69234 kubeadm.go:934] updating node { 192.168.39.158 8443 v1.31.1 crio true true} ...
	I0927 01:40:57.853354   69234 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-245911 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.158
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-245911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 01:40:57.853495   69234 ssh_runner.go:195] Run: crio config
	I0927 01:40:57.908273   69234 cni.go:84] Creating CNI manager for ""
	I0927 01:40:57.908301   69234 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:40:57.908322   69234 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 01:40:57.908356   69234 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.158 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-245911 NodeName:embed-certs-245911 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.158"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.158 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0927 01:40:57.908542   69234 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.158
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-245911"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.158
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.158"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
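
	The kubeadm config printed above is a single multi-document YAML file (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration separated by ---) that the log later copies to /var/tmp/minikube/kubeadm.yaml. As a quick way to sanity-check such a file, here is a minimal Go sketch, standard library only, that lists the apiVersion/kind of every document; the helper name listKinds is illustrative and not part of minikube:

package main

import (
	"fmt"
	"os"
	"strings"
)

// listKinds prints the apiVersion and kind header of every document in a
// multi-document YAML file such as /var/tmp/minikube/kubeadm.yaml.
func listKinds(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	for _, doc := range strings.Split(string(data), "\n---") {
		var apiVersion, kind string
		for _, line := range strings.Split(doc, "\n") {
			line = strings.TrimSpace(line)
			if v, ok := strings.CutPrefix(line, "apiVersion:"); ok {
				apiVersion = strings.TrimSpace(v)
			}
			if v, ok := strings.CutPrefix(line, "kind:"); ok {
				kind = strings.TrimSpace(v)
			}
		}
		if kind != "" {
			fmt.Printf("%s (%s)\n", kind, apiVersion)
		}
	}
	return nil
}

func main() {
	if err := listKinds("/var/tmp/minikube/kubeadm.yaml"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}

	Against the config above this would print InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration together with their kubeadm.k8s.io, kubelet.config.k8s.io and kubeproxy.config.k8s.io API groups.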
	
	I0927 01:40:57.908613   69234 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 01:40:57.918923   69234 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 01:40:57.919021   69234 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0927 01:40:57.928576   69234 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0927 01:40:57.945515   69234 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 01:40:57.962239   69234 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0927 01:40:57.979722   69234 ssh_runner.go:195] Run: grep 192.168.39.158	control-plane.minikube.internal$ /etc/hosts
	I0927 01:40:57.983709   69234 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.158	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 01:40:57.996181   69234 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:40:58.119502   69234 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 01:40:58.137022   69234 certs.go:68] Setting up /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/embed-certs-245911 for IP: 192.168.39.158
	I0927 01:40:58.137048   69234 certs.go:194] generating shared ca certs ...
	I0927 01:40:58.137068   69234 certs.go:226] acquiring lock for ca certs: {Name:mkdfc5b7e93f77f5ae72cc653545624244421aa5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:40:58.137250   69234 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key
	I0927 01:40:58.137312   69234 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key
	I0927 01:40:58.137324   69234 certs.go:256] generating profile certs ...
	I0927 01:40:58.137444   69234 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/embed-certs-245911/client.key
	I0927 01:40:58.137522   69234 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/embed-certs-245911/apiserver.key.e289c840
	I0927 01:40:58.137574   69234 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/embed-certs-245911/proxy-client.key
	I0927 01:40:58.137731   69234 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem (1338 bytes)
	W0927 01:40:58.137774   69234 certs.go:480] ignoring /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138_empty.pem, impossibly tiny 0 bytes
	I0927 01:40:58.137787   69234 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem (1671 bytes)
	I0927 01:40:58.137819   69234 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem (1078 bytes)
	I0927 01:40:58.137850   69234 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem (1123 bytes)
	I0927 01:40:58.137883   69234 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem (1675 bytes)
	I0927 01:40:58.137928   69234 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem (1708 bytes)
	I0927 01:40:58.138551   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 01:40:58.179399   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0927 01:40:58.211297   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 01:40:58.245549   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0927 01:40:58.276837   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/embed-certs-245911/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0927 01:40:58.313750   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/embed-certs-245911/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0927 01:40:58.338145   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/embed-certs-245911/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 01:40:58.361373   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/embed-certs-245911/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0927 01:40:58.384790   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 01:40:58.407617   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem --> /usr/share/ca-certificates/22138.pem (1338 bytes)
	I0927 01:40:58.430621   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /usr/share/ca-certificates/221382.pem (1708 bytes)
	I0927 01:40:58.453382   69234 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 01:40:58.470177   69234 ssh_runner.go:195] Run: openssl version
	I0927 01:40:58.476280   69234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221382.pem && ln -fs /usr/share/ca-certificates/221382.pem /etc/ssl/certs/221382.pem"
	I0927 01:40:58.489039   69234 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221382.pem
	I0927 01:40:58.493726   69234 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 00:32 /usr/share/ca-certificates/221382.pem
	I0927 01:40:58.493780   69234 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221382.pem
	I0927 01:40:58.499856   69234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221382.pem /etc/ssl/certs/3ec20f2e.0"
	I0927 01:40:58.511032   69234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 01:40:58.521694   69234 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:40:58.525991   69234 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:16 /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:40:58.526031   69234 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:40:58.531619   69234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 01:40:58.542017   69234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22138.pem && ln -fs /usr/share/ca-certificates/22138.pem /etc/ssl/certs/22138.pem"
	I0927 01:40:58.552591   69234 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22138.pem
	I0927 01:40:58.557047   69234 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 00:32 /usr/share/ca-certificates/22138.pem
	I0927 01:40:58.557086   69234 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22138.pem
	I0927 01:40:58.562874   69234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22138.pem /etc/ssl/certs/51391683.0"
	I0927 01:40:58.574052   69234 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 01:40:58.578537   69234 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0927 01:40:58.584323   69234 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0927 01:40:58.590033   69234 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0927 01:40:58.596013   69234 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0927 01:40:58.601572   69234 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0927 01:40:58.606980   69234 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
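
	The run of openssl x509 -checkend 86400 commands above verifies that each existing control-plane certificate stays valid for at least another 86400 seconds (24 hours) before the restart continues. A minimal Go sketch of the same check using only the standard library (the helper name certValidFor is illustrative; the path is one of the certs checked in the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// certValidFor reports whether the PEM-encoded certificate at path is still
// valid for at least the given duration, mirroring `openssl x509 -checkend`.
func certValidFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block found in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := certValidFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("valid for another 24h:", ok)
}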
	I0927 01:40:58.612554   69234 kubeadm.go:392] StartCluster: {Name:embed-certs-245911 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-245911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.158 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 01:40:58.612648   69234 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0927 01:40:58.612704   69234 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 01:40:58.649228   69234 cri.go:89] found id: ""
	I0927 01:40:58.649306   69234 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0927 01:40:58.661599   69234 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0927 01:40:58.661628   69234 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0927 01:40:58.661688   69234 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0927 01:40:58.671907   69234 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0927 01:40:58.672851   69234 kubeconfig.go:125] found "embed-certs-245911" server: "https://192.168.39.158:8443"
	I0927 01:40:58.674753   69234 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0927 01:40:58.684614   69234 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.158
	I0927 01:40:58.684643   69234 kubeadm.go:1160] stopping kube-system containers ...
	I0927 01:40:58.684652   69234 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0927 01:40:58.684715   69234 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 01:40:58.726714   69234 cri.go:89] found id: ""
	I0927 01:40:58.726816   69234 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0927 01:40:58.743675   69234 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 01:40:58.753456   69234 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 01:40:58.753485   69234 kubeadm.go:157] found existing configuration files:
	
	I0927 01:40:58.753535   69234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 01:40:58.762724   69234 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 01:40:58.762821   69234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 01:40:58.772558   69234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 01:40:58.781732   69234 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 01:40:58.781790   69234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 01:40:58.791109   69234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 01:40:58.800066   69234 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 01:40:58.800127   69234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 01:40:58.809338   69234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 01:40:58.818214   69234 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 01:40:58.818260   69234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 01:40:58.828049   69234 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 01:40:58.837606   69234 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:40:58.942395   69234 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:40:59.758951   69234 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:40:59.966377   69234 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:00.036702   69234 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:00.126663   69234 api_server.go:52] waiting for apiserver process to appear ...
	I0927 01:41:00.126743   69234 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:40:57.722147   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:40:57.722637   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:40:57.722657   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:40:57.722593   70279 retry.go:31] will retry after 1.223133601s: waiting for machine to come up
	I0927 01:40:58.947836   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:40:58.948362   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:40:58.948388   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:40:58.948326   70279 retry.go:31] will retry after 1.155368003s: waiting for machine to come up
	I0927 01:41:00.105812   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:00.106288   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:41:00.106356   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:41:00.106280   70279 retry.go:31] will retry after 2.324904017s: waiting for machine to come up
	I0927 01:41:00.627542   69234 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:01.126971   69234 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:01.626940   69234 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:02.127478   69234 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:02.176746   69234 api_server.go:72] duration metric: took 2.050081672s to wait for apiserver process to appear ...
	I0927 01:41:02.176775   69234 api_server.go:88] waiting for apiserver healthz status ...
	I0927 01:41:02.176798   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:41:02.177442   69234 api_server.go:269] stopped: https://192.168.39.158:8443/healthz: Get "https://192.168.39.158:8443/healthz": dial tcp 192.168.39.158:8443: connect: connection refused
	I0927 01:41:02.677488   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:41:04.824718   69234 api_server.go:279] https://192.168.39.158:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0927 01:41:04.824748   69234 api_server.go:103] status: https://192.168.39.158:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0927 01:41:04.824763   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:41:04.850790   69234 api_server.go:279] https://192.168.39.158:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0927 01:41:04.850820   69234 api_server.go:103] status: https://192.168.39.158:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0927 01:41:05.177167   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:41:05.201660   69234 api_server.go:279] https://192.168.39.158:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:41:05.201696   69234 api_server.go:103] status: https://192.168.39.158:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:41:02.432597   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:02.433066   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:41:02.433096   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:41:02.433026   70279 retry.go:31] will retry after 2.598889471s: waiting for machine to come up
	I0927 01:41:05.034614   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:05.035001   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:41:05.035023   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:41:05.034973   70279 retry.go:31] will retry after 3.064943329s: waiting for machine to come up
	I0927 01:41:05.677514   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:41:05.683506   69234 api_server.go:279] https://192.168.39.158:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:41:05.683543   69234 api_server.go:103] status: https://192.168.39.158:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:41:06.177064   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:41:06.181304   69234 api_server.go:279] https://192.168.39.158:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:41:06.181339   69234 api_server.go:103] status: https://192.168.39.158:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:41:06.676872   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:41:06.681269   69234 api_server.go:279] https://192.168.39.158:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:41:06.681297   69234 api_server.go:103] status: https://192.168.39.158:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:41:07.176902   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:41:07.181397   69234 api_server.go:279] https://192.168.39.158:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:41:07.181425   69234 api_server.go:103] status: https://192.168.39.158:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:41:07.677457   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:41:07.682057   69234 api_server.go:279] https://192.168.39.158:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:41:07.682087   69234 api_server.go:103] status: https://192.168.39.158:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:41:08.177696   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:41:08.181752   69234 api_server.go:279] https://192.168.39.158:8443/healthz returned 200:
	ok
	I0927 01:41:08.188257   69234 api_server.go:141] control plane version: v1.31.1
	I0927 01:41:08.188278   69234 api_server.go:131] duration metric: took 6.011495616s to wait for apiserver health ...
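
	The healthz wait above polls https://192.168.39.158:8443/healthz until it answers 200 OK, tolerating the intermediate 403 (anonymous request before the RBAC bootstrap roles exist) and 500 (post-start hooks such as rbac/bootstrap-roles still pending) responses seen in the log. A minimal sketch of such a poll with the Go standard library, assuming certificate verification is skipped as in an unauthenticated probe (waitHealthz is an illustrative name, not minikube's function):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint until it answers 200 OK
// or the deadline passes, printing the body of each failed check.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitHealthz("https://192.168.39.158:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}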
	I0927 01:41:08.188285   69234 cni.go:84] Creating CNI manager for ""
	I0927 01:41:08.188291   69234 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:41:08.190206   69234 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0927 01:41:08.191584   69234 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0927 01:41:08.202370   69234 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0927 01:41:08.224843   69234 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 01:41:08.234247   69234 system_pods.go:59] 8 kube-system pods found
	I0927 01:41:08.234275   69234 system_pods.go:61] "coredns-7c65d6cfc9-f2vxv" [3eed941e-e943-490b-a0a8-d543cec18a89] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0927 01:41:08.234284   69234 system_pods.go:61] "etcd-embed-certs-245911" [f88581ff-3747-4fe5-a4a2-6259c3b4554e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0927 01:41:08.234291   69234 system_pods.go:61] "kube-apiserver-embed-certs-245911" [3f1efb25-6e30-4d5f-baba-3e98b6fe531e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0927 01:41:08.234298   69234 system_pods.go:61] "kube-controller-manager-embed-certs-245911" [a624fc8d-fbe3-4b63-8a88-5f8069b21095] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0927 01:41:08.234302   69234 system_pods.go:61] "kube-proxy-pjf8v" [a1b76e67-803a-43fe-bff6-a4b0ddc246a1] Running
	I0927 01:41:08.234309   69234 system_pods.go:61] "kube-scheduler-embed-certs-245911" [0f7c146b-e2b7-4110-b010-f4599d0da410] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0927 01:41:08.234313   69234 system_pods.go:61] "metrics-server-6867b74b74-k8mdf" [6d1e68fb-5187-4bc6-abdb-44f598e351c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 01:41:08.234317   69234 system_pods.go:61] "storage-provisioner" [dc0a7806-bee8-4127-8218-b2e48fa8500b] Running
	I0927 01:41:08.234323   69234 system_pods.go:74] duration metric: took 9.462578ms to wait for pod list to return data ...
	I0927 01:41:08.234333   69234 node_conditions.go:102] verifying NodePressure condition ...
	I0927 01:41:08.238433   69234 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 01:41:08.238455   69234 node_conditions.go:123] node cpu capacity is 2
	I0927 01:41:08.238468   69234 node_conditions.go:105] duration metric: took 4.128775ms to run NodePressure ...
	I0927 01:41:08.238483   69234 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:08.502161   69234 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0927 01:41:08.506267   69234 kubeadm.go:739] kubelet initialised
	I0927 01:41:08.506290   69234 kubeadm.go:740] duration metric: took 4.099692ms waiting for restarted kubelet to initialise ...
	I0927 01:41:08.506299   69234 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:41:08.510964   69234 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-f2vxv" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:08.515262   69234 pod_ready.go:98] node "embed-certs-245911" hosting pod "coredns-7c65d6cfc9-f2vxv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-245911" has status "Ready":"False"
	I0927 01:41:08.515279   69234 pod_ready.go:82] duration metric: took 4.294632ms for pod "coredns-7c65d6cfc9-f2vxv" in "kube-system" namespace to be "Ready" ...
	E0927 01:41:08.515286   69234 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-245911" hosting pod "coredns-7c65d6cfc9-f2vxv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-245911" has status "Ready":"False"
	I0927 01:41:08.515298   69234 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:08.519627   69234 pod_ready.go:98] node "embed-certs-245911" hosting pod "etcd-embed-certs-245911" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-245911" has status "Ready":"False"
	I0927 01:41:08.519641   69234 pod_ready.go:82] duration metric: took 4.313975ms for pod "etcd-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	E0927 01:41:08.519648   69234 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-245911" hosting pod "etcd-embed-certs-245911" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-245911" has status "Ready":"False"
	I0927 01:41:08.519653   69234 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:08.523152   69234 pod_ready.go:98] node "embed-certs-245911" hosting pod "kube-apiserver-embed-certs-245911" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-245911" has status "Ready":"False"
	I0927 01:41:08.523165   69234 pod_ready.go:82] duration metric: took 3.50412ms for pod "kube-apiserver-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	E0927 01:41:08.523177   69234 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-245911" hosting pod "kube-apiserver-embed-certs-245911" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-245911" has status "Ready":"False"
	I0927 01:41:08.523186   69234 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:08.628811   69234 pod_ready.go:98] node "embed-certs-245911" hosting pod "kube-controller-manager-embed-certs-245911" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-245911" has status "Ready":"False"
	I0927 01:41:08.628847   69234 pod_ready.go:82] duration metric: took 105.648464ms for pod "kube-controller-manager-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	E0927 01:41:08.628859   69234 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-245911" hosting pod "kube-controller-manager-embed-certs-245911" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-245911" has status "Ready":"False"
	I0927 01:41:08.628868   69234 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-pjf8v" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:09.027358   69234 pod_ready.go:93] pod "kube-proxy-pjf8v" in "kube-system" namespace has status "Ready":"True"
	I0927 01:41:09.027383   69234 pod_ready.go:82] duration metric: took 398.507928ms for pod "kube-proxy-pjf8v" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:09.027393   69234 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
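
	Each pod_ready wait above checks the pod's Ready condition and skips the check while the hosting node itself reports Ready=False. A rough client-go sketch of the per-pod check, assuming a kubeconfig path (the path below is a placeholder, the pod name is taken from the log, and podReady is an illustrative helper rather than minikube's own):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the named kube-system pod has its Ready
// condition set to True.
func podReady(cs *kubernetes.Clientset, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // placeholder kubeconfig path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ok, err := podReady(cs, "kube-proxy-pjf8v")
	fmt.Println(ok, err)
}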
	I0927 01:41:08.101834   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:08.102324   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:41:08.102358   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:41:08.102283   70279 retry.go:31] will retry after 4.242138543s: waiting for machine to come up
	I0927 01:41:13.708458   69534 start.go:364] duration metric: took 3m25.271525685s to acquireMachinesLock for "default-k8s-diff-port-368295"
	I0927 01:41:13.708525   69534 start.go:96] Skipping create...Using existing machine configuration
	I0927 01:41:13.708533   69534 fix.go:54] fixHost starting: 
	I0927 01:41:13.708923   69534 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:41:13.708979   69534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:41:13.726306   69534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46399
	I0927 01:41:13.726732   69534 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:41:13.727228   69534 main.go:141] libmachine: Using API Version  1
	I0927 01:41:13.727252   69534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:41:13.727579   69534 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:41:13.727781   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:41:13.727975   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetState
	I0927 01:41:13.729621   69534 fix.go:112] recreateIfNeeded on default-k8s-diff-port-368295: state=Stopped err=<nil>
	I0927 01:41:13.729657   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	W0927 01:41:13.729826   69534 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 01:41:13.731730   69534 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-368295" ...
	I0927 01:41:12.347378   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.347831   69333 main.go:141] libmachine: (old-k8s-version-612261) Found IP for machine: 192.168.72.129
	I0927 01:41:12.347855   69333 main.go:141] libmachine: (old-k8s-version-612261) Reserving static IP address...
	I0927 01:41:12.347872   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has current primary IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.348468   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "old-k8s-version-612261", mac: "52:54:00:f1:a6:2e", ip: "192.168.72.129"} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:12.348494   69333 main.go:141] libmachine: (old-k8s-version-612261) Reserved static IP address: 192.168.72.129
	I0927 01:41:12.348507   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | skip adding static IP to network mk-old-k8s-version-612261 - found existing host DHCP lease matching {name: "old-k8s-version-612261", mac: "52:54:00:f1:a6:2e", ip: "192.168.72.129"}
	I0927 01:41:12.348518   69333 main.go:141] libmachine: (old-k8s-version-612261) Waiting for SSH to be available...
	I0927 01:41:12.348537   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | Getting to WaitForSSH function...
	I0927 01:41:12.350917   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.351287   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:12.351335   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.351464   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | Using SSH client type: external
	I0927 01:41:12.351485   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | Using SSH private key: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/old-k8s-version-612261/id_rsa (-rw-------)
	I0927 01:41:12.351516   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.129 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19711-14935/.minikube/machines/old-k8s-version-612261/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 01:41:12.351525   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | About to run SSH command:
	I0927 01:41:12.351533   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | exit 0
	I0927 01:41:12.471347   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | SSH cmd err, output: <nil>: 
	I0927 01:41:12.471724   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetConfigRaw
	I0927 01:41:12.472352   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetIP
	I0927 01:41:12.474886   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.475299   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:12.475340   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.475628   69333 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/config.json ...
	I0927 01:41:12.475857   69333 machine.go:93] provisionDockerMachine start ...
	I0927 01:41:12.475879   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	I0927 01:41:12.476115   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:12.478594   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.478918   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:12.478945   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.479126   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:41:12.479340   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:12.479536   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:12.479695   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:41:12.479859   69333 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:12.480093   69333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I0927 01:41:12.480116   69333 main.go:141] libmachine: About to run SSH command:
	hostname
	I0927 01:41:12.579536   69333 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0927 01:41:12.579562   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetMachineName
	I0927 01:41:12.579785   69333 buildroot.go:166] provisioning hostname "old-k8s-version-612261"
	I0927 01:41:12.579798   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetMachineName
	I0927 01:41:12.579965   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:12.582679   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.583001   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:12.583027   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.583166   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:41:12.583372   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:12.583562   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:12.583727   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:41:12.583924   69333 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:12.584169   69333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I0927 01:41:12.584187   69333 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-612261 && echo "old-k8s-version-612261" | sudo tee /etc/hostname
	I0927 01:41:12.702223   69333 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-612261
	
	I0927 01:41:12.702252   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:12.705201   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.705564   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:12.705601   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.705817   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:41:12.706012   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:12.706154   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:12.706344   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:41:12.706538   69333 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:12.706720   69333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I0927 01:41:12.706738   69333 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-612261' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-612261/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-612261' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 01:41:12.816316   69333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 01:41:12.816343   69333 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19711-14935/.minikube CaCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19711-14935/.minikube}
	I0927 01:41:12.816376   69333 buildroot.go:174] setting up certificates
	I0927 01:41:12.816386   69333 provision.go:84] configureAuth start
	I0927 01:41:12.816394   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetMachineName
	I0927 01:41:12.816678   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetIP
	I0927 01:41:12.819190   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.819487   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:12.819511   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.819696   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:12.821843   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.822166   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:12.822203   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.822382   69333 provision.go:143] copyHostCerts
	I0927 01:41:12.822453   69333 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem, removing ...
	I0927 01:41:12.822466   69333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem
	I0927 01:41:12.822533   69333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem (1675 bytes)
	I0927 01:41:12.822641   69333 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem, removing ...
	I0927 01:41:12.822650   69333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem
	I0927 01:41:12.822682   69333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem (1078 bytes)
	I0927 01:41:12.822756   69333 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem, removing ...
	I0927 01:41:12.822766   69333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem
	I0927 01:41:12.822792   69333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem (1123 bytes)
	I0927 01:41:12.822859   69333 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-612261 san=[127.0.0.1 192.168.72.129 localhost minikube old-k8s-version-612261]
	I0927 01:41:13.054632   69333 provision.go:177] copyRemoteCerts
	I0927 01:41:13.054706   69333 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 01:41:13.054740   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:13.057895   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.058296   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:13.058329   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.058478   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:41:13.058696   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:13.058907   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:41:13.059062   69333 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/old-k8s-version-612261/id_rsa Username:docker}
	I0927 01:41:13.146378   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0927 01:41:13.176435   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0927 01:41:13.208974   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0927 01:41:13.240179   69333 provision.go:87] duration metric: took 423.77487ms to configureAuth
	I0927 01:41:13.240211   69333 buildroot.go:189] setting minikube options for container-runtime
	I0927 01:41:13.240412   69333 config.go:182] Loaded profile config "old-k8s-version-612261": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0927 01:41:13.240498   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:13.243514   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.243963   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:13.243991   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.244174   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:41:13.244419   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:13.244641   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:13.244838   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:41:13.245039   69333 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:13.245263   69333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I0927 01:41:13.245284   69333 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 01:41:13.476519   69333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 01:41:13.476545   69333 machine.go:96] duration metric: took 1.000674334s to provisionDockerMachine
	I0927 01:41:13.476558   69333 start.go:293] postStartSetup for "old-k8s-version-612261" (driver="kvm2")
	I0927 01:41:13.476574   69333 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 01:41:13.476593   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	I0927 01:41:13.476914   69333 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 01:41:13.476942   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:13.479326   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.479662   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:13.479686   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.479835   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:41:13.480027   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:13.480182   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:41:13.480337   69333 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/old-k8s-version-612261/id_rsa Username:docker}
	I0927 01:41:13.563321   69333 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 01:41:13.567844   69333 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 01:41:13.567867   69333 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/addons for local assets ...
	I0927 01:41:13.567929   69333 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/files for local assets ...
	I0927 01:41:13.568012   69333 filesync.go:149] local asset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> 221382.pem in /etc/ssl/certs
	I0927 01:41:13.568109   69333 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 01:41:13.578453   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /etc/ssl/certs/221382.pem (1708 bytes)
	I0927 01:41:13.603888   69333 start.go:296] duration metric: took 127.316429ms for postStartSetup
	I0927 01:41:13.603924   69333 fix.go:56] duration metric: took 20.803606957s for fixHost
	I0927 01:41:13.603948   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:13.606500   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.606921   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:13.606949   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.607189   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:41:13.607419   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:13.607600   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:13.607726   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:41:13.608048   69333 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:13.608234   69333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I0927 01:41:13.608245   69333 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 01:41:13.708261   69333 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727401273.683707076
	
	I0927 01:41:13.708284   69333 fix.go:216] guest clock: 1727401273.683707076
	I0927 01:41:13.708293   69333 fix.go:229] Guest: 2024-09-27 01:41:13.683707076 +0000 UTC Remote: 2024-09-27 01:41:13.603929237 +0000 UTC m=+226.291347697 (delta=79.777839ms)
	I0927 01:41:13.708348   69333 fix.go:200] guest clock delta is within tolerance: 79.777839ms
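	For reference, the delta logged above is simply the guest wall-clock minus the host-side timestamp recorded for the same probe: 1727401273.683707076 - 1727401273.603929237 = 0.079777839 s (about 79.78 ms), which the fixer then checks against its drift tolerance before releasing the machines lock.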
	I0927 01:41:13.708357   69333 start.go:83] releasing machines lock for "old-k8s-version-612261", held for 20.90807118s
	I0927 01:41:13.708392   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	I0927 01:41:13.708665   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetIP
	I0927 01:41:13.711474   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.711873   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:13.711905   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.712035   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	I0927 01:41:13.712569   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	I0927 01:41:13.712748   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	I0927 01:41:13.712832   69333 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 01:41:13.712878   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:13.712949   69333 ssh_runner.go:195] Run: cat /version.json
	I0927 01:41:13.712971   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:13.715681   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.715820   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.716024   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:13.716043   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.716200   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:13.716225   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.716235   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:41:13.716370   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:41:13.716487   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:13.716548   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:13.716622   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:41:13.716728   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:41:13.716779   69333 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/old-k8s-version-612261/id_rsa Username:docker}
	I0927 01:41:13.716859   69333 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/old-k8s-version-612261/id_rsa Username:docker}
	I0927 01:41:13.826638   69333 ssh_runner.go:195] Run: systemctl --version
	I0927 01:41:13.832901   69333 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 01:41:13.986132   69333 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 01:41:13.992644   69333 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 01:41:13.992728   69333 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 01:41:14.008962   69333 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0927 01:41:14.008991   69333 start.go:495] detecting cgroup driver to use...
	I0927 01:41:14.009051   69333 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 01:41:14.025047   69333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 01:41:14.040807   69333 docker.go:217] disabling cri-docker service (if available) ...
	I0927 01:41:14.040857   69333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 01:41:14.055972   69333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 01:41:14.072654   69333 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 01:41:14.210869   69333 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 01:41:14.403536   69333 docker.go:233] disabling docker service ...
	I0927 01:41:14.403596   69333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 01:41:14.421549   69333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 01:41:14.436288   69333 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 01:41:14.569634   69333 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 01:41:14.701517   69333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 01:41:14.716794   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 01:41:14.740622   69333 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0927 01:41:14.740685   69333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:14.756563   69333 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 01:41:14.756626   69333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:14.768952   69333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:14.781314   69333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:14.793578   69333 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 01:41:14.806302   69333 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 01:41:14.822967   69333 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0927 01:41:14.823036   69333 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0927 01:41:14.837673   69333 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 01:41:14.848486   69333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:41:14.988181   69333 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0927 01:41:15.100581   69333 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 01:41:15.100664   69333 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 01:41:15.105816   69333 start.go:563] Will wait 60s for crictl version
	I0927 01:41:15.105883   69333 ssh_runner.go:195] Run: which crictl
	I0927 01:41:15.110375   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 01:41:15.154944   69333 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
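	The sed and tee commands above (pause image, cgroup manager and conmon cgroup in 02-crio.conf, plus /etc/crictl.yaml) can be spot-checked on the guest with a couple of read-only commands. This is only a sketch; the expected values are taken from the commands themselves rather than from an actual dump of the files.

	    # Sketch: verify the CRI-O drop-in edits performed above (paths as used by the sed commands in the log)
	    grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' /etc/crio/crio.conf.d/02-crio.conf
	    #   pause_image = "registry.k8s.io/pause:3.2"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"
	    cat /etc/crictl.yaml    # expected: runtime-endpoint: unix:///var/run/crio/crio.sock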
	I0927 01:41:15.155039   69333 ssh_runner.go:195] Run: crio --version
	I0927 01:41:15.188172   69333 ssh_runner.go:195] Run: crio --version
	I0927 01:41:15.220410   69333 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0927 01:41:11.033747   69234 pod_ready.go:103] pod "kube-scheduler-embed-certs-245911" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:13.038930   69234 pod_ready.go:103] pod "kube-scheduler-embed-certs-245911" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:15.035610   69234 pod_ready.go:93] pod "kube-scheduler-embed-certs-245911" in "kube-system" namespace has status "Ready":"True"
	I0927 01:41:15.035636   69234 pod_ready.go:82] duration metric: took 6.008237321s for pod "kube-scheduler-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:15.035645   69234 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:15.221508   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetIP
	I0927 01:41:15.224474   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:15.224855   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:15.224884   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:15.225126   69333 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0927 01:41:15.229555   69333 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 01:41:15.244862   69333 kubeadm.go:883] updating cluster {Name:old-k8s-version-612261 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0
ClusterName:old-k8s-version-612261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.129 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 01:41:15.245007   69333 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0927 01:41:15.245070   69333 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 01:41:15.298422   69333 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0927 01:41:15.298501   69333 ssh_runner.go:195] Run: which lz4
	I0927 01:41:15.302771   69333 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0927 01:41:15.307360   69333 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0927 01:41:15.307398   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0927 01:41:17.053272   69333 crio.go:462] duration metric: took 1.750548806s to copy over tarball
	I0927 01:41:17.053354   69333 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0927 01:41:13.732810   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .Start
	I0927 01:41:13.732979   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Ensuring networks are active...
	I0927 01:41:13.733749   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Ensuring network default is active
	I0927 01:41:13.734076   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Ensuring network mk-default-k8s-diff-port-368295 is active
	I0927 01:41:13.734425   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Getting domain xml...
	I0927 01:41:13.734997   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Creating domain...
	I0927 01:41:15.073415   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting to get IP...
	I0927 01:41:15.074278   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:15.074774   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:15.074850   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:15.074757   70444 retry.go:31] will retry after 231.356774ms: waiting for machine to come up
	I0927 01:41:15.308474   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:15.309030   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:15.309058   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:15.308989   70444 retry.go:31] will retry after 252.762152ms: waiting for machine to come up
	I0927 01:41:15.563638   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:15.564173   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:15.564212   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:15.564130   70444 retry.go:31] will retry after 341.067908ms: waiting for machine to come up
	I0927 01:41:15.906735   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:15.907138   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:15.907168   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:15.907091   70444 retry.go:31] will retry after 385.816363ms: waiting for machine to come up
	I0927 01:41:16.294523   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:16.295246   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:16.295268   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:16.295192   70444 retry.go:31] will retry after 575.812339ms: waiting for machine to come up
	I0927 01:41:16.873050   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:16.873574   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:16.873601   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:16.873520   70444 retry.go:31] will retry after 661.914855ms: waiting for machine to come up
	I0927 01:41:17.537039   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:17.537516   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:17.537544   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:17.537467   70444 retry.go:31] will retry after 959.195147ms: waiting for machine to come up
	I0927 01:41:17.043983   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:19.543159   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:20.066231   69333 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.012846531s)
	I0927 01:41:20.066257   69333 crio.go:469] duration metric: took 3.012954388s to extract the tarball
	I0927 01:41:20.066265   69333 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0927 01:41:20.112486   69333 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 01:41:20.152620   69333 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0927 01:41:20.152647   69333 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0927 01:41:20.152723   69333 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:41:20.152754   69333 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0927 01:41:20.152789   69333 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 01:41:20.152813   69333 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0927 01:41:20.152816   69333 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0927 01:41:20.152763   69333 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0927 01:41:20.152938   69333 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0927 01:41:20.152940   69333 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0927 01:41:20.154747   69333 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0927 01:41:20.154752   69333 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 01:41:20.154886   69333 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:41:20.154914   69333 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0927 01:41:20.154914   69333 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0927 01:41:20.154925   69333 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0927 01:41:20.154930   69333 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0927 01:41:20.154934   69333 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0927 01:41:20.316172   69333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0927 01:41:20.316352   69333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0927 01:41:20.319986   69333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0927 01:41:20.331224   69333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 01:41:20.342010   69333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0927 01:41:20.355732   69333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0927 01:41:20.355739   69333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0927 01:41:20.446420   69333 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0927 01:41:20.446477   69333 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0927 01:41:20.446529   69333 ssh_runner.go:195] Run: which crictl
	I0927 01:41:20.469134   69333 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0927 01:41:20.469183   69333 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0927 01:41:20.469231   69333 ssh_runner.go:195] Run: which crictl
	I0927 01:41:20.470229   69333 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0927 01:41:20.470264   69333 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0927 01:41:20.470310   69333 ssh_runner.go:195] Run: which crictl
	I0927 01:41:20.477952   69333 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0927 01:41:20.477991   69333 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 01:41:20.478034   69333 ssh_runner.go:195] Run: which crictl
	I0927 01:41:20.519340   69333 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0927 01:41:20.519391   69333 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0927 01:41:20.519454   69333 ssh_runner.go:195] Run: which crictl
	I0927 01:41:20.538237   69333 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0927 01:41:20.538256   69333 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0927 01:41:20.538293   69333 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0927 01:41:20.538298   69333 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0927 01:41:20.538338   69333 ssh_runner.go:195] Run: which crictl
	I0927 01:41:20.538343   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0927 01:41:20.538338   69333 ssh_runner.go:195] Run: which crictl
	I0927 01:41:20.538343   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0927 01:41:20.538389   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0927 01:41:20.538438   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 01:41:20.538489   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0927 01:41:20.656448   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0927 01:41:20.656508   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0927 01:41:20.656542   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0927 01:41:20.656573   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0927 01:41:20.656635   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0927 01:41:20.656704   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0927 01:41:20.656740   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 01:41:20.818479   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0927 01:41:20.818494   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0927 01:41:20.818581   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 01:41:20.878325   69333 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0927 01:41:20.878480   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0927 01:41:20.878494   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0927 01:41:20.878585   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0927 01:41:20.885061   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0927 01:41:20.885168   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0927 01:41:20.898628   69333 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0927 01:41:20.994147   69333 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0927 01:41:20.994175   69333 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0927 01:41:20.994211   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0927 01:41:21.016210   69333 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0927 01:41:21.016289   69333 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0927 01:41:21.035051   69333 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0927 01:41:21.374949   69333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:41:21.520726   69333 cache_images.go:92] duration metric: took 1.368058485s to LoadCachedImages
	W0927 01:41:21.520817   69333 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
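	The podman inspect / crictl rmi burst above is the per-image hash check behind LoadCachedImages: each required image is looked up by ID, and anything missing or mismatched is removed so it can be reloaded from the local cache directory (which is missing the kube-proxy file here, per the warning). Reduced to one image, the check is roughly the following sketch, using only commands that appear in the log and the pause:3.2 ID quoted above.

	    # Sketch of the per-image check-and-remove step (example image and ID taken from the log above)
	    img=registry.k8s.io/pause:3.2
	    want=80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c
	    have=$(sudo podman image inspect --format '{{.Id}}' "$img" 2>/dev/null)
	    if [ "$have" != "$want" ]; then
	      sudo /usr/bin/crictl rmi "$img"   # drop the stale/missing image, then fall back to the cache dir
	    fi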
	I0927 01:41:21.520833   69333 kubeadm.go:934] updating node { 192.168.72.129 8443 v1.20.0 crio true true} ...
	I0927 01:41:21.520951   69333 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-612261 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.129
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-612261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 01:41:21.521035   69333 ssh_runner.go:195] Run: crio config
	I0927 01:41:21.571651   69333 cni.go:84] Creating CNI manager for ""
	I0927 01:41:21.571677   69333 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:41:21.571688   69333 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 01:41:21.571712   69333 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.129 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-612261 NodeName:old-k8s-version-612261 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.129"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.129 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0927 01:41:21.571882   69333 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.129
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-612261"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.129
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.129"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
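	The block above is the multi-document kubeadm config (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that minikube stages as /var/tmp/minikube/kubeadm.yaml.new and later promotes to /var/tmp/minikube/kubeadm.yaml (see the scp and cp steps further down in this log). A minimal sketch for inspecting such a file offline, assuming the gopkg.in/yaml.v3 package is available (illustrative only, not code run by the test):

package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Path taken from the log above; adjust if inspecting a local copy.
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for i := 1; ; i++ {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break // no more YAML documents in the stream
		} else if err != nil {
			log.Fatalf("document %d: %v", i, err)
		}
		fmt.Printf("document %d: %s %s\n", i, doc.APIVersion, doc.Kind)
	}
}

	Run against the config above, this should print the four kinds, a quick way to confirm all sections were generated.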
	
	I0927 01:41:21.571958   69333 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0927 01:41:21.582735   69333 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 01:41:21.582802   69333 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0927 01:41:21.593329   69333 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0927 01:41:21.615040   69333 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 01:41:21.636564   69333 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0927 01:41:21.657275   69333 ssh_runner.go:195] Run: grep 192.168.72.129	control-plane.minikube.internal$ /etc/hosts
	I0927 01:41:21.661675   69333 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.129	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 01:41:21.674587   69333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:41:21.814300   69333 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 01:41:21.834133   69333 certs.go:68] Setting up /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261 for IP: 192.168.72.129
	I0927 01:41:21.834163   69333 certs.go:194] generating shared ca certs ...
	I0927 01:41:21.834182   69333 certs.go:226] acquiring lock for ca certs: {Name:mkdfc5b7e93f77f5ae72cc653545624244421aa5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:41:21.834380   69333 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key
	I0927 01:41:21.834437   69333 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key
	I0927 01:41:21.834450   69333 certs.go:256] generating profile certs ...
	I0927 01:41:21.834558   69333 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/client.key
	I0927 01:41:21.834630   69333 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/apiserver.key.a362196e
	I0927 01:41:21.834676   69333 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/proxy-client.key
	I0927 01:41:21.834819   69333 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem (1338 bytes)
	W0927 01:41:21.834859   69333 certs.go:480] ignoring /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138_empty.pem, impossibly tiny 0 bytes
	I0927 01:41:21.834873   69333 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem (1671 bytes)
	I0927 01:41:21.834904   69333 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem (1078 bytes)
	I0927 01:41:21.834937   69333 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem (1123 bytes)
	I0927 01:41:21.834973   69333 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem (1675 bytes)
	I0927 01:41:21.835023   69333 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem (1708 bytes)
	I0927 01:41:21.835864   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 01:41:21.866955   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0927 01:41:21.902991   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 01:41:21.928957   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0927 01:41:21.957505   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0927 01:41:21.984055   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0927 01:41:22.013191   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 01:41:22.041745   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0927 01:41:22.069680   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 01:41:22.104139   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem --> /usr/share/ca-certificates/22138.pem (1338 bytes)
	I0927 01:41:22.130348   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /usr/share/ca-certificates/221382.pem (1708 bytes)
	I0927 01:41:22.157976   69333 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 01:41:22.177818   69333 ssh_runner.go:195] Run: openssl version
	I0927 01:41:22.184389   69333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 01:41:22.196133   69333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:41:22.201047   69333 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:16 /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:41:22.201120   69333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:41:22.207245   69333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 01:41:22.219033   69333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22138.pem && ln -fs /usr/share/ca-certificates/22138.pem /etc/ssl/certs/22138.pem"
	I0927 01:41:22.230331   69333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22138.pem
	I0927 01:41:22.235000   69333 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 00:32 /usr/share/ca-certificates/22138.pem
	I0927 01:41:22.235054   69333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22138.pem
	I0927 01:41:22.240963   69333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22138.pem /etc/ssl/certs/51391683.0"
	I0927 01:41:22.252022   69333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221382.pem && ln -fs /usr/share/ca-certificates/221382.pem /etc/ssl/certs/221382.pem"
	I0927 01:41:22.263197   69333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221382.pem
	I0927 01:41:22.268023   69333 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 00:32 /usr/share/ca-certificates/221382.pem
	I0927 01:41:22.268100   69333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221382.pem
	I0927 01:41:22.274086   69333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221382.pem /etc/ssl/certs/3ec20f2e.0"
	I0927 01:41:22.285387   69333 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 01:41:22.290487   69333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0927 01:41:22.296953   69333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0927 01:41:22.303095   69333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0927 01:41:22.310001   69333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0927 01:41:22.316346   69333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0927 01:41:22.322559   69333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
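	The openssl x509 -checkend 86400 runs above ask whether each control-plane certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit flags the certificate as expiring soon. A hedged, illustrative equivalent in Go using only the standard library (not code minikube runs; the certificate path is reused from the log above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found in certificate file")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Same threshold as -checkend 86400: does the cert expire within 24h?
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Printf("certificate expires at %s (within 24h)\n", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Printf("certificate valid until %s\n", cert.NotAfter)
}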
	I0927 01:41:22.328931   69333 kubeadm.go:392] StartCluster: {Name:old-k8s-version-612261 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-612261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.129 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 01:41:22.329015   69333 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0927 01:41:22.329081   69333 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 01:41:18.498695   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:18.499234   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:18.499261   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:18.499187   70444 retry.go:31] will retry after 932.004828ms: waiting for machine to come up
	I0927 01:41:19.432487   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:19.432885   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:19.432912   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:19.432844   70444 retry.go:31] will retry after 1.595543978s: waiting for machine to come up
	I0927 01:41:21.030048   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:21.030572   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:21.030598   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:21.030526   70444 retry.go:31] will retry after 1.93010855s: waiting for machine to come up
	I0927 01:41:22.963833   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:22.964303   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:22.964334   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:22.964254   70444 retry.go:31] will retry after 2.81720725s: waiting for machine to come up
	I0927 01:41:21.757497   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:24.043965   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:22.368989   69333 cri.go:89] found id: ""
	I0927 01:41:22.369059   69333 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0927 01:41:22.379818   69333 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0927 01:41:22.379841   69333 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0927 01:41:22.379897   69333 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0927 01:41:22.392278   69333 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0927 01:41:22.393236   69333 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-612261" does not appear in /home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 01:41:22.393856   69333 kubeconfig.go:62] /home/jenkins/minikube-integration/19711-14935/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-612261" cluster setting kubeconfig missing "old-k8s-version-612261" context setting]
	I0927 01:41:22.394733   69333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/kubeconfig: {Name:mke01ed683bdb96463571316956510763878395f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:41:22.404625   69333 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0927 01:41:22.415376   69333 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.129
	I0927 01:41:22.415414   69333 kubeadm.go:1160] stopping kube-system containers ...
	I0927 01:41:22.415427   69333 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0927 01:41:22.415487   69333 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 01:41:22.452749   69333 cri.go:89] found id: ""
	I0927 01:41:22.452829   69333 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0927 01:41:22.469164   69333 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 01:41:22.480018   69333 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 01:41:22.480038   69333 kubeadm.go:157] found existing configuration files:
	
	I0927 01:41:22.480092   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 01:41:22.490501   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 01:41:22.490562   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 01:41:22.500330   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 01:41:22.509612   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 01:41:22.509681   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 01:41:22.520064   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 01:41:22.529864   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 01:41:22.529921   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 01:41:22.540563   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 01:41:22.556739   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 01:41:22.556797   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 01:41:22.572858   69333 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 01:41:22.583366   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:22.709007   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:23.468461   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:23.714890   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:23.865174   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:23.959048   69333 api_server.go:52] waiting for apiserver process to appear ...
	I0927 01:41:23.959140   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:24.460104   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:24.959462   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:25.460143   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:25.959473   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:26.460051   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:26.960121   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:25.784030   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:25.784429   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:25.784456   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:25.784393   70444 retry.go:31] will retry after 2.844872797s: waiting for machine to come up
	I0927 01:41:26.544176   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:29.042297   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:27.459491   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:27.959944   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:28.459636   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:28.959766   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:29.459410   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:29.959439   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:30.460176   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:30.959810   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:31.459492   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:31.959966   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:28.632445   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:28.632905   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:28.632930   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:28.632866   70444 retry.go:31] will retry after 3.566248996s: waiting for machine to come up
	I0927 01:41:32.200424   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.200804   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Found IP for machine: 192.168.61.83
	I0927 01:41:32.200832   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has current primary IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.200841   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Reserving static IP address...
	I0927 01:41:32.201137   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-368295", mac: "52:54:00:a3:b6:7a", ip: "192.168.61.83"} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:32.201151   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Reserved static IP address: 192.168.61.83
	I0927 01:41:32.201164   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | skip adding static IP to network mk-default-k8s-diff-port-368295 - found existing host DHCP lease matching {name: "default-k8s-diff-port-368295", mac: "52:54:00:a3:b6:7a", ip: "192.168.61.83"}
	I0927 01:41:32.201177   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | Getting to WaitForSSH function...
	I0927 01:41:32.201185   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for SSH to be available...
	I0927 01:41:32.203258   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.203542   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:32.203571   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.203674   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | Using SSH client type: external
	I0927 01:41:32.203704   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | Using SSH private key: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/default-k8s-diff-port-368295/id_rsa (-rw-------)
	I0927 01:41:32.203743   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.83 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19711-14935/.minikube/machines/default-k8s-diff-port-368295/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 01:41:32.203763   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | About to run SSH command:
	I0927 01:41:32.203783   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | exit 0
	I0927 01:41:32.327131   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | SSH cmd err, output: <nil>: 
	I0927 01:41:32.327499   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetConfigRaw
	I0927 01:41:32.328140   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetIP
	I0927 01:41:32.330387   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.330769   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:32.330801   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.331054   69534 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/config.json ...
	I0927 01:41:32.331257   69534 machine.go:93] provisionDockerMachine start ...
	I0927 01:41:32.331279   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:41:32.331505   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:41:32.333514   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.333799   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:32.333825   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.333940   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:41:32.334101   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:32.334267   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:32.334359   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:41:32.334509   69534 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:32.334700   69534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.83 22 <nil> <nil>}
	I0927 01:41:32.334709   69534 main.go:141] libmachine: About to run SSH command:
	hostname
	I0927 01:41:32.439884   69534 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0927 01:41:32.439921   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetMachineName
	I0927 01:41:32.440126   69534 buildroot.go:166] provisioning hostname "default-k8s-diff-port-368295"
	I0927 01:41:32.440149   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetMachineName
	I0927 01:41:32.440346   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:41:32.443385   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.443707   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:32.443742   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.443917   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:41:32.444093   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:32.444266   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:32.444427   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:41:32.444606   69534 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:32.444793   69534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.83 22 <nil> <nil>}
	I0927 01:41:32.444809   69534 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-368295 && echo "default-k8s-diff-port-368295" | sudo tee /etc/hostname
	I0927 01:41:32.570447   69534 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-368295
	
	I0927 01:41:32.570479   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:41:32.573194   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.573472   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:32.573512   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.573699   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:41:32.573942   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:32.574097   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:32.574261   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:41:32.574430   69534 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:32.574623   69534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.83 22 <nil> <nil>}
	I0927 01:41:32.574647   69534 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-368295' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-368295/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-368295' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 01:41:32.693082   69534 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 01:41:32.693107   69534 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19711-14935/.minikube CaCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19711-14935/.minikube}
	I0927 01:41:32.693140   69534 buildroot.go:174] setting up certificates
	I0927 01:41:32.693149   69534 provision.go:84] configureAuth start
	I0927 01:41:32.693160   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetMachineName
	I0927 01:41:32.693407   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetIP
	I0927 01:41:32.696156   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.696498   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:32.696522   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.696693   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:41:32.698894   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.699229   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:32.699257   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.699399   69534 provision.go:143] copyHostCerts
	I0927 01:41:32.699451   69534 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem, removing ...
	I0927 01:41:32.699464   69534 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem
	I0927 01:41:32.699530   69534 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem (1078 bytes)
	I0927 01:41:32.699639   69534 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem, removing ...
	I0927 01:41:32.699653   69534 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem
	I0927 01:41:32.699681   69534 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem (1123 bytes)
	I0927 01:41:32.699751   69534 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem, removing ...
	I0927 01:41:32.699761   69534 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem
	I0927 01:41:32.699785   69534 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem (1675 bytes)
	I0927 01:41:32.699848   69534 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-368295 san=[127.0.0.1 192.168.61.83 default-k8s-diff-port-368295 localhost minikube]
	I0927 01:41:32.887727   69534 provision.go:177] copyRemoteCerts
	I0927 01:41:32.887792   69534 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 01:41:32.887825   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:41:32.890435   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.890768   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:32.890797   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.890956   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:41:32.891128   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:32.891252   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:41:32.891373   69534 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/default-k8s-diff-port-368295/id_rsa Username:docker}
	I0927 01:41:32.973705   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0927 01:41:32.998434   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0927 01:41:33.023552   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0927 01:41:33.048884   69534 provision.go:87] duration metric: took 355.724209ms to configureAuth
	I0927 01:41:33.048910   69534 buildroot.go:189] setting minikube options for container-runtime
	I0927 01:41:33.049080   69534 config.go:182] Loaded profile config "default-k8s-diff-port-368295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 01:41:33.049149   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:41:33.051738   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.052080   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:33.052133   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.052364   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:41:33.052578   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:33.052726   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:33.052844   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:41:33.053031   69534 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:33.053265   69534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.83 22 <nil> <nil>}
	I0927 01:41:33.053283   69534 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 01:41:33.292126   69534 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 01:41:33.292148   69534 machine.go:96] duration metric: took 960.878234ms to provisionDockerMachine
	I0927 01:41:33.292159   69534 start.go:293] postStartSetup for "default-k8s-diff-port-368295" (driver="kvm2")
	I0927 01:41:33.292171   69534 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 01:41:33.292188   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:41:33.292511   69534 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 01:41:33.292539   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:41:33.295356   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.295724   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:33.295759   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.295936   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:41:33.296100   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:33.296314   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:41:33.296498   69534 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/default-k8s-diff-port-368295/id_rsa Username:docker}
	I0927 01:41:33.528391   68676 start.go:364] duration metric: took 56.042651871s to acquireMachinesLock for "no-preload-521072"
	I0927 01:41:33.528435   68676 start.go:96] Skipping create...Using existing machine configuration
	I0927 01:41:33.528445   68676 fix.go:54] fixHost starting: 
	I0927 01:41:33.528858   68676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:41:33.528890   68676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:41:33.547391   68676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38947
	I0927 01:41:33.547852   68676 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:41:33.548343   68676 main.go:141] libmachine: Using API Version  1
	I0927 01:41:33.548371   68676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:41:33.548713   68676 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:41:33.548907   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	I0927 01:41:33.549064   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetState
	I0927 01:41:33.550898   68676 fix.go:112] recreateIfNeeded on no-preload-521072: state=Stopped err=<nil>
	I0927 01:41:33.550923   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	W0927 01:41:33.551084   68676 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 01:41:33.553090   68676 out.go:177] * Restarting existing kvm2 VM for "no-preload-521072" ...
	I0927 01:41:33.554429   68676 main.go:141] libmachine: (no-preload-521072) Calling .Start
	I0927 01:41:33.554613   68676 main.go:141] libmachine: (no-preload-521072) Ensuring networks are active...
	I0927 01:41:33.555401   68676 main.go:141] libmachine: (no-preload-521072) Ensuring network default is active
	I0927 01:41:33.555858   68676 main.go:141] libmachine: (no-preload-521072) Ensuring network mk-no-preload-521072 is active
	I0927 01:41:33.556350   68676 main.go:141] libmachine: (no-preload-521072) Getting domain xml...
	I0927 01:41:33.557057   68676 main.go:141] libmachine: (no-preload-521072) Creating domain...
	I0927 01:41:34.830052   68676 main.go:141] libmachine: (no-preload-521072) Waiting to get IP...
	I0927 01:41:34.830807   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:34.831255   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:34.831340   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:34.831244   70637 retry.go:31] will retry after 267.615794ms: waiting for machine to come up
	I0927 01:41:33.378613   69534 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 01:41:33.383491   69534 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 01:41:33.383517   69534 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/addons for local assets ...
	I0927 01:41:33.383590   69534 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/files for local assets ...
	I0927 01:41:33.383695   69534 filesync.go:149] local asset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> 221382.pem in /etc/ssl/certs
	I0927 01:41:33.383810   69534 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 01:41:33.395134   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /etc/ssl/certs/221382.pem (1708 bytes)
	I0927 01:41:33.420441   69534 start.go:296] duration metric: took 128.270045ms for postStartSetup
	I0927 01:41:33.420481   69534 fix.go:56] duration metric: took 19.711948387s for fixHost
	I0927 01:41:33.420505   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:41:33.422860   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.423170   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:33.423198   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.423333   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:41:33.423517   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:33.423676   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:33.423820   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:41:33.423987   69534 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:33.424139   69534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.83 22 <nil> <nil>}
	I0927 01:41:33.424153   69534 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 01:41:33.528250   69534 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727401293.484458762
	
	I0927 01:41:33.528271   69534 fix.go:216] guest clock: 1727401293.484458762
	I0927 01:41:33.528278   69534 fix.go:229] Guest: 2024-09-27 01:41:33.484458762 +0000 UTC Remote: 2024-09-27 01:41:33.420486926 +0000 UTC m=+225.118319167 (delta=63.971836ms)
	I0927 01:41:33.528297   69534 fix.go:200] guest clock delta is within tolerance: 63.971836ms
	I0927 01:41:33.528303   69534 start.go:83] releasing machines lock for "default-k8s-diff-port-368295", held for 19.819799777s
	I0927 01:41:33.528328   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:41:33.528623   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetIP
	I0927 01:41:33.531282   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.531692   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:33.531724   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.531914   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:41:33.532476   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:41:33.532651   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:41:33.532742   69534 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 01:41:33.532784   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:41:33.532868   69534 ssh_runner.go:195] Run: cat /version.json
	I0927 01:41:33.532890   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:41:33.535432   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.535710   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.535820   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:33.535843   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.536030   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:41:33.536128   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:33.536153   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.536195   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:33.536351   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:41:33.536367   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:41:33.536513   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:33.536508   69534 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/default-k8s-diff-port-368295/id_rsa Username:docker}
	I0927 01:41:33.536634   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:41:33.536815   69534 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/default-k8s-diff-port-368295/id_rsa Username:docker}
	I0927 01:41:33.644679   69534 ssh_runner.go:195] Run: systemctl --version
	I0927 01:41:33.652386   69534 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 01:41:33.803821   69534 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 01:41:33.810620   69534 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 01:41:33.810678   69534 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 01:41:33.826938   69534 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0927 01:41:33.826963   69534 start.go:495] detecting cgroup driver to use...
	I0927 01:41:33.827028   69534 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 01:41:33.844572   69534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 01:41:33.859851   69534 docker.go:217] disabling cri-docker service (if available) ...
	I0927 01:41:33.859916   69534 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 01:41:33.874262   69534 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 01:41:33.888460   69534 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 01:41:34.011008   69534 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 01:41:34.161761   69534 docker.go:233] disabling docker service ...
	I0927 01:41:34.161855   69534 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 01:41:34.180621   69534 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 01:41:34.198472   69534 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 01:41:34.340892   69534 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 01:41:34.483708   69534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 01:41:34.498745   69534 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 01:41:34.518957   69534 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0927 01:41:34.519026   69534 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:34.530123   69534 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 01:41:34.530172   69534 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:34.545035   69534 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:34.555944   69534 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:34.566852   69534 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 01:41:34.577676   69534 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:34.589078   69534 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:34.608131   69534 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:34.619482   69534 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 01:41:34.629119   69534 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0927 01:41:34.629180   69534 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0927 01:41:34.643997   69534 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 01:41:34.656396   69534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:41:34.791856   69534 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0927 01:41:34.884774   69534 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 01:41:34.884831   69534 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 01:41:34.889590   69534 start.go:563] Will wait 60s for crictl version
	I0927 01:41:34.889633   69534 ssh_runner.go:195] Run: which crictl
	I0927 01:41:34.893330   69534 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 01:41:34.930031   69534 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0927 01:41:34.930141   69534 ssh_runner.go:195] Run: crio --version
	I0927 01:41:34.960912   69534 ssh_runner.go:195] Run: crio --version
	I0927 01:41:34.996060   69534 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0927 01:41:31.542525   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:33.546389   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:32.459727   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:32.959527   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:33.459351   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:33.959903   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:34.459444   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:34.959423   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:35.459435   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:35.959447   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:36.460148   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:36.959874   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:34.997457   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetIP
	I0927 01:41:35.000691   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:35.001081   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:35.001127   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:35.001322   69534 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0927 01:41:35.006115   69534 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 01:41:35.019817   69534 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-368295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-368295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.83 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 01:41:35.019983   69534 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 01:41:35.020045   69534 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 01:41:35.062533   69534 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0927 01:41:35.062595   69534 ssh_runner.go:195] Run: which lz4
	I0927 01:41:35.066897   69534 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0927 01:41:35.071178   69534 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0927 01:41:35.071216   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0927 01:41:36.563774   69534 crio.go:462] duration metric: took 1.496913722s to copy over tarball
	I0927 01:41:36.563866   69534 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0927 01:41:35.100818   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:35.101327   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:35.101354   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:35.101290   70637 retry.go:31] will retry after 244.193758ms: waiting for machine to come up
	I0927 01:41:35.347021   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:35.347674   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:35.347714   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:35.347650   70637 retry.go:31] will retry after 361.672884ms: waiting for machine to come up
	I0927 01:41:35.711206   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:35.711755   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:35.711788   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:35.711730   70637 retry.go:31] will retry after 406.084841ms: waiting for machine to come up
	I0927 01:41:36.119494   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:36.120026   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:36.120067   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:36.119978   70637 retry.go:31] will retry after 497.966133ms: waiting for machine to come up
	I0927 01:41:36.619859   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:36.620400   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:36.620428   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:36.620362   70637 retry.go:31] will retry after 765.975603ms: waiting for machine to come up
	I0927 01:41:37.387821   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:37.388502   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:37.388537   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:37.388453   70637 retry.go:31] will retry after 828.567445ms: waiting for machine to come up
	I0927 01:41:38.218462   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:38.218940   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:38.218974   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:38.218803   70637 retry.go:31] will retry after 1.269155563s: waiting for machine to come up
	I0927 01:41:39.489076   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:39.489557   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:39.489583   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:39.489514   70637 retry.go:31] will retry after 1.666481574s: waiting for machine to come up
	I0927 01:41:35.554859   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:38.043285   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:40.542499   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:37.459766   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:37.959594   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:38.459971   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:38.960093   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:39.459983   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:39.959812   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:40.460220   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:40.959253   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:41.459829   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:41.959864   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:38.667451   69534 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.10354947s)
	I0927 01:41:38.667477   69534 crio.go:469] duration metric: took 2.103669113s to extract the tarball
	I0927 01:41:38.667487   69534 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0927 01:41:38.704217   69534 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 01:41:38.747162   69534 crio.go:514] all images are preloaded for cri-o runtime.
	I0927 01:41:38.747187   69534 cache_images.go:84] Images are preloaded, skipping loading
	I0927 01:41:38.747197   69534 kubeadm.go:934] updating node { 192.168.61.83 8444 v1.31.1 crio true true} ...
	I0927 01:41:38.747323   69534 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-368295 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.83
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-368295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 01:41:38.747406   69534 ssh_runner.go:195] Run: crio config
	I0927 01:41:38.796481   69534 cni.go:84] Creating CNI manager for ""
	I0927 01:41:38.796510   69534 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:41:38.796522   69534 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 01:41:38.796549   69534 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.83 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-368295 NodeName:default-k8s-diff-port-368295 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.83"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.83 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0927 01:41:38.796726   69534 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.83
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-368295"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.83
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.83"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0927 01:41:38.796806   69534 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 01:41:38.807445   69534 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 01:41:38.807513   69534 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0927 01:41:38.817368   69534 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0927 01:41:38.834181   69534 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 01:41:38.851650   69534 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0927 01:41:38.869822   69534 ssh_runner.go:195] Run: grep 192.168.61.83	control-plane.minikube.internal$ /etc/hosts
	I0927 01:41:38.873868   69534 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.83	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 01:41:38.886422   69534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:41:39.022075   69534 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 01:41:39.038948   69534 certs.go:68] Setting up /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295 for IP: 192.168.61.83
	I0927 01:41:39.038982   69534 certs.go:194] generating shared ca certs ...
	I0927 01:41:39.039004   69534 certs.go:226] acquiring lock for ca certs: {Name:mkdfc5b7e93f77f5ae72cc653545624244421aa5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:41:39.039174   69534 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key
	I0927 01:41:39.039241   69534 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key
	I0927 01:41:39.039253   69534 certs.go:256] generating profile certs ...
	I0927 01:41:39.039402   69534 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/client.key
	I0927 01:41:39.039490   69534 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/apiserver.key.2edc0267
	I0927 01:41:39.039549   69534 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/proxy-client.key
	I0927 01:41:39.039701   69534 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem (1338 bytes)
	W0927 01:41:39.039773   69534 certs.go:480] ignoring /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138_empty.pem, impossibly tiny 0 bytes
	I0927 01:41:39.039789   69534 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem (1671 bytes)
	I0927 01:41:39.039825   69534 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem (1078 bytes)
	I0927 01:41:39.039860   69534 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem (1123 bytes)
	I0927 01:41:39.039889   69534 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem (1675 bytes)
	I0927 01:41:39.039950   69534 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem (1708 bytes)
	I0927 01:41:39.040814   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 01:41:39.080130   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0927 01:41:39.133365   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 01:41:39.169238   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0927 01:41:39.196619   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0927 01:41:39.227667   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0927 01:41:39.255240   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 01:41:39.280602   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0927 01:41:39.305695   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 01:41:39.329559   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem --> /usr/share/ca-certificates/22138.pem (1338 bytes)
	I0927 01:41:39.358555   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /usr/share/ca-certificates/221382.pem (1708 bytes)
	I0927 01:41:39.387030   69534 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 01:41:39.404111   69534 ssh_runner.go:195] Run: openssl version
	I0927 01:41:39.409879   69534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 01:41:39.420542   69534 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:41:39.425094   69534 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:16 /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:41:39.425151   69534 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:41:39.431225   69534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 01:41:39.442237   69534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22138.pem && ln -fs /usr/share/ca-certificates/22138.pem /etc/ssl/certs/22138.pem"
	I0927 01:41:39.453229   69534 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22138.pem
	I0927 01:41:39.458040   69534 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 00:32 /usr/share/ca-certificates/22138.pem
	I0927 01:41:39.458110   69534 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22138.pem
	I0927 01:41:39.464177   69534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22138.pem /etc/ssl/certs/51391683.0"
	I0927 01:41:39.475582   69534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221382.pem && ln -fs /usr/share/ca-certificates/221382.pem /etc/ssl/certs/221382.pem"
	I0927 01:41:39.486911   69534 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221382.pem
	I0927 01:41:39.491843   69534 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 00:32 /usr/share/ca-certificates/221382.pem
	I0927 01:41:39.491898   69534 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221382.pem
	I0927 01:41:39.497653   69534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221382.pem /etc/ssl/certs/3ec20f2e.0"
	I0927 01:41:39.508039   69534 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 01:41:39.512597   69534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0927 01:41:39.518557   69534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0927 01:41:39.524475   69534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0927 01:41:39.530616   69534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0927 01:41:39.536820   69534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0927 01:41:39.543487   69534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0927 01:41:39.549791   69534 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-368295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-368295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.83 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 01:41:39.549880   69534 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0927 01:41:39.549945   69534 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 01:41:39.594178   69534 cri.go:89] found id: ""
	I0927 01:41:39.594256   69534 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0927 01:41:39.605173   69534 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0927 01:41:39.605195   69534 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0927 01:41:39.605261   69534 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0927 01:41:39.615543   69534 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0927 01:41:39.616639   69534 kubeconfig.go:125] found "default-k8s-diff-port-368295" server: "https://192.168.61.83:8444"
	I0927 01:41:39.618793   69534 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0927 01:41:39.628422   69534 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.83
	I0927 01:41:39.628454   69534 kubeadm.go:1160] stopping kube-system containers ...
	I0927 01:41:39.628465   69534 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0927 01:41:39.628566   69534 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 01:41:39.673513   69534 cri.go:89] found id: ""
	I0927 01:41:39.673592   69534 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0927 01:41:39.690296   69534 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 01:41:39.699800   69534 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 01:41:39.699821   69534 kubeadm.go:157] found existing configuration files:
	
	I0927 01:41:39.699876   69534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0927 01:41:39.709235   69534 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 01:41:39.709294   69534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 01:41:39.719012   69534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0927 01:41:39.728197   69534 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 01:41:39.728262   69534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 01:41:39.737520   69534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0927 01:41:39.746592   69534 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 01:41:39.746653   69534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 01:41:39.756251   69534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0927 01:41:39.765026   69534 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 01:41:39.765090   69534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 01:41:39.774937   69534 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 01:41:39.784588   69534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:39.893259   69534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:40.625162   69534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:40.954926   69534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:41.025693   69534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:41.101915   69534 api_server.go:52] waiting for apiserver process to appear ...
	I0927 01:41:41.102006   69534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:41.602856   69534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:42.102942   69534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:42.602371   69534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:42.620056   69534 api_server.go:72] duration metric: took 1.518136259s to wait for apiserver process to appear ...
	I0927 01:41:42.620085   69534 api_server.go:88] waiting for apiserver healthz status ...
	I0927 01:41:42.620107   69534 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8444/healthz ...
	I0927 01:41:41.157254   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:41.157789   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:41.157817   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:41.157738   70637 retry.go:31] will retry after 1.495421187s: waiting for machine to come up
	I0927 01:41:42.655326   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:42.655826   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:42.655853   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:42.655771   70637 retry.go:31] will retry after 2.80191937s: waiting for machine to come up
	I0927 01:41:42.543732   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:45.043009   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:45.040496   69534 api_server.go:279] https://192.168.61.83:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0927 01:41:45.040525   69534 api_server.go:103] status: https://192.168.61.83:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0927 01:41:45.040542   69534 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8444/healthz ...
	I0927 01:41:45.079569   69534 api_server.go:279] https://192.168.61.83:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0927 01:41:45.079602   69534 api_server.go:103] status: https://192.168.61.83:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0927 01:41:45.120702   69534 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8444/healthz ...
	I0927 01:41:45.126461   69534 api_server.go:279] https://192.168.61.83:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0927 01:41:45.126488   69534 api_server.go:103] status: https://192.168.61.83:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0927 01:41:45.621130   69534 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8444/healthz ...
	I0927 01:41:45.629533   69534 api_server.go:279] https://192.168.61.83:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:41:45.629569   69534 api_server.go:103] status: https://192.168.61.83:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:41:46.121189   69534 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8444/healthz ...
	I0927 01:41:46.130806   69534 api_server.go:279] https://192.168.61.83:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:41:46.130842   69534 api_server.go:103] status: https://192.168.61.83:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:41:46.620334   69534 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8444/healthz ...
	I0927 01:41:46.625456   69534 api_server.go:279] https://192.168.61.83:8444/healthz returned 200:
	ok
	I0927 01:41:46.636549   69534 api_server.go:141] control plane version: v1.31.1
	I0927 01:41:46.636581   69534 api_server.go:131] duration metric: took 4.016488114s to wait for apiserver health ...
	I0927 01:41:46.636591   69534 cni.go:84] Creating CNI manager for ""
	I0927 01:41:46.636599   69534 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:41:46.638016   69534 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0927 01:41:42.459806   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:42.960200   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:43.459511   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:43.959467   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:44.459352   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:44.960147   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:45.459637   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:45.959535   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:46.459585   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:46.959579   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:46.639222   69534 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0927 01:41:46.651680   69534 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0927 01:41:46.671366   69534 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 01:41:46.684702   69534 system_pods.go:59] 8 kube-system pods found
	I0927 01:41:46.684740   69534 system_pods.go:61] "coredns-7c65d6cfc9-xtgdx" [6a5f97bd-0fbb-4220-a763-bb8ca6fab439] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0927 01:41:46.684752   69534 system_pods.go:61] "etcd-default-k8s-diff-port-368295" [2dbd4866-89f2-4a0c-ab8a-671ff0237bf3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0927 01:41:46.684761   69534 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-368295" [62865280-e996-45a9-a872-766e09d5b91c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0927 01:41:46.684774   69534 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-368295" [b0d06bec-2f5a-46e4-9d2d-b2ea7cdc7968] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0927 01:41:46.684781   69534 system_pods.go:61] "kube-proxy-xm2p8" [449495d5-a476-4abf-b6be-301b9ead92e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0927 01:41:46.684793   69534 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-368295" [71dadb93-c535-4ce3-8dd7-ffd4496bf0e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0927 01:41:46.684801   69534 system_pods.go:61] "metrics-server-6867b74b74-n9nsg" [fefb6977-44af-41f8-8a82-1dcd76374ac0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 01:41:46.684811   69534 system_pods.go:61] "storage-provisioner" [78bd924c-1d70-4eb6-9e2c-0e21ebc523dc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0927 01:41:46.684818   69534 system_pods.go:74] duration metric: took 13.431978ms to wait for pod list to return data ...
	I0927 01:41:46.684830   69534 node_conditions.go:102] verifying NodePressure condition ...
	I0927 01:41:46.690309   69534 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 01:41:46.690343   69534 node_conditions.go:123] node cpu capacity is 2
	I0927 01:41:46.690358   69534 node_conditions.go:105] duration metric: took 5.522911ms to run NodePressure ...
	I0927 01:41:46.690379   69534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:46.964511   69534 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0927 01:41:46.971731   69534 kubeadm.go:739] kubelet initialised
	I0927 01:41:46.971751   69534 kubeadm.go:740] duration metric: took 7.215476ms waiting for restarted kubelet to initialise ...
	I0927 01:41:46.971760   69534 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:41:46.978192   69534 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-xtgdx" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:45.459706   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:45.460242   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:45.460265   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:45.460161   70637 retry.go:31] will retry after 3.051133432s: waiting for machine to come up
	I0927 01:41:48.512758   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:48.513180   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:48.513208   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:48.513118   70637 retry.go:31] will retry after 3.478053984s: waiting for machine to come up
	I0927 01:41:47.544064   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:50.042360   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:47.459645   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:47.959756   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:48.460088   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:48.959526   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:49.459321   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:49.960102   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:50.460203   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:50.960225   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:51.460182   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:51.959343   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:48.985840   69534 pod_ready.go:103] pod "coredns-7c65d6cfc9-xtgdx" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:51.506449   69534 pod_ready.go:103] pod "coredns-7c65d6cfc9-xtgdx" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:52.484646   69534 pod_ready.go:93] pod "coredns-7c65d6cfc9-xtgdx" in "kube-system" namespace has status "Ready":"True"
	I0927 01:41:52.484672   69534 pod_ready.go:82] duration metric: took 5.506454681s for pod "coredns-7c65d6cfc9-xtgdx" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:52.484685   69534 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:51.994746   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:51.995201   68676 main.go:141] libmachine: (no-preload-521072) Found IP for machine: 192.168.50.246
	I0927 01:41:51.995219   68676 main.go:141] libmachine: (no-preload-521072) Reserving static IP address...
	I0927 01:41:51.995230   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has current primary IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:51.995651   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "no-preload-521072", mac: "52:54:00:85:27:74", ip: "192.168.50.246"} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:51.995677   68676 main.go:141] libmachine: (no-preload-521072) Reserved static IP address: 192.168.50.246
	I0927 01:41:51.995695   68676 main.go:141] libmachine: (no-preload-521072) DBG | skip adding static IP to network mk-no-preload-521072 - found existing host DHCP lease matching {name: "no-preload-521072", mac: "52:54:00:85:27:74", ip: "192.168.50.246"}
	I0927 01:41:51.995713   68676 main.go:141] libmachine: (no-preload-521072) DBG | Getting to WaitForSSH function...
	I0927 01:41:51.995727   68676 main.go:141] libmachine: (no-preload-521072) Waiting for SSH to be available...
	I0927 01:41:51.998245   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:51.998590   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:51.998616   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:51.998748   68676 main.go:141] libmachine: (no-preload-521072) DBG | Using SSH client type: external
	I0927 01:41:51.998810   68676 main.go:141] libmachine: (no-preload-521072) DBG | Using SSH private key: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/no-preload-521072/id_rsa (-rw-------)
	I0927 01:41:51.998850   68676 main.go:141] libmachine: (no-preload-521072) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.246 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19711-14935/.minikube/machines/no-preload-521072/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 01:41:51.998866   68676 main.go:141] libmachine: (no-preload-521072) DBG | About to run SSH command:
	I0927 01:41:51.998877   68676 main.go:141] libmachine: (no-preload-521072) DBG | exit 0
	I0927 01:41:52.131754   68676 main.go:141] libmachine: (no-preload-521072) DBG | SSH cmd err, output: <nil>: 
	I0927 01:41:52.132117   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetConfigRaw
	I0927 01:41:52.132724   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetIP
	I0927 01:41:52.135236   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.135588   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:52.135615   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.135866   68676 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/no-preload-521072/config.json ...
	I0927 01:41:52.136059   68676 machine.go:93] provisionDockerMachine start ...
	I0927 01:41:52.136078   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	I0927 01:41:52.136300   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:41:52.138644   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.139009   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:52.139035   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.139215   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:41:52.139406   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:52.139602   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:52.139760   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:41:52.139931   68676 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:52.140139   68676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.246 22 <nil> <nil>}
	I0927 01:41:52.140151   68676 main.go:141] libmachine: About to run SSH command:
	hostname
	I0927 01:41:52.255655   68676 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0927 01:41:52.255690   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetMachineName
	I0927 01:41:52.255952   68676 buildroot.go:166] provisioning hostname "no-preload-521072"
	I0927 01:41:52.255968   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetMachineName
	I0927 01:41:52.256122   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:41:52.258599   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.258963   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:52.258994   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.259108   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:41:52.259322   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:52.259494   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:52.259676   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:41:52.259835   68676 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:52.260008   68676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.246 22 <nil> <nil>}
	I0927 01:41:52.260023   68676 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-521072 && echo "no-preload-521072" | sudo tee /etc/hostname
	I0927 01:41:52.405255   68676 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-521072
	
	I0927 01:41:52.405314   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:41:52.408593   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.408927   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:52.408973   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.409346   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:41:52.409591   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:52.409786   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:52.409940   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:41:52.410094   68676 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:52.410331   68676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.246 22 <nil> <nil>}
	I0927 01:41:52.410356   68676 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-521072' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-521072/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-521072' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 01:41:52.538244   68676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 01:41:52.538276   68676 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19711-14935/.minikube CaCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19711-14935/.minikube}
	I0927 01:41:52.538321   68676 buildroot.go:174] setting up certificates
	I0927 01:41:52.538335   68676 provision.go:84] configureAuth start
	I0927 01:41:52.538350   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetMachineName
	I0927 01:41:52.538644   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetIP
	I0927 01:41:52.541913   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.542334   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:52.542372   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.542540   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:41:52.544773   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.545127   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:52.545163   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.545357   68676 provision.go:143] copyHostCerts
	I0927 01:41:52.545415   68676 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem, removing ...
	I0927 01:41:52.545427   68676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem
	I0927 01:41:52.545496   68676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem (1078 bytes)
	I0927 01:41:52.545614   68676 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem, removing ...
	I0927 01:41:52.545624   68676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem
	I0927 01:41:52.545655   68676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem (1123 bytes)
	I0927 01:41:52.545732   68676 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem, removing ...
	I0927 01:41:52.545742   68676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem
	I0927 01:41:52.545768   68676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem (1675 bytes)
	I0927 01:41:52.545834   68676 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem org=jenkins.no-preload-521072 san=[127.0.0.1 192.168.50.246 localhost minikube no-preload-521072]
	I0927 01:41:52.738375   68676 provision.go:177] copyRemoteCerts
	I0927 01:41:52.738434   68676 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 01:41:52.738459   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:41:52.741146   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.741439   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:52.741456   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.741630   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:41:52.741828   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:52.741961   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:41:52.742086   68676 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/no-preload-521072/id_rsa Username:docker}
	I0927 01:41:52.830330   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0927 01:41:52.854664   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0927 01:41:52.879246   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0927 01:41:52.902734   68676 provision.go:87] duration metric: took 364.385528ms to configureAuth
	I0927 01:41:52.902782   68676 buildroot.go:189] setting minikube options for container-runtime
	I0927 01:41:52.903017   68676 config.go:182] Loaded profile config "no-preload-521072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 01:41:52.903109   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:41:52.906143   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.906495   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:52.906526   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.906699   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:41:52.906917   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:52.907086   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:52.907211   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:41:52.907426   68676 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:52.907625   68676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.246 22 <nil> <nil>}
	I0927 01:41:52.907640   68676 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 01:41:53.162936   68676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 01:41:53.162960   68676 machine.go:96] duration metric: took 1.026891152s to provisionDockerMachine
	I0927 01:41:53.162971   68676 start.go:293] postStartSetup for "no-preload-521072" (driver="kvm2")
	I0927 01:41:53.162980   68676 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 01:41:53.162994   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	I0927 01:41:53.163325   68676 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 01:41:53.163360   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:41:53.166007   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:53.166478   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:53.166516   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:53.166726   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:41:53.166919   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:53.167103   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:41:53.167253   68676 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/no-preload-521072/id_rsa Username:docker}
	I0927 01:41:53.254620   68676 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 01:41:53.259139   68676 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 01:41:53.259160   68676 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/addons for local assets ...
	I0927 01:41:53.259236   68676 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/files for local assets ...
	I0927 01:41:53.259341   68676 filesync.go:149] local asset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> 221382.pem in /etc/ssl/certs
	I0927 01:41:53.259465   68676 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 01:41:53.269711   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /etc/ssl/certs/221382.pem (1708 bytes)
	I0927 01:41:53.294563   68676 start.go:296] duration metric: took 131.58032ms for postStartSetup
	I0927 01:41:53.294602   68676 fix.go:56] duration metric: took 19.766156729s for fixHost
	I0927 01:41:53.294626   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:41:53.297597   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:53.297897   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:53.297928   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:53.298092   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:41:53.298275   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:53.298460   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:53.298632   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:41:53.298821   68676 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:53.298997   68676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.246 22 <nil> <nil>}
	I0927 01:41:53.299010   68676 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 01:41:53.416459   68676 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727401313.370238189
	
	I0927 01:41:53.416488   68676 fix.go:216] guest clock: 1727401313.370238189
	I0927 01:41:53.416497   68676 fix.go:229] Guest: 2024-09-27 01:41:53.370238189 +0000 UTC Remote: 2024-09-27 01:41:53.294607439 +0000 UTC m=+358.400757430 (delta=75.63075ms)
	I0927 01:41:53.416521   68676 fix.go:200] guest clock delta is within tolerance: 75.63075ms
	I0927 01:41:53.416542   68676 start.go:83] releasing machines lock for "no-preload-521072", held for 19.888127741s
	I0927 01:41:53.416581   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	I0927 01:41:53.416835   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetIP
	I0927 01:41:53.419800   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:53.420124   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:53.420153   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:53.420309   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	I0927 01:41:53.420730   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	I0927 01:41:53.420905   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	I0927 01:41:53.420988   68676 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 01:41:53.421036   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:41:53.421126   68676 ssh_runner.go:195] Run: cat /version.json
	I0927 01:41:53.421148   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:41:53.423529   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:53.423882   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:53.423916   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:53.423937   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:53.424023   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:41:53.424180   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:53.424308   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:41:53.424365   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:53.424412   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:53.424464   68676 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/no-preload-521072/id_rsa Username:docker}
	I0927 01:41:53.424567   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:41:53.424701   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:53.424838   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:41:53.424990   68676 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/no-preload-521072/id_rsa Username:docker}
	I0927 01:41:53.527586   68676 ssh_runner.go:195] Run: systemctl --version
	I0927 01:41:53.533685   68676 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 01:41:53.680850   68676 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 01:41:53.686769   68676 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 01:41:53.686831   68676 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 01:41:53.702686   68676 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0927 01:41:53.702709   68676 start.go:495] detecting cgroup driver to use...
	I0927 01:41:53.702787   68676 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 01:41:53.720756   68676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 01:41:53.736843   68676 docker.go:217] disabling cri-docker service (if available) ...
	I0927 01:41:53.736920   68676 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 01:41:53.752063   68676 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 01:41:53.768140   68676 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 01:41:53.890040   68676 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 01:41:54.044033   68676 docker.go:233] disabling docker service ...
	I0927 01:41:54.044100   68676 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 01:41:54.060061   68676 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 01:41:54.073201   68676 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 01:41:54.225559   68676 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 01:41:54.367269   68676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 01:41:54.381517   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 01:41:54.401099   68676 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0927 01:41:54.401164   68676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:54.412620   68676 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 01:41:54.412687   68676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:54.425942   68676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:54.437451   68676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:54.449115   68676 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 01:41:54.460383   68676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:54.471393   68676 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:54.489649   68676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:54.500699   68676 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 01:41:54.511012   68676 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0927 01:41:54.511061   68676 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0927 01:41:54.524738   68676 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 01:41:54.535353   68676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:41:54.672416   68676 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0927 01:41:54.763423   68676 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 01:41:54.763506   68676 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 01:41:54.768758   68676 start.go:563] Will wait 60s for crictl version
	I0927 01:41:54.768823   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:41:54.772980   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 01:41:54.814375   68676 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0927 01:41:54.814460   68676 ssh_runner.go:195] Run: crio --version
	I0927 01:41:54.844002   68676 ssh_runner.go:195] Run: crio --version
	I0927 01:41:54.876692   68676 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0927 01:41:54.877765   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetIP
	I0927 01:41:54.880320   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:54.880817   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:54.880852   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:54.881008   68676 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0927 01:41:54.885225   68676 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 01:41:54.897661   68676 kubeadm.go:883] updating cluster {Name:no-preload-521072 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-521072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 01:41:54.897768   68676 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 01:41:54.897810   68676 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 01:41:52.542326   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:54.543472   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:52.459589   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:52.960231   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:53.459448   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:53.960120   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:54.460016   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:54.959681   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:55.459321   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:55.959819   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:56.459221   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:56.959296   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:54.491390   69534 pod_ready.go:103] pod "etcd-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:56.997932   69534 pod_ready.go:103] pod "etcd-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:54.937979   68676 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0927 01:41:54.938000   68676 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0927 01:41:54.938055   68676 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0927 01:41:54.938088   68676 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:41:54.938103   68676 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0927 01:41:54.938124   68676 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0927 01:41:54.938101   68676 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0927 01:41:54.938180   68676 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0927 01:41:54.938069   68676 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0927 01:41:54.938088   68676 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0927 01:41:54.939611   68676 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0927 01:41:54.939853   68676 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:41:54.939867   68676 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0927 01:41:54.939872   68676 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0927 01:41:54.939875   68676 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0927 01:41:54.939868   68676 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0927 01:41:54.939932   68676 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0927 01:41:54.939954   68676 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0927 01:41:55.100149   68676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0927 01:41:55.104432   68676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0927 01:41:55.122220   68676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0927 01:41:55.146745   68676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0927 01:41:55.148808   68676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0927 01:41:55.159749   68676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0927 01:41:55.194662   68676 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0927 01:41:55.194710   68676 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0927 01:41:55.194764   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:41:55.218262   68676 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0927 01:41:55.218302   68676 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0927 01:41:55.218348   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:41:55.275530   68676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0927 01:41:55.339428   68676 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0927 01:41:55.339476   68676 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0927 01:41:55.339488   68676 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0927 01:41:55.339526   68676 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0927 01:41:55.339554   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:41:55.339558   68676 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0927 01:41:55.339569   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:41:55.339573   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0927 01:41:55.339584   68676 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0927 01:41:55.339619   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:41:55.339625   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0927 01:41:55.339689   68676 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0927 01:41:55.339733   68676 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0927 01:41:55.339772   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:41:55.392986   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0927 01:41:55.393033   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0927 01:41:55.403596   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0927 01:41:55.403658   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0927 01:41:55.403601   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0927 01:41:55.404180   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0927 01:41:55.528983   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0927 01:41:55.529008   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0927 01:41:55.529013   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0927 01:41:55.556122   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0927 01:41:55.556146   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0927 01:41:55.559222   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0927 01:41:55.668914   68676 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0927 01:41:55.669041   68676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0927 01:41:55.671951   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0927 01:41:55.672026   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0927 01:41:55.675810   68676 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0927 01:41:55.675854   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0927 01:41:55.675883   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0927 01:41:55.675910   68676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0927 01:41:55.687199   68676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0927 01:41:55.687234   68676 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0927 01:41:55.687294   68676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0927 01:41:55.766777   68676 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0927 01:41:55.766775   68676 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0927 01:41:55.766894   68676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0927 01:41:55.766901   68676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0927 01:41:55.776811   68676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0927 01:41:55.776824   68676 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0927 01:41:55.776933   68676 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0927 01:41:55.777033   68676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0927 01:41:55.776938   68676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0927 01:41:56.125882   68676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:41:57.825382   68676 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1: (2.048325373s)
	I0927 01:41:57.825460   68676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0927 01:41:57.825396   68676 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: (2.048309349s)
	I0927 01:41:57.825483   68676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0927 01:41:57.825401   68676 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.699485021s)
	I0927 01:41:57.825517   68676 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0927 01:41:57.825520   68676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.138185505s)
	I0927 01:41:57.825540   68676 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0927 01:41:57.825548   68676 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:41:57.825411   68676 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.058505151s)
	I0927 01:41:57.825566   68676 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0927 01:41:57.825573   68676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0927 01:41:57.825414   68676 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.058497946s)
	I0927 01:41:57.825584   68676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0927 01:41:57.825596   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:41:57.825613   68676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0927 01:41:59.788391   68676 ssh_runner.go:235] Completed: which crictl: (1.962775321s)
	I0927 01:41:59.788412   68676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.962779963s)
	I0927 01:41:59.788429   68676 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0927 01:41:59.788457   68676 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0927 01:41:59.788462   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:41:59.788499   68676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0927 01:41:57.043267   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:59.542589   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:57.459172   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:57.960231   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:58.459323   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:58.960219   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:59.459916   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:59.959858   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:00.460249   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:00.959246   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:01.459839   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:01.959224   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:59.490443   69534 pod_ready.go:103] pod "etcd-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:59.992727   69534 pod_ready.go:93] pod "etcd-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"True"
	I0927 01:41:59.992753   69534 pod_ready.go:82] duration metric: took 7.508057707s for pod "etcd-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:59.992766   69534 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:59.998326   69534 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"True"
	I0927 01:41:59.998357   69534 pod_ready.go:82] duration metric: took 5.584215ms for pod "kube-apiserver-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:59.998372   69534 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:00.003176   69534 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"True"
	I0927 01:42:00.003197   69534 pod_ready.go:82] duration metric: took 4.816939ms for pod "kube-controller-manager-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:00.003209   69534 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-xm2p8" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:00.009089   69534 pod_ready.go:93] pod "kube-proxy-xm2p8" in "kube-system" namespace has status "Ready":"True"
	I0927 01:42:00.009110   69534 pod_ready.go:82] duration metric: took 5.893939ms for pod "kube-proxy-xm2p8" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:00.009119   69534 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:00.014172   69534 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"True"
	I0927 01:42:00.014197   69534 pod_ready.go:82] duration metric: took 5.072107ms for pod "kube-scheduler-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:00.014209   69534 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:02.021372   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
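
The pod_ready lines from processes 69234 and 69534 above are minikube polling kube-system pods (here metrics-server) until each reports the Ready condition, with a 4m0s cap per pod. Below is a minimal sketch of the same wait done directly with client-go; it assumes client-go is available and a kubeconfig at the default location, and the pod name is copied from the log purely for illustration.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod has the Ready condition set to True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Pod name taken from the log; any kube-system pod is checked the same way.
	name := "metrics-server-6867b74b74-n9nsg"
	deadline := time.Now().Add(4 * time.Minute) // the log waits "up to 4m0s"
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println(name, "is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println(name, "did not become Ready in time")
}
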
	I0927 01:42:01.758278   68676 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.969794291s)
	I0927 01:42:01.758369   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:42:01.758392   68676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.969869427s)
	I0927 01:42:01.758415   68676 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0927 01:42:01.758445   68676 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0927 01:42:01.758494   68676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0927 01:42:01.796910   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:42:03.934871   68676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (2.176354046s)
	I0927 01:42:03.934903   68676 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0927 01:42:03.934921   68676 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0927 01:42:03.934927   68676 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.137986898s)
	I0927 01:42:03.934972   68676 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0927 01:42:03.934994   68676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0927 01:42:03.935050   68676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0927 01:42:03.939942   68676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0927 01:42:02.042617   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:04.042848   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:02.460232   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:02.959635   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:03.459610   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:03.959412   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:04.459857   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:04.959495   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:05.459972   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:05.959931   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:06.459460   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:06.959627   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:04.021759   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:06.521921   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:07.308972   68676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.373952677s)
	I0927 01:42:07.308999   68676 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0927 01:42:07.309024   68676 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0927 01:42:07.309070   68676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0927 01:42:09.378517   68676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.06942074s)
	I0927 01:42:09.378550   68676 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0927 01:42:09.378579   68676 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0927 01:42:09.378629   68676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0927 01:42:06.546731   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:09.044481   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:07.459395   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:07.959574   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:08.460234   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:08.959281   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:09.459240   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:09.959429   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:10.459865   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:10.959431   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:11.459459   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:11.959447   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:09.020456   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:11.021689   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:10.030049   68676 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0927 01:42:10.030100   68676 cache_images.go:123] Successfully loaded all cached images
	I0927 01:42:10.030106   68676 cache_images.go:92] duration metric: took 15.09209404s to LoadCachedImages
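
The cache_images flow that finishes here works image by image: minikube inspects the CRI-O/podman store over SSH to see whether the pinned image ID is already present, removes any stale tag with crictl rmi, copies the cached tarball under /var/lib/minikube/images if needed, and loads it with podman load. A minimal sketch of that check-then-load step, assuming podman is on PATH on the node; the image and tarball names are taken from the log for illustration.

package main

import (
	"fmt"
	"os/exec"
)

// loadIfMissing mirrors the per-image step in the log: inspect the image in
// the local store and, only if it is absent, load it from a cached tarball.
func loadIfMissing(image, tarball string) error {
	// "podman image inspect" exits non-zero when the image is not present.
	if exec.Command("podman", "image", "inspect", "--format", "{{.Id}}", image).Run() == nil {
		fmt.Printf("%s already present, skipping load\n", image)
		return nil
	}
	out, err := exec.Command("podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s: %v: %s", tarball, err, out)
	}
	fmt.Printf("loaded %s from %s\n", image, tarball)
	return nil
}

func main() {
	// Image/tarball pair copied from the log above; adjust for other images.
	_ = loadIfMissing("registry.k8s.io/kube-scheduler:v1.31.1",
		"/var/lib/minikube/images/kube-scheduler_v1.31.1")
}
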
	I0927 01:42:10.030118   68676 kubeadm.go:934] updating node { 192.168.50.246 8443 v1.31.1 crio true true} ...
	I0927 01:42:10.030211   68676 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-521072 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.246
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-521072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 01:42:10.030273   68676 ssh_runner.go:195] Run: crio config
	I0927 01:42:10.078318   68676 cni.go:84] Creating CNI manager for ""
	I0927 01:42:10.078342   68676 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:42:10.078351   68676 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 01:42:10.078370   68676 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.246 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-521072 NodeName:no-preload-521072 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.246"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.246 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0927 01:42:10.078506   68676 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.246
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-521072"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.246
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.246"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0927 01:42:10.078580   68676 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 01:42:10.089137   68676 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 01:42:10.089212   68676 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0927 01:42:10.098310   68676 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0927 01:42:10.116172   68676 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 01:42:10.134642   68676 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
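
The 2161-byte file written here is the multi-document kubeadm/kubelet/kube-proxy config printed above. One way to sanity-check such a file before kubeadm consumes it is to decode each YAML document and confirm the expected kinds are present; a minimal sketch, assuming gopkg.in/yaml.v3 is available (the library choice is an assumption, not what minikube itself uses).

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Path taken from the log above.
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Each YAML document declares apiVersion and kind; decode them in turn.
	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}
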
	I0927 01:42:10.152442   68676 ssh_runner.go:195] Run: grep 192.168.50.246	control-plane.minikube.internal$ /etc/hosts
	I0927 01:42:10.156477   68676 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.246	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 01:42:10.169007   68676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:42:10.288382   68676 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 01:42:10.306047   68676 certs.go:68] Setting up /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/no-preload-521072 for IP: 192.168.50.246
	I0927 01:42:10.306077   68676 certs.go:194] generating shared ca certs ...
	I0927 01:42:10.306096   68676 certs.go:226] acquiring lock for ca certs: {Name:mkdfc5b7e93f77f5ae72cc653545624244421aa5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:42:10.306276   68676 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key
	I0927 01:42:10.306331   68676 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key
	I0927 01:42:10.306350   68676 certs.go:256] generating profile certs ...
	I0927 01:42:10.306453   68676 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/no-preload-521072/client.key
	I0927 01:42:10.306553   68676 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/no-preload-521072/apiserver.key.735097eb
	I0927 01:42:10.306613   68676 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/no-preload-521072/proxy-client.key
	I0927 01:42:10.306761   68676 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem (1338 bytes)
	W0927 01:42:10.306797   68676 certs.go:480] ignoring /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138_empty.pem, impossibly tiny 0 bytes
	I0927 01:42:10.306808   68676 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem (1671 bytes)
	I0927 01:42:10.306833   68676 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem (1078 bytes)
	I0927 01:42:10.306854   68676 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem (1123 bytes)
	I0927 01:42:10.306878   68676 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem (1675 bytes)
	I0927 01:42:10.306916   68676 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem (1708 bytes)
	I0927 01:42:10.307598   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 01:42:10.344570   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0927 01:42:10.386834   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 01:42:10.432022   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0927 01:42:10.462348   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/no-preload-521072/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0927 01:42:10.490015   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/no-preload-521072/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0927 01:42:10.518144   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/no-preload-521072/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 01:42:10.545290   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/no-preload-521072/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0927 01:42:10.572460   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 01:42:10.597526   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem --> /usr/share/ca-certificates/22138.pem (1338 bytes)
	I0927 01:42:10.622287   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /usr/share/ca-certificates/221382.pem (1708 bytes)
	I0927 01:42:10.646020   68676 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 01:42:10.662972   68676 ssh_runner.go:195] Run: openssl version
	I0927 01:42:10.668844   68676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22138.pem && ln -fs /usr/share/ca-certificates/22138.pem /etc/ssl/certs/22138.pem"
	I0927 01:42:10.680020   68676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22138.pem
	I0927 01:42:10.684620   68676 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 00:32 /usr/share/ca-certificates/22138.pem
	I0927 01:42:10.684678   68676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22138.pem
	I0927 01:42:10.690694   68676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22138.pem /etc/ssl/certs/51391683.0"
	I0927 01:42:10.702115   68676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221382.pem && ln -fs /usr/share/ca-certificates/221382.pem /etc/ssl/certs/221382.pem"
	I0927 01:42:10.713424   68676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221382.pem
	I0927 01:42:10.717918   68676 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 00:32 /usr/share/ca-certificates/221382.pem
	I0927 01:42:10.717971   68676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221382.pem
	I0927 01:42:10.723601   68676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221382.pem /etc/ssl/certs/3ec20f2e.0"
	I0927 01:42:10.734870   68676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 01:42:10.747370   68676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:42:10.752016   68676 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:16 /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:42:10.752072   68676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:42:10.757964   68676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 01:42:10.769560   68676 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 01:42:10.774457   68676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0927 01:42:10.780719   68676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0927 01:42:10.786653   68676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0927 01:42:10.792671   68676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0927 01:42:10.798674   68676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0927 01:42:10.804910   68676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
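
The openssl x509 -checkend 86400 runs above verify that none of the control-plane certificates expire within the next 24 hours before the cluster is restarted. The same check can be expressed natively; a minimal sketch, with the certificate path copied from the log.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file expires
// within the given window (the log uses 86400 seconds, i.e. 24h).
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}
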
	I0927 01:42:10.811007   68676 kubeadm.go:392] StartCluster: {Name:no-preload-521072 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:no-preload-521072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 01:42:10.811114   68676 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0927 01:42:10.811178   68676 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 01:42:10.851017   68676 cri.go:89] found id: ""
	I0927 01:42:10.851084   68676 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0927 01:42:10.864997   68676 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0927 01:42:10.865016   68676 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0927 01:42:10.865062   68676 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0927 01:42:10.877088   68676 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0927 01:42:10.878133   68676 kubeconfig.go:125] found "no-preload-521072" server: "https://192.168.50.246:8443"
	I0927 01:42:10.880637   68676 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0927 01:42:10.893554   68676 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.246
	I0927 01:42:10.893578   68676 kubeadm.go:1160] stopping kube-system containers ...
	I0927 01:42:10.893592   68676 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0927 01:42:10.893629   68676 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 01:42:10.935734   68676 cri.go:89] found id: ""
	I0927 01:42:10.935794   68676 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0927 01:42:10.954141   68676 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 01:42:10.965345   68676 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 01:42:10.965363   68676 kubeadm.go:157] found existing configuration files:
	
	I0927 01:42:10.965413   68676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 01:42:10.975561   68676 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 01:42:10.975628   68676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 01:42:10.985747   68676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 01:42:10.995026   68676 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 01:42:10.995089   68676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 01:42:11.006650   68676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 01:42:11.016964   68676 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 01:42:11.017034   68676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 01:42:11.028756   68676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 01:42:11.039002   68676 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 01:42:11.039072   68676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 01:42:11.050382   68676 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 01:42:11.060839   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:42:11.177447   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:42:12.481118   68676 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.303633907s)
	I0927 01:42:12.481149   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:42:12.706344   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:42:12.774938   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
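
With the stale kubeconfig files removed, the control plane is rebuilt by running individual kubeadm init phases in order (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than a full kubeadm init. A minimal sketch of driving that same phase sequence is below; the binary and config paths are copied from the log, and running without sudo is a simplification.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.31.1/kubeadm"
	config := "/var/tmp/minikube/kubeadm.yaml"
	// Phase order copied from the log above.
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, phase := range phases {
		args := append(phase, "--config", config)
		out, err := exec.Command(kubeadm, args...).CombinedOutput()
		if err != nil {
			panic(fmt.Sprintf("%v failed: %v\n%s", phase, err, out))
		}
		fmt.Printf("kubeadm %v ok\n", phase)
	}
}
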
	I0927 01:42:12.866467   68676 api_server.go:52] waiting for apiserver process to appear ...
	I0927 01:42:12.866552   68676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:13.366860   68676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:13.866951   68676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:13.882411   68676 api_server.go:72] duration metric: took 1.015943274s to wait for apiserver process to appear ...
	I0927 01:42:13.882435   68676 api_server.go:88] waiting for apiserver healthz status ...
	I0927 01:42:13.882457   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:42:13.882963   68676 api_server.go:269] stopped: https://192.168.50.246:8443/healthz: Get "https://192.168.50.246:8443/healthz": dial tcp 192.168.50.246:8443: connect: connection refused
	I0927 01:42:14.382489   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:42:11.543818   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:14.042536   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:12.459771   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:12.959727   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:13.459428   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:13.959255   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:14.460003   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:14.959853   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:15.460237   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:15.959974   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:16.459420   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:16.959321   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:13.527793   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:16.023080   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:17.124839   68676 api_server.go:279] https://192.168.50.246:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0927 01:42:17.124867   68676 api_server.go:103] status: https://192.168.50.246:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0927 01:42:17.124885   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:42:17.174869   68676 api_server.go:279] https://192.168.50.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:42:17.174905   68676 api_server.go:103] status: https://192.168.50.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
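
The 403 and 500 responses above are the normal progression while the restarted apiserver comes up: the unauthenticated probe is rejected at first (likely because the RBAC bootstrap roles that permit it have not been created yet), and /healthz keeps returning 500 while individual post-start hooks (apiextensions controllers, bootstrap roles, priority classes, apiservice registration) are still pending. A minimal sketch of the polling loop itself, skipping TLS verification because only reachability and status code matter here.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint until it answers 200 or
// the deadline passes, printing the hook report on intermediate failures.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		} else {
			fmt.Println("healthz not reachable yet:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	// Endpoint taken from the log above.
	if err := waitHealthz("https://192.168.50.246:8443/healthz", 2*time.Minute); err != nil {
		panic(err)
	}
}
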
	I0927 01:42:17.383128   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:42:17.389594   68676 api_server.go:279] https://192.168.50.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:42:17.389629   68676 api_server.go:103] status: https://192.168.50.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:42:17.883197   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:42:17.888706   68676 api_server.go:279] https://192.168.50.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:42:17.888734   68676 api_server.go:103] status: https://192.168.50.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:42:18.382982   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:42:18.387847   68676 api_server.go:279] https://192.168.50.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:42:18.387877   68676 api_server.go:103] status: https://192.168.50.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:42:18.882844   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:42:18.887144   68676 api_server.go:279] https://192.168.50.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:42:18.887178   68676 api_server.go:103] status: https://192.168.50.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:42:19.382711   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:42:19.388007   68676 api_server.go:279] https://192.168.50.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:42:19.388037   68676 api_server.go:103] status: https://192.168.50.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:42:19.882613   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:42:19.886781   68676 api_server.go:279] https://192.168.50.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:42:19.886801   68676 api_server.go:103] status: https://192.168.50.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:42:20.382907   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:42:20.387083   68676 api_server.go:279] https://192.168.50.246:8443/healthz returned 200:
	ok
	I0927 01:42:20.393697   68676 api_server.go:141] control plane version: v1.31.1
	I0927 01:42:20.393725   68676 api_server.go:131] duration metric: took 6.511280572s to wait for apiserver health ...
	I0927 01:42:20.393735   68676 cni.go:84] Creating CNI manager for ""
	I0927 01:42:20.393743   68676 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:42:20.395270   68676 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0927 01:42:16.543525   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:19.041726   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:20.396770   68676 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0927 01:42:20.407891   68676 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0927 01:42:20.427815   68676 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 01:42:20.436940   68676 system_pods.go:59] 8 kube-system pods found
	I0927 01:42:20.436980   68676 system_pods.go:61] "coredns-7c65d6cfc9-7q54t" [f320e945-a1d6-4109-a0cc-5bd4e3c1bfba] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0927 01:42:20.436989   68676 system_pods.go:61] "etcd-no-preload-521072" [6c63ce89-47bf-4d67-b5db-273a046c4b51] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0927 01:42:20.436997   68676 system_pods.go:61] "kube-apiserver-no-preload-521072" [e4804d4b-0532-46c7-8579-a829a6c5254c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0927 01:42:20.437005   68676 system_pods.go:61] "kube-controller-manager-no-preload-521072" [5029e53b-ae24-41fb-aa58-14faf0440adb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0927 01:42:20.437012   68676 system_pods.go:61] "kube-proxy-wkcb8" [ea79339c-b2f0-4cb8-ab57-4f13f689f504] Running
	I0927 01:42:20.437020   68676 system_pods.go:61] "kube-scheduler-no-preload-521072" [b70fd9f0-c131-4c13-b53f-46c650a5dcf8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0927 01:42:20.437032   68676 system_pods.go:61] "metrics-server-6867b74b74-cc9pp" [a840ca52-d2b8-47a5-b379-30504658e0d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 01:42:20.437038   68676 system_pods.go:61] "storage-provisioner" [b4595dc3-c439-4615-95b7-2009476c779c] Running
	I0927 01:42:20.437049   68676 system_pods.go:74] duration metric: took 9.213874ms to wait for pod list to return data ...
	I0927 01:42:20.437057   68676 node_conditions.go:102] verifying NodePressure condition ...
	I0927 01:42:20.440323   68676 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 01:42:20.440345   68676 node_conditions.go:123] node cpu capacity is 2
	I0927 01:42:20.440356   68676 node_conditions.go:105] duration metric: took 3.294768ms to run NodePressure ...
	I0927 01:42:20.440372   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:42:20.710186   68676 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0927 01:42:20.713940   68676 kubeadm.go:739] kubelet initialised
	I0927 01:42:20.713958   68676 kubeadm.go:740] duration metric: took 3.749241ms waiting for restarted kubelet to initialise ...
	I0927 01:42:20.713965   68676 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:42:20.718807   68676 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-7q54t" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:20.722955   68676 pod_ready.go:98] node "no-preload-521072" hosting pod "coredns-7c65d6cfc9-7q54t" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:20.722976   68676 pod_ready.go:82] duration metric: took 4.147896ms for pod "coredns-7c65d6cfc9-7q54t" in "kube-system" namespace to be "Ready" ...
	E0927 01:42:20.722984   68676 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-521072" hosting pod "coredns-7c65d6cfc9-7q54t" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:20.722991   68676 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:20.727569   68676 pod_ready.go:98] node "no-preload-521072" hosting pod "etcd-no-preload-521072" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:20.727596   68676 pod_ready.go:82] duration metric: took 4.598426ms for pod "etcd-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	E0927 01:42:20.727604   68676 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-521072" hosting pod "etcd-no-preload-521072" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:20.727611   68676 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:20.731845   68676 pod_ready.go:98] node "no-preload-521072" hosting pod "kube-apiserver-no-preload-521072" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:20.731871   68676 pod_ready.go:82] duration metric: took 4.25326ms for pod "kube-apiserver-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	E0927 01:42:20.731881   68676 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-521072" hosting pod "kube-apiserver-no-preload-521072" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:20.731889   68676 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:20.830881   68676 pod_ready.go:98] node "no-preload-521072" hosting pod "kube-controller-manager-no-preload-521072" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:20.830909   68676 pod_ready.go:82] duration metric: took 99.009569ms for pod "kube-controller-manager-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	E0927 01:42:20.830918   68676 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-521072" hosting pod "kube-controller-manager-no-preload-521072" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:20.830923   68676 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-wkcb8" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:21.232434   68676 pod_ready.go:98] node "no-preload-521072" hosting pod "kube-proxy-wkcb8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:21.232463   68676 pod_ready.go:82] duration metric: took 401.530413ms for pod "kube-proxy-wkcb8" in "kube-system" namespace to be "Ready" ...
	E0927 01:42:21.232473   68676 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-521072" hosting pod "kube-proxy-wkcb8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:21.232485   68676 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:21.630791   68676 pod_ready.go:98] node "no-preload-521072" hosting pod "kube-scheduler-no-preload-521072" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:21.630818   68676 pod_ready.go:82] duration metric: took 398.325039ms for pod "kube-scheduler-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	E0927 01:42:21.630829   68676 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-521072" hosting pod "kube-scheduler-no-preload-521072" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:21.630836   68676 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:22.032173   68676 pod_ready.go:98] node "no-preload-521072" hosting pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:22.032200   68676 pod_ready.go:82] duration metric: took 401.353533ms for pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace to be "Ready" ...
	E0927 01:42:22.032208   68676 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-521072" hosting pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:22.032215   68676 pod_ready.go:39] duration metric: took 1.318241972s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:42:22.032233   68676 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0927 01:42:22.046872   68676 ops.go:34] apiserver oom_adj: -16
	I0927 01:42:22.046898   68676 kubeadm.go:597] duration metric: took 11.181875532s to restartPrimaryControlPlane
	I0927 01:42:22.046908   68676 kubeadm.go:394] duration metric: took 11.235909243s to StartCluster
	I0927 01:42:22.046923   68676 settings.go:142] acquiring lock: {Name:mk5dca3ab86dd3a71947d9d84c3d32131258c6f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:42:22.046984   68676 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 01:42:22.048611   68676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/kubeconfig: {Name:mke01ed683bdb96463571316956510763878395f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:42:22.048864   68676 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 01:42:22.048932   68676 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0927 01:42:22.049029   68676 addons.go:69] Setting storage-provisioner=true in profile "no-preload-521072"
	I0927 01:42:22.049050   68676 addons.go:234] Setting addon storage-provisioner=true in "no-preload-521072"
	W0927 01:42:22.049060   68676 addons.go:243] addon storage-provisioner should already be in state true
	I0927 01:42:22.049066   68676 addons.go:69] Setting default-storageclass=true in profile "no-preload-521072"
	I0927 01:42:22.049088   68676 host.go:66] Checking if "no-preload-521072" exists ...
	I0927 01:42:22.049092   68676 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-521072"
	I0927 01:42:22.049096   68676 addons.go:69] Setting metrics-server=true in profile "no-preload-521072"
	I0927 01:42:22.049117   68676 addons.go:234] Setting addon metrics-server=true in "no-preload-521072"
	I0927 01:42:22.049123   68676 config.go:182] Loaded profile config "no-preload-521072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	W0927 01:42:22.049134   68676 addons.go:243] addon metrics-server should already be in state true
	I0927 01:42:22.049167   68676 host.go:66] Checking if "no-preload-521072" exists ...
	I0927 01:42:22.049423   68676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:42:22.049455   68676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:42:22.049478   68676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:42:22.049507   68676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:42:22.049535   68676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:42:22.049555   68676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:42:22.050564   68676 out.go:177] * Verifying Kubernetes components...
	I0927 01:42:22.051717   68676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:42:22.088020   68676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34035
	I0927 01:42:22.088454   68676 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:42:22.088964   68676 main.go:141] libmachine: Using API Version  1
	I0927 01:42:22.088985   68676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:42:22.089333   68676 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:42:22.089793   68676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:42:22.089825   68676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:42:22.091735   68676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40053
	I0927 01:42:22.091853   68676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45581
	I0927 01:42:22.092236   68676 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:42:22.092295   68676 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:42:22.092659   68676 main.go:141] libmachine: Using API Version  1
	I0927 01:42:22.092677   68676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:42:22.092817   68676 main.go:141] libmachine: Using API Version  1
	I0927 01:42:22.092840   68676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:42:22.093170   68676 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:42:22.093344   68676 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:42:22.093387   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetState
	I0927 01:42:22.093922   68676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:42:22.093949   68676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:42:22.097310   68676 addons.go:234] Setting addon default-storageclass=true in "no-preload-521072"
	W0927 01:42:22.097333   68676 addons.go:243] addon default-storageclass should already be in state true
	I0927 01:42:22.097368   68676 host.go:66] Checking if "no-preload-521072" exists ...
	I0927 01:42:22.097705   68676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:42:22.097747   68676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:42:22.110628   68676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34585
	I0927 01:42:22.111053   68676 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:42:22.111604   68676 main.go:141] libmachine: Using API Version  1
	I0927 01:42:22.111629   68676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:42:22.112113   68676 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:42:22.112329   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetState
	I0927 01:42:22.113354   68676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43947
	I0927 01:42:22.114009   68676 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:42:22.114749   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	I0927 01:42:22.115666   68676 main.go:141] libmachine: Using API Version  1
	I0927 01:42:22.115690   68676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:42:22.116105   68676 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:42:22.116374   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetState
	I0927 01:42:22.116862   68676 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0927 01:42:22.118124   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	I0927 01:42:22.118135   68676 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0927 01:42:22.118162   68676 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0927 01:42:22.118180   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:42:22.119866   68676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38775
	I0927 01:42:22.120317   68676 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:42:22.120908   68676 main.go:141] libmachine: Using API Version  1
	I0927 01:42:22.120931   68676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:42:22.121113   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:42:22.121319   68676 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:42:22.121556   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:42:22.121576   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:42:22.122025   68676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:42:22.122051   68676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:42:22.122280   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:42:22.122487   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:42:22.122652   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:42:22.122781   68676 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/no-preload-521072/id_rsa Username:docker}
	I0927 01:42:22.126076   68676 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:42:17.459443   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:17.959426   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:18.460250   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:18.959989   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:19.459981   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:19.959969   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:20.459758   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:20.959440   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:21.460115   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:21.959238   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:18.521751   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:21.020226   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:23.021393   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:22.127430   68676 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 01:42:22.127446   68676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0927 01:42:22.127460   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:42:22.130498   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:42:22.131040   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:42:22.131061   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:42:22.131357   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:42:22.131544   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:42:22.131670   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:42:22.131997   68676 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/no-preload-521072/id_rsa Username:docker}
	I0927 01:42:22.138657   68676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44875
	I0927 01:42:22.138987   68676 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:42:22.139420   68676 main.go:141] libmachine: Using API Version  1
	I0927 01:42:22.139438   68676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:42:22.139824   68676 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:42:22.139998   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetState
	I0927 01:42:22.141454   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	I0927 01:42:22.141664   68676 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0927 01:42:22.141673   68676 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0927 01:42:22.141683   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:42:22.144221   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:42:22.144651   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:42:22.144670   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:42:22.144765   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:42:22.144931   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:42:22.145071   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:42:22.145208   68676 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/no-preload-521072/id_rsa Username:docker}
	I0927 01:42:22.244289   68676 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 01:42:22.261345   68676 node_ready.go:35] waiting up to 6m0s for node "no-preload-521072" to be "Ready" ...
	I0927 01:42:22.365923   68676 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0927 01:42:22.365953   68676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0927 01:42:22.387392   68676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 01:42:22.389353   68676 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0927 01:42:22.389379   68676 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0927 01:42:22.406994   68676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0927 01:42:22.491559   68676 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 01:42:22.491581   68676 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0927 01:42:22.586476   68676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 01:42:23.660676   68676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.273241029s)
	I0927 01:42:23.660733   68676 main.go:141] libmachine: Making call to close driver server
	I0927 01:42:23.660750   68676 main.go:141] libmachine: (no-preload-521072) Calling .Close
	I0927 01:42:23.660732   68676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.253706672s)
	I0927 01:42:23.660831   68676 main.go:141] libmachine: Making call to close driver server
	I0927 01:42:23.660841   68676 main.go:141] libmachine: (no-preload-521072) Calling .Close
	I0927 01:42:23.660851   68676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.074315804s)
	I0927 01:42:23.661081   68676 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:42:23.661098   68676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:42:23.661109   68676 main.go:141] libmachine: Making call to close driver server
	I0927 01:42:23.661108   68676 main.go:141] libmachine: Making call to close driver server
	I0927 01:42:23.661118   68676 main.go:141] libmachine: (no-preload-521072) Calling .Close
	I0927 01:42:23.661153   68676 main.go:141] libmachine: (no-preload-521072) DBG | Closing plugin on server side
	I0927 01:42:23.661205   68676 main.go:141] libmachine: (no-preload-521072) DBG | Closing plugin on server side
	I0927 01:42:23.661161   68676 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:42:23.661223   68676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:42:23.661230   68676 main.go:141] libmachine: Making call to close driver server
	I0927 01:42:23.661238   68676 main.go:141] libmachine: (no-preload-521072) Calling .Close
	I0927 01:42:23.661125   68676 main.go:141] libmachine: (no-preload-521072) Calling .Close
	I0927 01:42:23.661607   68676 main.go:141] libmachine: (no-preload-521072) DBG | Closing plugin on server side
	I0927 01:42:23.661608   68676 main.go:141] libmachine: (no-preload-521072) DBG | Closing plugin on server side
	I0927 01:42:23.661621   68676 main.go:141] libmachine: (no-preload-521072) DBG | Closing plugin on server side
	I0927 01:42:23.661631   68676 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:42:23.661632   68676 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:42:23.661637   68676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:42:23.661641   68676 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:42:23.661645   68676 main.go:141] libmachine: Making call to close driver server
	I0927 01:42:23.661649   68676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:42:23.661650   68676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:42:23.661653   68676 main.go:141] libmachine: (no-preload-521072) Calling .Close
	I0927 01:42:23.661852   68676 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:42:23.661866   68676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:42:23.661874   68676 addons.go:475] Verifying addon metrics-server=true in "no-preload-521072"
	I0927 01:42:23.661917   68676 main.go:141] libmachine: (no-preload-521072) DBG | Closing plugin on server side
	I0927 01:42:23.668484   68676 main.go:141] libmachine: Making call to close driver server
	I0927 01:42:23.668499   68676 main.go:141] libmachine: (no-preload-521072) Calling .Close
	I0927 01:42:23.668711   68676 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:42:23.668726   68676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:42:23.668743   68676 main.go:141] libmachine: (no-preload-521072) DBG | Closing plugin on server side
	I0927 01:42:23.670758   68676 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0927 01:42:23.672072   68676 addons.go:510] duration metric: took 1.62313879s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I0927 01:42:24.265426   68676 node_ready.go:53] node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:21.042193   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:23.043831   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:25.546335   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:22.460161   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:22.959177   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:23.459481   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:23.959221   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:23.959322   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:24.004970   69333 cri.go:89] found id: ""
	I0927 01:42:24.004999   69333 logs.go:276] 0 containers: []
	W0927 01:42:24.005010   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:24.005017   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:24.005076   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:24.041880   69333 cri.go:89] found id: ""
	I0927 01:42:24.041908   69333 logs.go:276] 0 containers: []
	W0927 01:42:24.041919   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:24.041926   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:24.041991   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:24.082295   69333 cri.go:89] found id: ""
	I0927 01:42:24.082318   69333 logs.go:276] 0 containers: []
	W0927 01:42:24.082325   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:24.082331   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:24.082385   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:24.119663   69333 cri.go:89] found id: ""
	I0927 01:42:24.119692   69333 logs.go:276] 0 containers: []
	W0927 01:42:24.119707   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:24.119714   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:24.119771   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:24.163893   69333 cri.go:89] found id: ""
	I0927 01:42:24.163920   69333 logs.go:276] 0 containers: []
	W0927 01:42:24.163932   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:24.163940   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:24.163999   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:24.200277   69333 cri.go:89] found id: ""
	I0927 01:42:24.200299   69333 logs.go:276] 0 containers: []
	W0927 01:42:24.200307   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:24.200312   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:24.200365   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:24.235039   69333 cri.go:89] found id: ""
	I0927 01:42:24.235059   69333 logs.go:276] 0 containers: []
	W0927 01:42:24.235066   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:24.235072   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:24.235132   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:24.275160   69333 cri.go:89] found id: ""
	I0927 01:42:24.275181   69333 logs.go:276] 0 containers: []
	W0927 01:42:24.275188   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:24.275196   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:24.275206   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:24.327432   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:24.327465   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:24.341113   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:24.341139   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:24.473741   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:24.473764   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:24.473779   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:24.545888   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:24.545923   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:27.086673   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:27.100552   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:27.100623   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:27.136182   69333 cri.go:89] found id: ""
	I0927 01:42:27.136207   69333 logs.go:276] 0 containers: []
	W0927 01:42:27.136215   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:27.136221   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:27.136289   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:27.173258   69333 cri.go:89] found id: ""
	I0927 01:42:27.173285   69333 logs.go:276] 0 containers: []
	W0927 01:42:27.173296   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:27.173303   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:27.173373   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:27.210481   69333 cri.go:89] found id: ""
	I0927 01:42:27.210514   69333 logs.go:276] 0 containers: []
	W0927 01:42:27.210526   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:27.210533   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:27.210586   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:27.245168   69333 cri.go:89] found id: ""
	I0927 01:42:27.245192   69333 logs.go:276] 0 containers: []
	W0927 01:42:27.245200   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:27.245206   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:27.245252   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:27.280494   69333 cri.go:89] found id: ""
	I0927 01:42:27.280522   69333 logs.go:276] 0 containers: []
	W0927 01:42:27.280531   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:27.280538   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:27.280596   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:27.314281   69333 cri.go:89] found id: ""
	I0927 01:42:27.314307   69333 logs.go:276] 0 containers: []
	W0927 01:42:27.314316   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:27.314322   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:27.314392   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:25.521413   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:28.019989   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:26.764721   68676 node_ready.go:53] node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:27.765574   68676 node_ready.go:49] node "no-preload-521072" has status "Ready":"True"
	I0927 01:42:27.765597   68676 node_ready.go:38] duration metric: took 5.504217374s for node "no-preload-521072" to be "Ready" ...
	I0927 01:42:27.765609   68676 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:42:27.772263   68676 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7q54t" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:27.777521   68676 pod_ready.go:93] pod "coredns-7c65d6cfc9-7q54t" in "kube-system" namespace has status "Ready":"True"
	I0927 01:42:27.777544   68676 pod_ready.go:82] duration metric: took 5.252259ms for pod "coredns-7c65d6cfc9-7q54t" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:27.777552   68676 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:27.781511   68676 pod_ready.go:93] pod "etcd-no-preload-521072" in "kube-system" namespace has status "Ready":"True"
	I0927 01:42:27.781528   68676 pod_ready.go:82] duration metric: took 3.970559ms for pod "etcd-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:27.781535   68676 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:27.785556   68676 pod_ready.go:93] pod "kube-apiserver-no-preload-521072" in "kube-system" namespace has status "Ready":"True"
	I0927 01:42:27.785572   68676 pod_ready.go:82] duration metric: took 4.032023ms for pod "kube-apiserver-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:27.785579   68676 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:29.792899   68676 pod_ready.go:103] pod "kube-controller-manager-no-preload-521072" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:28.041166   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:30.041766   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:27.350838   69333 cri.go:89] found id: ""
	I0927 01:42:27.350861   69333 logs.go:276] 0 containers: []
	W0927 01:42:27.350869   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:27.350874   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:27.350921   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:27.390146   69333 cri.go:89] found id: ""
	I0927 01:42:27.390175   69333 logs.go:276] 0 containers: []
	W0927 01:42:27.390186   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:27.390196   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:27.390206   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:27.446727   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:27.446756   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:27.461337   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:27.461365   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:27.533818   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:27.533839   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:27.533874   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:27.614325   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:27.614357   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:30.161303   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:30.179521   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:30.179590   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:30.221738   69333 cri.go:89] found id: ""
	I0927 01:42:30.221764   69333 logs.go:276] 0 containers: []
	W0927 01:42:30.221772   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:30.221778   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:30.221841   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:30.258316   69333 cri.go:89] found id: ""
	I0927 01:42:30.258349   69333 logs.go:276] 0 containers: []
	W0927 01:42:30.258359   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:30.258369   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:30.258427   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:30.297079   69333 cri.go:89] found id: ""
	I0927 01:42:30.297102   69333 logs.go:276] 0 containers: []
	W0927 01:42:30.297109   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:30.297114   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:30.297159   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:30.337969   69333 cri.go:89] found id: ""
	I0927 01:42:30.337995   69333 logs.go:276] 0 containers: []
	W0927 01:42:30.338007   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:30.338014   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:30.338075   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:30.375946   69333 cri.go:89] found id: ""
	I0927 01:42:30.375975   69333 logs.go:276] 0 containers: []
	W0927 01:42:30.375986   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:30.375993   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:30.376054   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:30.411673   69333 cri.go:89] found id: ""
	I0927 01:42:30.411700   69333 logs.go:276] 0 containers: []
	W0927 01:42:30.411710   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:30.411718   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:30.411765   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:30.447784   69333 cri.go:89] found id: ""
	I0927 01:42:30.447812   69333 logs.go:276] 0 containers: []
	W0927 01:42:30.447822   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:30.447830   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:30.447890   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:30.483164   69333 cri.go:89] found id: ""
	I0927 01:42:30.483191   69333 logs.go:276] 0 containers: []
	W0927 01:42:30.483202   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:30.483213   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:30.483229   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:30.533490   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:30.533522   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:30.547688   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:30.547722   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:30.626696   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:30.626720   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:30.626733   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:30.708767   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:30.708809   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:30.020786   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:32.021243   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:32.292370   68676 pod_ready.go:103] pod "kube-controller-manager-no-preload-521072" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:32.791420   68676 pod_ready.go:93] pod "kube-controller-manager-no-preload-521072" in "kube-system" namespace has status "Ready":"True"
	I0927 01:42:32.791444   68676 pod_ready.go:82] duration metric: took 5.00585892s for pod "kube-controller-manager-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:32.791454   68676 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wkcb8" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:32.796509   68676 pod_ready.go:93] pod "kube-proxy-wkcb8" in "kube-system" namespace has status "Ready":"True"
	I0927 01:42:32.796528   68676 pod_ready.go:82] duration metric: took 5.067798ms for pod "kube-proxy-wkcb8" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:32.796536   68676 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:32.801041   68676 pod_ready.go:93] pod "kube-scheduler-no-preload-521072" in "kube-system" namespace has status "Ready":"True"
	I0927 01:42:32.801066   68676 pod_ready.go:82] duration metric: took 4.523416ms for pod "kube-scheduler-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:32.801087   68676 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:34.807359   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:32.042216   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:34.541390   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:33.250034   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:33.263733   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:33.263805   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:33.298038   69333 cri.go:89] found id: ""
	I0927 01:42:33.298063   69333 logs.go:276] 0 containers: []
	W0927 01:42:33.298071   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:33.298077   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:33.298139   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:33.338027   69333 cri.go:89] found id: ""
	I0927 01:42:33.338050   69333 logs.go:276] 0 containers: []
	W0927 01:42:33.338058   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:33.338064   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:33.338118   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:33.376470   69333 cri.go:89] found id: ""
	I0927 01:42:33.376496   69333 logs.go:276] 0 containers: []
	W0927 01:42:33.376504   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:33.376509   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:33.376568   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:33.419831   69333 cri.go:89] found id: ""
	I0927 01:42:33.419859   69333 logs.go:276] 0 containers: []
	W0927 01:42:33.419868   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:33.419874   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:33.419929   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:33.461029   69333 cri.go:89] found id: ""
	I0927 01:42:33.461057   69333 logs.go:276] 0 containers: []
	W0927 01:42:33.461076   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:33.461085   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:33.461158   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:33.499968   69333 cri.go:89] found id: ""
	I0927 01:42:33.499996   69333 logs.go:276] 0 containers: []
	W0927 01:42:33.500007   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:33.500015   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:33.500073   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:33.552601   69333 cri.go:89] found id: ""
	I0927 01:42:33.552625   69333 logs.go:276] 0 containers: []
	W0927 01:42:33.552633   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:33.552640   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:33.552702   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:33.589491   69333 cri.go:89] found id: ""
	I0927 01:42:33.589520   69333 logs.go:276] 0 containers: []
	W0927 01:42:33.589529   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:33.589540   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:33.589554   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:33.643437   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:33.643470   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:33.657819   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:33.657846   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:33.728369   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:33.728393   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:33.728407   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:33.803661   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:33.803691   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:36.343598   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:36.357879   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:36.357937   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:36.398936   69333 cri.go:89] found id: ""
	I0927 01:42:36.398958   69333 logs.go:276] 0 containers: []
	W0927 01:42:36.398966   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:36.398971   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:36.399016   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:36.438897   69333 cri.go:89] found id: ""
	I0927 01:42:36.438921   69333 logs.go:276] 0 containers: []
	W0927 01:42:36.438928   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:36.438935   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:36.438979   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:36.476779   69333 cri.go:89] found id: ""
	I0927 01:42:36.476807   69333 logs.go:276] 0 containers: []
	W0927 01:42:36.476817   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:36.476824   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:36.476882   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:36.514216   69333 cri.go:89] found id: ""
	I0927 01:42:36.514238   69333 logs.go:276] 0 containers: []
	W0927 01:42:36.514245   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:36.514251   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:36.514306   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:36.551800   69333 cri.go:89] found id: ""
	I0927 01:42:36.551827   69333 logs.go:276] 0 containers: []
	W0927 01:42:36.551835   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:36.551841   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:36.551900   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:36.592060   69333 cri.go:89] found id: ""
	I0927 01:42:36.592086   69333 logs.go:276] 0 containers: []
	W0927 01:42:36.592096   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:36.592101   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:36.592172   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:36.633485   69333 cri.go:89] found id: ""
	I0927 01:42:36.633507   69333 logs.go:276] 0 containers: []
	W0927 01:42:36.633514   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:36.633519   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:36.633571   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:36.667288   69333 cri.go:89] found id: ""
	I0927 01:42:36.667355   69333 logs.go:276] 0 containers: []
	W0927 01:42:36.667366   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:36.667377   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:36.667391   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:36.722230   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:36.722263   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:36.735927   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:36.735952   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:36.808852   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:36.808872   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:36.808887   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:36.889259   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:36.889299   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:34.520143   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:36.521254   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:36.808388   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:39.308743   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:36.542085   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:39.042119   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:39.438818   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:39.459082   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:39.459150   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:39.499966   69333 cri.go:89] found id: ""
	I0927 01:42:39.499991   69333 logs.go:276] 0 containers: []
	W0927 01:42:39.499999   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:39.500004   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:39.500050   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:39.540828   69333 cri.go:89] found id: ""
	I0927 01:42:39.540850   69333 logs.go:276] 0 containers: []
	W0927 01:42:39.540857   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:39.540864   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:39.540972   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:39.575841   69333 cri.go:89] found id: ""
	I0927 01:42:39.575868   69333 logs.go:276] 0 containers: []
	W0927 01:42:39.575879   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:39.575886   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:39.575958   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:39.611105   69333 cri.go:89] found id: ""
	I0927 01:42:39.611184   69333 logs.go:276] 0 containers: []
	W0927 01:42:39.611202   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:39.611212   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:39.611268   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:39.644772   69333 cri.go:89] found id: ""
	I0927 01:42:39.644800   69333 logs.go:276] 0 containers: []
	W0927 01:42:39.644808   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:39.644813   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:39.644868   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:39.679875   69333 cri.go:89] found id: ""
	I0927 01:42:39.679901   69333 logs.go:276] 0 containers: []
	W0927 01:42:39.679912   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:39.679919   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:39.679987   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:39.716410   69333 cri.go:89] found id: ""
	I0927 01:42:39.716440   69333 logs.go:276] 0 containers: []
	W0927 01:42:39.716450   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:39.716457   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:39.716525   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:39.750391   69333 cri.go:89] found id: ""
	I0927 01:42:39.750418   69333 logs.go:276] 0 containers: []
	W0927 01:42:39.750428   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:39.750439   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:39.750455   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:39.822365   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:39.822401   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:39.822416   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:39.905982   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:39.906017   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:39.952310   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:39.952339   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:40.000523   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:40.000554   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:39.021945   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:41.519787   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:41.807532   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:44.307548   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:41.042260   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:43.042762   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:45.542112   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:42.514379   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:42.528312   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:42.528377   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:42.562427   69333 cri.go:89] found id: ""
	I0927 01:42:42.562455   69333 logs.go:276] 0 containers: []
	W0927 01:42:42.562463   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:42.562469   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:42.562526   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:42.599969   69333 cri.go:89] found id: ""
	I0927 01:42:42.599993   69333 logs.go:276] 0 containers: []
	W0927 01:42:42.600002   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:42.600007   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:42.600053   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:42.636338   69333 cri.go:89] found id: ""
	I0927 01:42:42.636364   69333 logs.go:276] 0 containers: []
	W0927 01:42:42.636371   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:42.636376   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:42.636431   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:42.670781   69333 cri.go:89] found id: ""
	I0927 01:42:42.670809   69333 logs.go:276] 0 containers: []
	W0927 01:42:42.670818   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:42.670823   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:42.670880   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:42.707334   69333 cri.go:89] found id: ""
	I0927 01:42:42.707364   69333 logs.go:276] 0 containers: []
	W0927 01:42:42.707375   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:42.707431   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:42.707503   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:42.743063   69333 cri.go:89] found id: ""
	I0927 01:42:42.743092   69333 logs.go:276] 0 containers: []
	W0927 01:42:42.743103   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:42.743139   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:42.743192   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:42.778593   69333 cri.go:89] found id: ""
	I0927 01:42:42.778617   69333 logs.go:276] 0 containers: []
	W0927 01:42:42.778628   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:42.778634   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:42.778700   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:42.814261   69333 cri.go:89] found id: ""
	I0927 01:42:42.814286   69333 logs.go:276] 0 containers: []
	W0927 01:42:42.814293   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:42.814300   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:42.814310   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:42.863982   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:42.864011   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:42.877151   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:42.877175   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:42.959233   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:42.959251   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:42.959262   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:43.038773   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:43.038805   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:45.581272   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:45.596103   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:45.596167   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:45.639507   69333 cri.go:89] found id: ""
	I0927 01:42:45.639531   69333 logs.go:276] 0 containers: []
	W0927 01:42:45.639539   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:45.639545   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:45.639611   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:45.678455   69333 cri.go:89] found id: ""
	I0927 01:42:45.678482   69333 logs.go:276] 0 containers: []
	W0927 01:42:45.678489   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:45.678495   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:45.678539   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:45.722094   69333 cri.go:89] found id: ""
	I0927 01:42:45.722123   69333 logs.go:276] 0 containers: []
	W0927 01:42:45.722135   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:45.722142   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:45.722211   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:45.758091   69333 cri.go:89] found id: ""
	I0927 01:42:45.758118   69333 logs.go:276] 0 containers: []
	W0927 01:42:45.758127   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:45.758133   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:45.758183   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:45.792976   69333 cri.go:89] found id: ""
	I0927 01:42:45.793010   69333 logs.go:276] 0 containers: []
	W0927 01:42:45.793021   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:45.793028   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:45.793089   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:45.830235   69333 cri.go:89] found id: ""
	I0927 01:42:45.830262   69333 logs.go:276] 0 containers: []
	W0927 01:42:45.830273   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:45.830280   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:45.830324   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:45.865896   69333 cri.go:89] found id: ""
	I0927 01:42:45.865928   69333 logs.go:276] 0 containers: []
	W0927 01:42:45.865938   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:45.865946   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:45.866000   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:45.900058   69333 cri.go:89] found id: ""
	I0927 01:42:45.900088   69333 logs.go:276] 0 containers: []
	W0927 01:42:45.900099   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:45.900108   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:45.900119   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:45.972986   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:45.973015   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:45.973030   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:46.048703   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:46.048732   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:46.087483   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:46.087515   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:46.136833   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:46.136866   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:43.520998   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:45.522532   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:48.020912   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:46.307637   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:48.808963   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:48.041757   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:50.042259   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:48.650738   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:48.665847   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:48.665930   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:48.704304   69333 cri.go:89] found id: ""
	I0927 01:42:48.704328   69333 logs.go:276] 0 containers: []
	W0927 01:42:48.704337   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:48.704342   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:48.704402   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:48.742469   69333 cri.go:89] found id: ""
	I0927 01:42:48.742499   69333 logs.go:276] 0 containers: []
	W0927 01:42:48.742510   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:48.742517   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:48.742579   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:48.782154   69333 cri.go:89] found id: ""
	I0927 01:42:48.782183   69333 logs.go:276] 0 containers: []
	W0927 01:42:48.782194   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:48.782201   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:48.782261   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:48.821686   69333 cri.go:89] found id: ""
	I0927 01:42:48.821709   69333 logs.go:276] 0 containers: []
	W0927 01:42:48.821717   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:48.821723   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:48.821781   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:48.867072   69333 cri.go:89] found id: ""
	I0927 01:42:48.867099   69333 logs.go:276] 0 containers: []
	W0927 01:42:48.867109   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:48.867123   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:48.867191   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:48.908215   69333 cri.go:89] found id: ""
	I0927 01:42:48.908241   69333 logs.go:276] 0 containers: []
	W0927 01:42:48.908249   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:48.908255   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:48.908312   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:48.945260   69333 cri.go:89] found id: ""
	I0927 01:42:48.945291   69333 logs.go:276] 0 containers: []
	W0927 01:42:48.945303   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:48.945310   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:48.945375   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:48.983285   69333 cri.go:89] found id: ""
	I0927 01:42:48.983325   69333 logs.go:276] 0 containers: []
	W0927 01:42:48.983333   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:48.983343   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:48.983354   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:49.039437   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:49.039472   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:49.053546   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:49.053571   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:49.129264   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:49.129286   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:49.129299   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:49.216967   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:49.216999   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:51.758143   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:51.771417   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:51.771485   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:51.806120   69333 cri.go:89] found id: ""
	I0927 01:42:51.806144   69333 logs.go:276] 0 containers: []
	W0927 01:42:51.806154   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:51.806161   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:51.806219   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:51.840301   69333 cri.go:89] found id: ""
	I0927 01:42:51.840330   69333 logs.go:276] 0 containers: []
	W0927 01:42:51.840340   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:51.840348   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:51.840410   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:51.874908   69333 cri.go:89] found id: ""
	I0927 01:42:51.874934   69333 logs.go:276] 0 containers: []
	W0927 01:42:51.874944   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:51.874952   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:51.875018   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:51.910960   69333 cri.go:89] found id: ""
	I0927 01:42:51.910988   69333 logs.go:276] 0 containers: []
	W0927 01:42:51.910999   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:51.911006   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:51.911064   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:51.945206   69333 cri.go:89] found id: ""
	I0927 01:42:51.945228   69333 logs.go:276] 0 containers: []
	W0927 01:42:51.945236   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:51.945241   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:51.945289   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:51.979262   69333 cri.go:89] found id: ""
	I0927 01:42:51.979296   69333 logs.go:276] 0 containers: []
	W0927 01:42:51.979322   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:51.979328   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:51.979384   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:52.013407   69333 cri.go:89] found id: ""
	I0927 01:42:52.013438   69333 logs.go:276] 0 containers: []
	W0927 01:42:52.013449   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:52.013456   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:52.013510   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:52.048928   69333 cri.go:89] found id: ""
	I0927 01:42:52.048951   69333 logs.go:276] 0 containers: []
	W0927 01:42:52.048961   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:52.048970   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:52.048984   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:52.101043   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:52.101083   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:52.115903   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:52.115938   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:52.197147   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:52.197168   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:52.197184   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:52.276352   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:52.276393   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:50.021730   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:52.520362   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:51.306847   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:53.307714   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:52.042729   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:54.544118   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:54.819649   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:54.832262   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:54.832344   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:54.867495   69333 cri.go:89] found id: ""
	I0927 01:42:54.867523   69333 logs.go:276] 0 containers: []
	W0927 01:42:54.867533   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:54.867539   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:54.867585   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:54.899705   69333 cri.go:89] found id: ""
	I0927 01:42:54.899732   69333 logs.go:276] 0 containers: []
	W0927 01:42:54.899742   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:54.899749   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:54.899817   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:54.939216   69333 cri.go:89] found id: ""
	I0927 01:42:54.939235   69333 logs.go:276] 0 containers: []
	W0927 01:42:54.939244   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:54.939249   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:54.939293   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:54.976603   69333 cri.go:89] found id: ""
	I0927 01:42:54.976632   69333 logs.go:276] 0 containers: []
	W0927 01:42:54.976643   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:54.976651   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:54.976718   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:55.011617   69333 cri.go:89] found id: ""
	I0927 01:42:55.011649   69333 logs.go:276] 0 containers: []
	W0927 01:42:55.011660   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:55.011667   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:55.011729   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:55.048836   69333 cri.go:89] found id: ""
	I0927 01:42:55.048861   69333 logs.go:276] 0 containers: []
	W0927 01:42:55.048869   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:55.048885   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:55.048955   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:55.085105   69333 cri.go:89] found id: ""
	I0927 01:42:55.085133   69333 logs.go:276] 0 containers: []
	W0927 01:42:55.085144   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:55.085151   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:55.085205   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:55.122536   69333 cri.go:89] found id: ""
	I0927 01:42:55.122564   69333 logs.go:276] 0 containers: []
	W0927 01:42:55.122575   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:55.122585   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:55.122600   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:55.197191   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:55.197216   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:55.197230   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:55.275914   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:55.275950   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:55.315043   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:55.315071   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:55.365808   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:55.365846   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:55.025083   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:57.520041   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:55.807377   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:57.807419   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:59.808202   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:57.042511   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:59.541628   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:57.880934   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:57.894276   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:57.894337   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:57.933299   69333 cri.go:89] found id: ""
	I0927 01:42:57.933326   69333 logs.go:276] 0 containers: []
	W0927 01:42:57.933336   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:57.933343   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:57.933403   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:57.969070   69333 cri.go:89] found id: ""
	I0927 01:42:57.969094   69333 logs.go:276] 0 containers: []
	W0927 01:42:57.969102   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:57.969107   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:57.969151   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:58.009432   69333 cri.go:89] found id: ""
	I0927 01:42:58.009453   69333 logs.go:276] 0 containers: []
	W0927 01:42:58.009462   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:58.009468   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:58.009524   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:58.046507   69333 cri.go:89] found id: ""
	I0927 01:42:58.046526   69333 logs.go:276] 0 containers: []
	W0927 01:42:58.046533   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:58.046539   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:58.046603   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:58.079910   69333 cri.go:89] found id: ""
	I0927 01:42:58.079936   69333 logs.go:276] 0 containers: []
	W0927 01:42:58.079947   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:58.079954   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:58.080015   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:58.115971   69333 cri.go:89] found id: ""
	I0927 01:42:58.115994   69333 logs.go:276] 0 containers: []
	W0927 01:42:58.116001   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:58.116007   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:58.116065   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:58.150512   69333 cri.go:89] found id: ""
	I0927 01:42:58.150536   69333 logs.go:276] 0 containers: []
	W0927 01:42:58.150544   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:58.150549   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:58.150608   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:58.183458   69333 cri.go:89] found id: ""
	I0927 01:42:58.183487   69333 logs.go:276] 0 containers: []
	W0927 01:42:58.183498   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:58.183506   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:58.183520   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:58.234404   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:58.234434   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:58.248387   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:58.248411   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:58.320751   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:58.320772   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:58.320783   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:58.401163   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:58.401212   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:00.943677   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:00.956739   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:00.956815   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:00.991020   69333 cri.go:89] found id: ""
	I0927 01:43:00.991042   69333 logs.go:276] 0 containers: []
	W0927 01:43:00.991051   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:00.991056   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:00.991113   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:01.031686   69333 cri.go:89] found id: ""
	I0927 01:43:01.031711   69333 logs.go:276] 0 containers: []
	W0927 01:43:01.031720   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:01.031726   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:01.031786   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:01.068783   69333 cri.go:89] found id: ""
	I0927 01:43:01.068813   69333 logs.go:276] 0 containers: []
	W0927 01:43:01.068824   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:01.068831   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:01.068890   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:01.108434   69333 cri.go:89] found id: ""
	I0927 01:43:01.108456   69333 logs.go:276] 0 containers: []
	W0927 01:43:01.108464   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:01.108469   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:01.108513   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:01.147574   69333 cri.go:89] found id: ""
	I0927 01:43:01.147596   69333 logs.go:276] 0 containers: []
	W0927 01:43:01.147604   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:01.147610   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:01.147660   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:01.188251   69333 cri.go:89] found id: ""
	I0927 01:43:01.188279   69333 logs.go:276] 0 containers: []
	W0927 01:43:01.188290   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:01.188297   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:01.188359   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:01.224901   69333 cri.go:89] found id: ""
	I0927 01:43:01.224944   69333 logs.go:276] 0 containers: []
	W0927 01:43:01.224964   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:01.224974   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:01.225052   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:01.262701   69333 cri.go:89] found id: ""
	I0927 01:43:01.262728   69333 logs.go:276] 0 containers: []
	W0927 01:43:01.262738   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:01.262749   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:01.262762   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:01.313872   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:01.313900   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:01.327809   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:01.327835   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:01.400864   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:01.400895   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:01.400909   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:01.478012   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:01.478045   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:59.520973   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:01.522457   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:02.308215   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:04.309111   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:01.543151   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:04.043201   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:04.018634   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:04.032732   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:04.032803   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:04.075258   69333 cri.go:89] found id: ""
	I0927 01:43:04.075285   69333 logs.go:276] 0 containers: []
	W0927 01:43:04.075293   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:04.075299   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:04.075381   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:04.108738   69333 cri.go:89] found id: ""
	I0927 01:43:04.108764   69333 logs.go:276] 0 containers: []
	W0927 01:43:04.108773   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:04.108779   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:04.108835   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:04.142115   69333 cri.go:89] found id: ""
	I0927 01:43:04.142145   69333 logs.go:276] 0 containers: []
	W0927 01:43:04.142155   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:04.142174   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:04.142249   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:04.184606   69333 cri.go:89] found id: ""
	I0927 01:43:04.184626   69333 logs.go:276] 0 containers: []
	W0927 01:43:04.184634   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:04.184639   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:04.184684   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:04.218391   69333 cri.go:89] found id: ""
	I0927 01:43:04.218420   69333 logs.go:276] 0 containers: []
	W0927 01:43:04.218428   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:04.218434   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:04.218482   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:04.253796   69333 cri.go:89] found id: ""
	I0927 01:43:04.253816   69333 logs.go:276] 0 containers: []
	W0927 01:43:04.253824   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:04.253829   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:04.253884   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:04.289147   69333 cri.go:89] found id: ""
	I0927 01:43:04.289170   69333 logs.go:276] 0 containers: []
	W0927 01:43:04.289179   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:04.289184   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:04.289245   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:04.329000   69333 cri.go:89] found id: ""
	I0927 01:43:04.329026   69333 logs.go:276] 0 containers: []
	W0927 01:43:04.329034   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:04.329042   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:04.329053   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:04.424255   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:04.424290   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:04.470746   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:04.470775   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:04.524208   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:04.524237   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:04.538338   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:04.538365   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:04.608713   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:07.109492   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:07.124253   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:07.124332   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:07.160443   69333 cri.go:89] found id: ""
	I0927 01:43:07.160470   69333 logs.go:276] 0 containers: []
	W0927 01:43:07.160481   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:07.160488   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:07.160554   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:07.195492   69333 cri.go:89] found id: ""
	I0927 01:43:07.195515   69333 logs.go:276] 0 containers: []
	W0927 01:43:07.195522   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:07.195527   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:07.195572   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:07.237678   69333 cri.go:89] found id: ""
	I0927 01:43:07.237708   69333 logs.go:276] 0 containers: []
	W0927 01:43:07.237718   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:07.237725   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:07.237792   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:07.274239   69333 cri.go:89] found id: ""
	I0927 01:43:07.274268   69333 logs.go:276] 0 containers: []
	W0927 01:43:07.274279   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:07.274286   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:07.274352   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:07.315099   69333 cri.go:89] found id: ""
	I0927 01:43:07.315124   69333 logs.go:276] 0 containers: []
	W0927 01:43:07.315131   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:07.315137   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:07.315190   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:04.020911   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:06.520371   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:06.807124   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:09.306568   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:06.543210   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:09.042166   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:07.356301   69333 cri.go:89] found id: ""
	I0927 01:43:07.356328   69333 logs.go:276] 0 containers: []
	W0927 01:43:07.356339   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:07.356347   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:07.356416   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:07.392204   69333 cri.go:89] found id: ""
	I0927 01:43:07.392232   69333 logs.go:276] 0 containers: []
	W0927 01:43:07.392242   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:07.392255   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:07.392312   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:07.428924   69333 cri.go:89] found id: ""
	I0927 01:43:07.428952   69333 logs.go:276] 0 containers: []
	W0927 01:43:07.428961   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:07.428969   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:07.428981   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:07.502507   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:07.502531   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:07.502545   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:07.584169   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:07.584201   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:07.623413   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:07.623446   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:07.675444   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:07.675480   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:10.190164   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:10.205315   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:10.205395   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:10.244030   69333 cri.go:89] found id: ""
	I0927 01:43:10.244053   69333 logs.go:276] 0 containers: []
	W0927 01:43:10.244063   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:10.244071   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:10.244134   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:10.280081   69333 cri.go:89] found id: ""
	I0927 01:43:10.280108   69333 logs.go:276] 0 containers: []
	W0927 01:43:10.280118   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:10.280125   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:10.280184   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:10.315428   69333 cri.go:89] found id: ""
	I0927 01:43:10.315454   69333 logs.go:276] 0 containers: []
	W0927 01:43:10.315464   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:10.315471   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:10.315531   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:10.352536   69333 cri.go:89] found id: ""
	I0927 01:43:10.352560   69333 logs.go:276] 0 containers: []
	W0927 01:43:10.352567   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:10.352574   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:10.352634   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:10.388846   69333 cri.go:89] found id: ""
	I0927 01:43:10.388870   69333 logs.go:276] 0 containers: []
	W0927 01:43:10.388880   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:10.388887   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:10.388951   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:10.427746   69333 cri.go:89] found id: ""
	I0927 01:43:10.427771   69333 logs.go:276] 0 containers: []
	W0927 01:43:10.427779   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:10.427784   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:10.427839   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:10.473126   69333 cri.go:89] found id: ""
	I0927 01:43:10.473155   69333 logs.go:276] 0 containers: []
	W0927 01:43:10.473166   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:10.473172   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:10.473234   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:10.511925   69333 cri.go:89] found id: ""
	I0927 01:43:10.511954   69333 logs.go:276] 0 containers: []
	W0927 01:43:10.511962   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:10.511971   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:10.511984   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:10.551428   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:10.551459   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:10.603655   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:10.603691   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:10.617232   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:10.617266   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:10.696559   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:10.696585   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:10.696599   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:09.020784   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:11.521429   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:11.307081   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:13.307876   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:11.043819   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:13.543289   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:13.273888   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:13.288271   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:13.288349   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:13.325796   69333 cri.go:89] found id: ""
	I0927 01:43:13.325823   69333 logs.go:276] 0 containers: []
	W0927 01:43:13.325831   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:13.325837   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:13.325893   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:13.360721   69333 cri.go:89] found id: ""
	I0927 01:43:13.360748   69333 logs.go:276] 0 containers: []
	W0927 01:43:13.360756   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:13.360762   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:13.360821   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:13.399722   69333 cri.go:89] found id: ""
	I0927 01:43:13.399749   69333 logs.go:276] 0 containers: []
	W0927 01:43:13.399756   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:13.399762   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:13.399826   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:13.437161   69333 cri.go:89] found id: ""
	I0927 01:43:13.437187   69333 logs.go:276] 0 containers: []
	W0927 01:43:13.437194   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:13.437200   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:13.437260   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:13.474735   69333 cri.go:89] found id: ""
	I0927 01:43:13.474758   69333 logs.go:276] 0 containers: []
	W0927 01:43:13.474766   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:13.474771   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:13.474822   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:13.528726   69333 cri.go:89] found id: ""
	I0927 01:43:13.528754   69333 logs.go:276] 0 containers: []
	W0927 01:43:13.528764   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:13.528771   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:13.528837   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:13.568617   69333 cri.go:89] found id: ""
	I0927 01:43:13.568642   69333 logs.go:276] 0 containers: []
	W0927 01:43:13.568651   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:13.568658   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:13.568726   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:13.605820   69333 cri.go:89] found id: ""
	I0927 01:43:13.605846   69333 logs.go:276] 0 containers: []
	W0927 01:43:13.605857   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:13.605868   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:13.605883   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:13.682586   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:13.682609   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:13.682624   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:13.764487   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:13.764522   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:13.809248   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:13.809280   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:13.861331   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:13.861371   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:16.376981   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:16.391787   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:16.391842   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:16.432731   69333 cri.go:89] found id: ""
	I0927 01:43:16.432758   69333 logs.go:276] 0 containers: []
	W0927 01:43:16.432767   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:16.432775   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:16.432836   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:16.466769   69333 cri.go:89] found id: ""
	I0927 01:43:16.466798   69333 logs.go:276] 0 containers: []
	W0927 01:43:16.466806   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:16.466812   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:16.466860   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:16.501899   69333 cri.go:89] found id: ""
	I0927 01:43:16.501927   69333 logs.go:276] 0 containers: []
	W0927 01:43:16.501940   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:16.501947   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:16.502000   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:16.537356   69333 cri.go:89] found id: ""
	I0927 01:43:16.537383   69333 logs.go:276] 0 containers: []
	W0927 01:43:16.537393   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:16.537401   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:16.537460   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:16.573910   69333 cri.go:89] found id: ""
	I0927 01:43:16.573937   69333 logs.go:276] 0 containers: []
	W0927 01:43:16.573946   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:16.573951   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:16.574003   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:16.617780   69333 cri.go:89] found id: ""
	I0927 01:43:16.617808   69333 logs.go:276] 0 containers: []
	W0927 01:43:16.617818   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:16.617826   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:16.617884   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:16.653262   69333 cri.go:89] found id: ""
	I0927 01:43:16.653311   69333 logs.go:276] 0 containers: []
	W0927 01:43:16.653323   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:16.653331   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:16.653394   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:16.689861   69333 cri.go:89] found id: ""
	I0927 01:43:16.689889   69333 logs.go:276] 0 containers: []
	W0927 01:43:16.689898   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:16.689909   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:16.689922   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:16.765961   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:16.765986   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:16.766001   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:16.845195   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:16.845227   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:16.889159   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:16.889188   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:16.945523   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:16.945558   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:13.522444   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:16.021202   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:15.808665   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:18.307884   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:16.043071   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:18.541709   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:19.461132   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:19.475148   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:19.475234   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:19.511487   69333 cri.go:89] found id: ""
	I0927 01:43:19.511509   69333 logs.go:276] 0 containers: []
	W0927 01:43:19.511517   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:19.511522   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:19.511580   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:19.545726   69333 cri.go:89] found id: ""
	I0927 01:43:19.545750   69333 logs.go:276] 0 containers: []
	W0927 01:43:19.545756   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:19.545763   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:19.545830   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:19.581287   69333 cri.go:89] found id: ""
	I0927 01:43:19.581310   69333 logs.go:276] 0 containers: []
	W0927 01:43:19.581318   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:19.581323   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:19.581376   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:19.614179   69333 cri.go:89] found id: ""
	I0927 01:43:19.614205   69333 logs.go:276] 0 containers: []
	W0927 01:43:19.614215   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:19.614223   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:19.614286   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:19.648276   69333 cri.go:89] found id: ""
	I0927 01:43:19.648307   69333 logs.go:276] 0 containers: []
	W0927 01:43:19.648318   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:19.648330   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:19.648390   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:19.683051   69333 cri.go:89] found id: ""
	I0927 01:43:19.683083   69333 logs.go:276] 0 containers: []
	W0927 01:43:19.683094   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:19.683114   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:19.683166   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:19.716664   69333 cri.go:89] found id: ""
	I0927 01:43:19.716686   69333 logs.go:276] 0 containers: []
	W0927 01:43:19.716694   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:19.716700   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:19.716745   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:19.758948   69333 cri.go:89] found id: ""
	I0927 01:43:19.758969   69333 logs.go:276] 0 containers: []
	W0927 01:43:19.758976   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:19.758984   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:19.758996   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:19.797751   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:19.797777   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:19.853605   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:19.853635   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:19.867785   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:19.867815   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:19.950323   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:19.950350   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:19.950363   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:18.520291   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:20.520845   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:22.520886   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:20.808171   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:22.812047   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:21.043160   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:23.546462   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:22.555421   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:22.570013   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:22.570077   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:22.605007   69333 cri.go:89] found id: ""
	I0927 01:43:22.605034   69333 logs.go:276] 0 containers: []
	W0927 01:43:22.605055   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:22.605062   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:22.605122   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:22.640350   69333 cri.go:89] found id: ""
	I0927 01:43:22.640381   69333 logs.go:276] 0 containers: []
	W0927 01:43:22.640391   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:22.640406   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:22.640482   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:22.677464   69333 cri.go:89] found id: ""
	I0927 01:43:22.677489   69333 logs.go:276] 0 containers: []
	W0927 01:43:22.677499   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:22.677506   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:22.677567   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:22.721978   69333 cri.go:89] found id: ""
	I0927 01:43:22.722017   69333 logs.go:276] 0 containers: []
	W0927 01:43:22.722025   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:22.722032   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:22.722093   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:22.757694   69333 cri.go:89] found id: ""
	I0927 01:43:22.757720   69333 logs.go:276] 0 containers: []
	W0927 01:43:22.757729   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:22.757733   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:22.757781   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:22.793872   69333 cri.go:89] found id: ""
	I0927 01:43:22.793903   69333 logs.go:276] 0 containers: []
	W0927 01:43:22.793912   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:22.793920   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:22.793971   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:22.830620   69333 cri.go:89] found id: ""
	I0927 01:43:22.830652   69333 logs.go:276] 0 containers: []
	W0927 01:43:22.830662   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:22.830669   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:22.830732   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:22.867341   69333 cri.go:89] found id: ""
	I0927 01:43:22.867370   69333 logs.go:276] 0 containers: []
	W0927 01:43:22.867381   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:22.867392   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:22.867405   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:22.939592   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:22.939630   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:22.939654   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:23.016407   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:23.016447   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:23.058490   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:23.058522   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:23.109527   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:23.109560   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:25.626109   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:25.645254   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:25.645343   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:25.707951   69333 cri.go:89] found id: ""
	I0927 01:43:25.707979   69333 logs.go:276] 0 containers: []
	W0927 01:43:25.707989   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:25.707997   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:25.708057   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:25.771210   69333 cri.go:89] found id: ""
	I0927 01:43:25.771234   69333 logs.go:276] 0 containers: []
	W0927 01:43:25.771242   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:25.771248   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:25.771295   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:25.808206   69333 cri.go:89] found id: ""
	I0927 01:43:25.808235   69333 logs.go:276] 0 containers: []
	W0927 01:43:25.808245   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:25.808252   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:25.808311   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:25.842236   69333 cri.go:89] found id: ""
	I0927 01:43:25.842265   69333 logs.go:276] 0 containers: []
	W0927 01:43:25.842275   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:25.842283   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:25.842328   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:25.879220   69333 cri.go:89] found id: ""
	I0927 01:43:25.879248   69333 logs.go:276] 0 containers: []
	W0927 01:43:25.879256   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:25.879262   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:25.879333   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:25.913491   69333 cri.go:89] found id: ""
	I0927 01:43:25.913522   69333 logs.go:276] 0 containers: []
	W0927 01:43:25.913532   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:25.913537   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:25.913595   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:25.946867   69333 cri.go:89] found id: ""
	I0927 01:43:25.946887   69333 logs.go:276] 0 containers: []
	W0927 01:43:25.946894   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:25.946899   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:25.946943   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:25.983792   69333 cri.go:89] found id: ""
	I0927 01:43:25.983813   69333 logs.go:276] 0 containers: []
	W0927 01:43:25.983820   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:25.983828   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:25.983838   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:26.030169   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:26.030195   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:26.083242   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:26.083276   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:26.097109   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:26.097136   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:26.168675   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:26.168703   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:26.168715   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:24.521923   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:27.020053   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:25.308150   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:27.308307   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:29.308818   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:26.042436   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:28.541895   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:30.542444   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:28.750349   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:28.765211   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:28.765269   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:28.804760   69333 cri.go:89] found id: ""
	I0927 01:43:28.804784   69333 logs.go:276] 0 containers: []
	W0927 01:43:28.804792   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:28.804798   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:28.804865   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:28.842576   69333 cri.go:89] found id: ""
	I0927 01:43:28.842597   69333 logs.go:276] 0 containers: []
	W0927 01:43:28.842604   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:28.842612   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:28.842674   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:28.877498   69333 cri.go:89] found id: ""
	I0927 01:43:28.877529   69333 logs.go:276] 0 containers: []
	W0927 01:43:28.877541   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:28.877553   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:28.877615   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:28.912583   69333 cri.go:89] found id: ""
	I0927 01:43:28.912609   69333 logs.go:276] 0 containers: []
	W0927 01:43:28.912620   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:28.912627   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:28.912689   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:28.947995   69333 cri.go:89] found id: ""
	I0927 01:43:28.948019   69333 logs.go:276] 0 containers: []
	W0927 01:43:28.948030   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:28.948037   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:28.948135   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:28.984445   69333 cri.go:89] found id: ""
	I0927 01:43:28.984470   69333 logs.go:276] 0 containers: []
	W0927 01:43:28.984480   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:28.984488   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:28.984551   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:29.020345   69333 cri.go:89] found id: ""
	I0927 01:43:29.020374   69333 logs.go:276] 0 containers: []
	W0927 01:43:29.020385   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:29.020392   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:29.020451   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:29.056204   69333 cri.go:89] found id: ""
	I0927 01:43:29.056234   69333 logs.go:276] 0 containers: []
	W0927 01:43:29.056245   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:29.056257   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:29.056270   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:29.127936   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:29.127963   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:29.127980   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:29.205933   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:29.205981   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:29.248745   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:29.248777   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:29.302316   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:29.302348   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:31.817566   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:31.831179   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:31.831253   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:31.868480   69333 cri.go:89] found id: ""
	I0927 01:43:31.868507   69333 logs.go:276] 0 containers: []
	W0927 01:43:31.868517   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:31.868528   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:31.868588   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:31.901656   69333 cri.go:89] found id: ""
	I0927 01:43:31.901684   69333 logs.go:276] 0 containers: []
	W0927 01:43:31.901694   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:31.901701   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:31.901761   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:31.937101   69333 cri.go:89] found id: ""
	I0927 01:43:31.937133   69333 logs.go:276] 0 containers: []
	W0927 01:43:31.937145   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:31.937153   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:31.937210   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:31.970724   69333 cri.go:89] found id: ""
	I0927 01:43:31.970750   69333 logs.go:276] 0 containers: []
	W0927 01:43:31.970761   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:31.970768   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:31.970835   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:32.003704   69333 cri.go:89] found id: ""
	I0927 01:43:32.003736   69333 logs.go:276] 0 containers: []
	W0927 01:43:32.003747   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:32.003754   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:32.003813   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:32.038840   69333 cri.go:89] found id: ""
	I0927 01:43:32.038869   69333 logs.go:276] 0 containers: []
	W0927 01:43:32.038879   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:32.038886   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:32.038946   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:32.075506   69333 cri.go:89] found id: ""
	I0927 01:43:32.075534   69333 logs.go:276] 0 containers: []
	W0927 01:43:32.075545   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:32.075552   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:32.075603   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:32.112983   69333 cri.go:89] found id: ""
	I0927 01:43:32.113009   69333 logs.go:276] 0 containers: []
	W0927 01:43:32.113020   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:32.113031   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:32.113046   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:32.168192   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:32.168227   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:32.182702   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:32.182727   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:32.255797   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:32.255824   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:32.255835   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:32.336083   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:32.336115   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:29.022764   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:31.520495   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:31.308851   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:33.807870   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:33.041600   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:35.042193   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:34.880981   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:34.894904   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:34.894976   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:34.933459   69333 cri.go:89] found id: ""
	I0927 01:43:34.933482   69333 logs.go:276] 0 containers: []
	W0927 01:43:34.933490   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:34.933498   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:34.933555   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:34.966893   69333 cri.go:89] found id: ""
	I0927 01:43:34.966917   69333 logs.go:276] 0 containers: []
	W0927 01:43:34.966926   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:34.966933   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:34.966992   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:35.002878   69333 cri.go:89] found id: ""
	I0927 01:43:35.002899   69333 logs.go:276] 0 containers: []
	W0927 01:43:35.002907   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:35.002912   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:35.002970   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:35.039871   69333 cri.go:89] found id: ""
	I0927 01:43:35.039898   69333 logs.go:276] 0 containers: []
	W0927 01:43:35.039908   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:35.039915   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:35.039977   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:35.078229   69333 cri.go:89] found id: ""
	I0927 01:43:35.078255   69333 logs.go:276] 0 containers: []
	W0927 01:43:35.078267   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:35.078274   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:35.078342   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:35.114369   69333 cri.go:89] found id: ""
	I0927 01:43:35.114397   69333 logs.go:276] 0 containers: []
	W0927 01:43:35.114408   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:35.114415   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:35.114475   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:35.148072   69333 cri.go:89] found id: ""
	I0927 01:43:35.148100   69333 logs.go:276] 0 containers: []
	W0927 01:43:35.148110   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:35.148117   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:35.148188   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:35.184020   69333 cri.go:89] found id: ""
	I0927 01:43:35.184051   69333 logs.go:276] 0 containers: []
	W0927 01:43:35.184062   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:35.184073   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:35.184086   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:35.197332   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:35.197355   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:35.273860   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:35.273889   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:35.273904   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:35.354647   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:35.354680   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:35.392622   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:35.392651   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:33.521889   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:36.020067   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:38.021354   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:35.808365   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:38.307251   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:37.541793   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:40.043418   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:37.943024   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:37.957265   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:37.957329   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:37.991294   69333 cri.go:89] found id: ""
	I0927 01:43:37.991348   69333 logs.go:276] 0 containers: []
	W0927 01:43:37.991362   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:37.991368   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:37.991421   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:38.026960   69333 cri.go:89] found id: ""
	I0927 01:43:38.026981   69333 logs.go:276] 0 containers: []
	W0927 01:43:38.026990   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:38.026998   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:38.027057   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:38.063540   69333 cri.go:89] found id: ""
	I0927 01:43:38.063563   69333 logs.go:276] 0 containers: []
	W0927 01:43:38.063571   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:38.063576   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:38.063627   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:38.099554   69333 cri.go:89] found id: ""
	I0927 01:43:38.099602   69333 logs.go:276] 0 containers: []
	W0927 01:43:38.099613   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:38.099621   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:38.099689   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:38.136576   69333 cri.go:89] found id: ""
	I0927 01:43:38.136604   69333 logs.go:276] 0 containers: []
	W0927 01:43:38.136615   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:38.136623   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:38.136676   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:38.170411   69333 cri.go:89] found id: ""
	I0927 01:43:38.170441   69333 logs.go:276] 0 containers: []
	W0927 01:43:38.170452   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:38.170458   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:38.170512   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:38.211902   69333 cri.go:89] found id: ""
	I0927 01:43:38.211934   69333 logs.go:276] 0 containers: []
	W0927 01:43:38.211945   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:38.211951   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:38.212007   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:38.247850   69333 cri.go:89] found id: ""
	I0927 01:43:38.247875   69333 logs.go:276] 0 containers: []
	W0927 01:43:38.247885   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:38.247895   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:38.247913   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:38.329353   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:38.329384   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:38.369114   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:38.369148   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:38.420578   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:38.420613   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:38.434019   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:38.434050   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:38.517921   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:41.018609   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:41.032308   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:41.032370   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:41.068491   69333 cri.go:89] found id: ""
	I0927 01:43:41.068518   69333 logs.go:276] 0 containers: []
	W0927 01:43:41.068529   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:41.068536   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:41.068597   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:41.106527   69333 cri.go:89] found id: ""
	I0927 01:43:41.106555   69333 logs.go:276] 0 containers: []
	W0927 01:43:41.106565   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:41.106571   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:41.106634   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:41.142846   69333 cri.go:89] found id: ""
	I0927 01:43:41.142870   69333 logs.go:276] 0 containers: []
	W0927 01:43:41.142880   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:41.142887   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:41.142949   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:41.187499   69333 cri.go:89] found id: ""
	I0927 01:43:41.187525   69333 logs.go:276] 0 containers: []
	W0927 01:43:41.187536   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:41.187544   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:41.187606   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:41.226040   69333 cri.go:89] found id: ""
	I0927 01:43:41.226063   69333 logs.go:276] 0 containers: []
	W0927 01:43:41.226070   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:41.226076   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:41.226153   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:41.261399   69333 cri.go:89] found id: ""
	I0927 01:43:41.261429   69333 logs.go:276] 0 containers: []
	W0927 01:43:41.261440   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:41.261446   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:41.261493   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:41.300709   69333 cri.go:89] found id: ""
	I0927 01:43:41.300730   69333 logs.go:276] 0 containers: []
	W0927 01:43:41.300737   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:41.300743   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:41.300799   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:41.335725   69333 cri.go:89] found id: ""
	I0927 01:43:41.335751   69333 logs.go:276] 0 containers: []
	W0927 01:43:41.335759   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:41.335767   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:41.335776   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:41.387756   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:41.387788   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:41.401717   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:41.401743   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:41.479524   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:41.479548   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:41.479562   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:41.559926   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:41.559959   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:40.520642   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:42.521344   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:40.307769   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:42.807328   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:42.541384   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:44.548925   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:44.107615   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:44.122628   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:44.122690   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:44.163496   69333 cri.go:89] found id: ""
	I0927 01:43:44.163521   69333 logs.go:276] 0 containers: []
	W0927 01:43:44.163529   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:44.163541   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:44.163588   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:44.203488   69333 cri.go:89] found id: ""
	I0927 01:43:44.203519   69333 logs.go:276] 0 containers: []
	W0927 01:43:44.203529   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:44.203535   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:44.203600   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:44.238111   69333 cri.go:89] found id: ""
	I0927 01:43:44.238141   69333 logs.go:276] 0 containers: []
	W0927 01:43:44.238148   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:44.238154   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:44.238221   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:44.272954   69333 cri.go:89] found id: ""
	I0927 01:43:44.272981   69333 logs.go:276] 0 containers: []
	W0927 01:43:44.272991   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:44.272998   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:44.273057   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:44.309700   69333 cri.go:89] found id: ""
	I0927 01:43:44.309719   69333 logs.go:276] 0 containers: []
	W0927 01:43:44.309726   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:44.309731   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:44.309776   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:44.344532   69333 cri.go:89] found id: ""
	I0927 01:43:44.344563   69333 logs.go:276] 0 containers: []
	W0927 01:43:44.344573   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:44.344580   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:44.344641   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:44.379354   69333 cri.go:89] found id: ""
	I0927 01:43:44.379380   69333 logs.go:276] 0 containers: []
	W0927 01:43:44.379391   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:44.379399   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:44.379461   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:44.415297   69333 cri.go:89] found id: ""
	I0927 01:43:44.415344   69333 logs.go:276] 0 containers: []
	W0927 01:43:44.415356   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:44.415366   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:44.415381   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:44.468570   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:44.468602   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:44.483419   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:44.483453   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:44.560718   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:44.560737   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:44.560753   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:44.641130   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:44.641173   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:47.188520   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:47.202189   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:47.202262   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:47.243051   69333 cri.go:89] found id: ""
	I0927 01:43:47.243075   69333 logs.go:276] 0 containers: []
	W0927 01:43:47.243083   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:47.243089   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:47.243155   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:47.280071   69333 cri.go:89] found id: ""
	I0927 01:43:47.280094   69333 logs.go:276] 0 containers: []
	W0927 01:43:47.280104   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:47.280111   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:47.280170   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:47.318458   69333 cri.go:89] found id: ""
	I0927 01:43:47.318482   69333 logs.go:276] 0 containers: []
	W0927 01:43:47.318492   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:47.318499   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:47.318551   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:45.023799   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:47.522945   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:45.307910   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:47.309781   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:49.807329   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:47.041371   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:49.042307   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:47.352891   69333 cri.go:89] found id: ""
	I0927 01:43:47.352916   69333 logs.go:276] 0 containers: []
	W0927 01:43:47.352926   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:47.352933   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:47.352997   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:47.387534   69333 cri.go:89] found id: ""
	I0927 01:43:47.387560   69333 logs.go:276] 0 containers: []
	W0927 01:43:47.387569   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:47.387578   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:47.387646   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:47.422221   69333 cri.go:89] found id: ""
	I0927 01:43:47.422254   69333 logs.go:276] 0 containers: []
	W0927 01:43:47.422265   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:47.422273   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:47.422330   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:47.459624   69333 cri.go:89] found id: ""
	I0927 01:43:47.459645   69333 logs.go:276] 0 containers: []
	W0927 01:43:47.459653   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:47.459659   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:47.459706   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:47.494322   69333 cri.go:89] found id: ""
	I0927 01:43:47.494347   69333 logs.go:276] 0 containers: []
	W0927 01:43:47.494355   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:47.494363   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:47.494375   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:47.508031   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:47.508056   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:47.583920   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:47.583952   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:47.583968   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:47.665533   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:47.665568   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:47.708423   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:47.708455   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:50.261602   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:50.275548   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:50.275607   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:50.311583   69333 cri.go:89] found id: ""
	I0927 01:43:50.311610   69333 logs.go:276] 0 containers: []
	W0927 01:43:50.311620   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:50.311627   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:50.311687   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:50.347686   69333 cri.go:89] found id: ""
	I0927 01:43:50.347709   69333 logs.go:276] 0 containers: []
	W0927 01:43:50.347721   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:50.347729   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:50.347778   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:50.386627   69333 cri.go:89] found id: ""
	I0927 01:43:50.386654   69333 logs.go:276] 0 containers: []
	W0927 01:43:50.386663   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:50.386669   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:50.386719   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:50.421512   69333 cri.go:89] found id: ""
	I0927 01:43:50.421538   69333 logs.go:276] 0 containers: []
	W0927 01:43:50.421547   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:50.421552   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:50.421603   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:50.461849   69333 cri.go:89] found id: ""
	I0927 01:43:50.461872   69333 logs.go:276] 0 containers: []
	W0927 01:43:50.461880   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:50.461885   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:50.461941   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:50.496517   69333 cri.go:89] found id: ""
	I0927 01:43:50.496540   69333 logs.go:276] 0 containers: []
	W0927 01:43:50.496548   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:50.496554   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:50.496600   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:50.532595   69333 cri.go:89] found id: ""
	I0927 01:43:50.532619   69333 logs.go:276] 0 containers: []
	W0927 01:43:50.532630   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:50.532638   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:50.532687   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:50.573213   69333 cri.go:89] found id: ""
	I0927 01:43:50.573241   69333 logs.go:276] 0 containers: []
	W0927 01:43:50.573252   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:50.573262   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:50.573275   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:50.625600   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:50.625633   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:50.639512   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:50.639535   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:50.708393   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:50.708415   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:50.708436   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:50.789812   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:50.789845   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:50.020837   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:52.021314   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:51.807713   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:54.308918   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:51.541348   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:53.542994   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:53.335858   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:53.349369   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:53.349441   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:53.386922   69333 cri.go:89] found id: ""
	I0927 01:43:53.386947   69333 logs.go:276] 0 containers: []
	W0927 01:43:53.386955   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:53.386961   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:53.387007   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:53.423614   69333 cri.go:89] found id: ""
	I0927 01:43:53.423640   69333 logs.go:276] 0 containers: []
	W0927 01:43:53.423651   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:53.423658   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:53.423721   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:53.463245   69333 cri.go:89] found id: ""
	I0927 01:43:53.463265   69333 logs.go:276] 0 containers: []
	W0927 01:43:53.463273   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:53.463280   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:53.463344   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:53.502093   69333 cri.go:89] found id: ""
	I0927 01:43:53.502123   69333 logs.go:276] 0 containers: []
	W0927 01:43:53.502133   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:53.502140   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:53.502196   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:53.538616   69333 cri.go:89] found id: ""
	I0927 01:43:53.538641   69333 logs.go:276] 0 containers: []
	W0927 01:43:53.538652   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:53.538659   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:53.538716   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:53.578580   69333 cri.go:89] found id: ""
	I0927 01:43:53.578609   69333 logs.go:276] 0 containers: []
	W0927 01:43:53.578617   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:53.578623   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:53.578685   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:53.615240   69333 cri.go:89] found id: ""
	I0927 01:43:53.615266   69333 logs.go:276] 0 containers: []
	W0927 01:43:53.615275   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:53.615282   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:53.615356   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:53.650987   69333 cri.go:89] found id: ""
	I0927 01:43:53.651011   69333 logs.go:276] 0 containers: []
	W0927 01:43:53.651019   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:53.651028   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:53.651038   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:53.664817   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:53.664841   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:53.737875   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:53.737894   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:53.737909   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:53.827293   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:53.827345   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:53.867157   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:53.867188   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:56.423435   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:56.437837   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:56.437912   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:56.480328   69333 cri.go:89] found id: ""
	I0927 01:43:56.480349   69333 logs.go:276] 0 containers: []
	W0927 01:43:56.480357   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:56.480364   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:56.480427   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:56.520627   69333 cri.go:89] found id: ""
	I0927 01:43:56.520651   69333 logs.go:276] 0 containers: []
	W0927 01:43:56.520660   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:56.520667   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:56.520726   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:56.561527   69333 cri.go:89] found id: ""
	I0927 01:43:56.561555   69333 logs.go:276] 0 containers: []
	W0927 01:43:56.561567   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:56.561574   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:56.561634   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:56.598751   69333 cri.go:89] found id: ""
	I0927 01:43:56.598783   69333 logs.go:276] 0 containers: []
	W0927 01:43:56.598794   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:56.598801   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:56.598861   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:56.634378   69333 cri.go:89] found id: ""
	I0927 01:43:56.634410   69333 logs.go:276] 0 containers: []
	W0927 01:43:56.634422   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:56.634429   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:56.634489   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:56.669819   69333 cri.go:89] found id: ""
	I0927 01:43:56.669852   69333 logs.go:276] 0 containers: []
	W0927 01:43:56.669863   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:56.669877   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:56.669929   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:56.703715   69333 cri.go:89] found id: ""
	I0927 01:43:56.703740   69333 logs.go:276] 0 containers: []
	W0927 01:43:56.703750   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:56.703757   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:56.703820   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:56.737208   69333 cri.go:89] found id: ""
	I0927 01:43:56.737234   69333 logs.go:276] 0 containers: []
	W0927 01:43:56.737245   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:56.737255   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:56.737269   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:56.749933   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:56.749960   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:56.822331   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:56.822353   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:56.822369   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:56.904415   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:56.904454   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:56.947108   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:56.947136   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:54.521004   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:56.521281   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:56.807935   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:58.808046   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:56.041831   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:58.042496   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:00.542924   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:59.500580   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:59.523807   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:59.523888   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:59.562931   69333 cri.go:89] found id: ""
	I0927 01:43:59.562955   69333 logs.go:276] 0 containers: []
	W0927 01:43:59.562963   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:59.562968   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:59.563013   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:59.599321   69333 cri.go:89] found id: ""
	I0927 01:43:59.599348   69333 logs.go:276] 0 containers: []
	W0927 01:43:59.599358   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:59.599363   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:59.599418   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:59.634404   69333 cri.go:89] found id: ""
	I0927 01:43:59.634431   69333 logs.go:276] 0 containers: []
	W0927 01:43:59.634441   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:59.634448   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:59.634498   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:59.672022   69333 cri.go:89] found id: ""
	I0927 01:43:59.672052   69333 logs.go:276] 0 containers: []
	W0927 01:43:59.672066   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:59.672074   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:59.672134   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:59.704617   69333 cri.go:89] found id: ""
	I0927 01:43:59.704647   69333 logs.go:276] 0 containers: []
	W0927 01:43:59.704657   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:59.704664   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:59.704712   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:59.740479   69333 cri.go:89] found id: ""
	I0927 01:43:59.740504   69333 logs.go:276] 0 containers: []
	W0927 01:43:59.740512   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:59.740517   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:59.740579   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:59.777123   69333 cri.go:89] found id: ""
	I0927 01:43:59.777155   69333 logs.go:276] 0 containers: []
	W0927 01:43:59.777166   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:59.777174   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:59.777234   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:59.817780   69333 cri.go:89] found id: ""
	I0927 01:43:59.817803   69333 logs.go:276] 0 containers: []
	W0927 01:43:59.817825   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:59.817841   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:59.817856   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:59.831252   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:59.831278   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:59.901912   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:59.901936   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:59.901949   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:59.983001   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:59.983034   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:00.030989   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:00.031020   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:59.020139   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:01.020925   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:01.306853   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:03.308075   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:03.042494   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:05.043814   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:02.583949   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:02.596723   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:02.596798   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:02.630927   69333 cri.go:89] found id: ""
	I0927 01:44:02.630953   69333 logs.go:276] 0 containers: []
	W0927 01:44:02.630962   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:02.630967   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:02.631012   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:02.664156   69333 cri.go:89] found id: ""
	I0927 01:44:02.664186   69333 logs.go:276] 0 containers: []
	W0927 01:44:02.664198   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:02.664205   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:02.664259   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:02.698823   69333 cri.go:89] found id: ""
	I0927 01:44:02.698847   69333 logs.go:276] 0 containers: []
	W0927 01:44:02.698860   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:02.698865   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:02.698913   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:02.736114   69333 cri.go:89] found id: ""
	I0927 01:44:02.736142   69333 logs.go:276] 0 containers: []
	W0927 01:44:02.736154   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:02.736161   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:02.736221   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:02.769739   69333 cri.go:89] found id: ""
	I0927 01:44:02.769763   69333 logs.go:276] 0 containers: []
	W0927 01:44:02.769771   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:02.769785   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:02.769844   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:02.804798   69333 cri.go:89] found id: ""
	I0927 01:44:02.804871   69333 logs.go:276] 0 containers: []
	W0927 01:44:02.804887   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:02.804898   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:02.804958   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:02.841197   69333 cri.go:89] found id: ""
	I0927 01:44:02.841224   69333 logs.go:276] 0 containers: []
	W0927 01:44:02.841236   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:02.841243   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:02.841301   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:02.881278   69333 cri.go:89] found id: ""
	I0927 01:44:02.881310   69333 logs.go:276] 0 containers: []
	W0927 01:44:02.881321   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:02.881331   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:02.881345   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:02.935149   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:02.935183   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:02.950245   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:02.950273   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:03.020241   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:03.020263   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:03.020277   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:03.104467   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:03.104503   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:05.643070   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:05.656656   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:05.656716   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:05.694022   69333 cri.go:89] found id: ""
	I0927 01:44:05.694045   69333 logs.go:276] 0 containers: []
	W0927 01:44:05.694053   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:05.694059   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:05.694123   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:05.728575   69333 cri.go:89] found id: ""
	I0927 01:44:05.728600   69333 logs.go:276] 0 containers: []
	W0927 01:44:05.728607   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:05.728613   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:05.728667   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:05.768546   69333 cri.go:89] found id: ""
	I0927 01:44:05.768572   69333 logs.go:276] 0 containers: []
	W0927 01:44:05.768583   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:05.768590   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:05.768652   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:05.809504   69333 cri.go:89] found id: ""
	I0927 01:44:05.809527   69333 logs.go:276] 0 containers: []
	W0927 01:44:05.809536   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:05.809543   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:05.809600   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:05.846387   69333 cri.go:89] found id: ""
	I0927 01:44:05.846415   69333 logs.go:276] 0 containers: []
	W0927 01:44:05.846422   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:05.846428   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:05.846479   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:05.879579   69333 cri.go:89] found id: ""
	I0927 01:44:05.879608   69333 logs.go:276] 0 containers: []
	W0927 01:44:05.879619   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:05.879626   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:05.879684   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:05.928932   69333 cri.go:89] found id: ""
	I0927 01:44:05.928961   69333 logs.go:276] 0 containers: []
	W0927 01:44:05.928970   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:05.928978   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:05.929037   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:05.986463   69333 cri.go:89] found id: ""
	I0927 01:44:05.986490   69333 logs.go:276] 0 containers: []
	W0927 01:44:05.986499   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:05.986507   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:05.986521   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:06.039984   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:06.040011   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:06.053025   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:06.053051   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:06.127277   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:06.127316   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:06.127330   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:06.201473   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:06.201504   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:03.520539   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:06.021584   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:05.808474   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:08.307407   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:07.542959   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:10.042223   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:08.739339   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:08.753354   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:08.753418   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:08.788513   69333 cri.go:89] found id: ""
	I0927 01:44:08.788544   69333 logs.go:276] 0 containers: []
	W0927 01:44:08.788556   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:08.788563   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:08.788648   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:08.824615   69333 cri.go:89] found id: ""
	I0927 01:44:08.824642   69333 logs.go:276] 0 containers: []
	W0927 01:44:08.824653   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:08.824661   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:08.824724   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:08.858327   69333 cri.go:89] found id: ""
	I0927 01:44:08.858354   69333 logs.go:276] 0 containers: []
	W0927 01:44:08.858365   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:08.858372   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:08.858430   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:08.896140   69333 cri.go:89] found id: ""
	I0927 01:44:08.896168   69333 logs.go:276] 0 containers: []
	W0927 01:44:08.896175   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:08.896181   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:08.896229   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:08.931525   69333 cri.go:89] found id: ""
	I0927 01:44:08.931547   69333 logs.go:276] 0 containers: []
	W0927 01:44:08.931554   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:08.931560   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:08.931618   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:08.970224   69333 cri.go:89] found id: ""
	I0927 01:44:08.970246   69333 logs.go:276] 0 containers: []
	W0927 01:44:08.970256   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:08.970263   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:08.970331   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:09.007213   69333 cri.go:89] found id: ""
	I0927 01:44:09.007240   69333 logs.go:276] 0 containers: []
	W0927 01:44:09.007248   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:09.007255   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:09.007334   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:09.043078   69333 cri.go:89] found id: ""
	I0927 01:44:09.043111   69333 logs.go:276] 0 containers: []
	W0927 01:44:09.043122   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:09.043132   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:09.043147   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:09.096768   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:09.096801   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:09.110721   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:09.110747   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:09.182966   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:09.182990   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:09.183004   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:09.259497   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:09.259541   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:11.797307   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:11.812141   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:11.812196   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:11.846429   69333 cri.go:89] found id: ""
	I0927 01:44:11.846468   69333 logs.go:276] 0 containers: []
	W0927 01:44:11.846482   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:11.846489   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:11.846598   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:11.885294   69333 cri.go:89] found id: ""
	I0927 01:44:11.885322   69333 logs.go:276] 0 containers: []
	W0927 01:44:11.885333   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:11.885339   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:11.885398   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:11.920856   69333 cri.go:89] found id: ""
	I0927 01:44:11.920884   69333 logs.go:276] 0 containers: []
	W0927 01:44:11.920892   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:11.920898   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:11.920946   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:11.964540   69333 cri.go:89] found id: ""
	I0927 01:44:11.964564   69333 logs.go:276] 0 containers: []
	W0927 01:44:11.964574   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:11.964581   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:11.964634   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:12.000596   69333 cri.go:89] found id: ""
	I0927 01:44:12.000619   69333 logs.go:276] 0 containers: []
	W0927 01:44:12.000629   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:12.000636   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:12.000697   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:12.037773   69333 cri.go:89] found id: ""
	I0927 01:44:12.037808   69333 logs.go:276] 0 containers: []
	W0927 01:44:12.037819   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:12.037831   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:12.037893   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:12.074646   69333 cri.go:89] found id: ""
	I0927 01:44:12.074676   69333 logs.go:276] 0 containers: []
	W0927 01:44:12.074687   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:12.074692   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:12.074740   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:12.111771   69333 cri.go:89] found id: ""
	I0927 01:44:12.111802   69333 logs.go:276] 0 containers: []
	W0927 01:44:12.111813   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:12.111824   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:12.111837   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:12.160938   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:12.160971   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:12.175576   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:12.175605   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:12.245227   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:12.245263   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:12.245278   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:12.325161   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:12.325194   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:08.520111   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:10.520326   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:12.520755   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:10.808039   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:12.808843   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:12.042905   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:14.542272   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:14.867795   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:14.881053   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:14.881130   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:14.915193   69333 cri.go:89] found id: ""
	I0927 01:44:14.915224   69333 logs.go:276] 0 containers: []
	W0927 01:44:14.915234   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:14.915241   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:14.915318   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:14.951758   69333 cri.go:89] found id: ""
	I0927 01:44:14.951789   69333 logs.go:276] 0 containers: []
	W0927 01:44:14.951801   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:14.951808   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:14.951860   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:14.987875   69333 cri.go:89] found id: ""
	I0927 01:44:14.987906   69333 logs.go:276] 0 containers: []
	W0927 01:44:14.987917   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:14.987924   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:14.987985   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:15.025780   69333 cri.go:89] found id: ""
	I0927 01:44:15.025810   69333 logs.go:276] 0 containers: []
	W0927 01:44:15.025820   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:15.025828   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:15.025884   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:15.062135   69333 cri.go:89] found id: ""
	I0927 01:44:15.062157   69333 logs.go:276] 0 containers: []
	W0927 01:44:15.062165   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:15.062172   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:15.062225   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:15.097090   69333 cri.go:89] found id: ""
	I0927 01:44:15.097112   69333 logs.go:276] 0 containers: []
	W0927 01:44:15.097119   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:15.097126   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:15.097170   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:15.130528   69333 cri.go:89] found id: ""
	I0927 01:44:15.130552   69333 logs.go:276] 0 containers: []
	W0927 01:44:15.130561   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:15.130569   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:15.130615   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:15.165422   69333 cri.go:89] found id: ""
	I0927 01:44:15.165450   69333 logs.go:276] 0 containers: []
	W0927 01:44:15.165457   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:15.165465   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:15.165474   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:15.214612   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:15.214651   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:15.230294   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:15.230318   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:15.303339   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:15.303362   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:15.303375   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:15.382046   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:15.382081   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:14.520833   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:17.021034   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:15.308397   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:17.808221   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:16.542334   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:18.543785   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:17.923331   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:17.937693   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:17.937765   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:17.972677   69333 cri.go:89] found id: ""
	I0927 01:44:17.972699   69333 logs.go:276] 0 containers: []
	W0927 01:44:17.972707   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:17.972714   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:17.972764   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:18.004818   69333 cri.go:89] found id: ""
	I0927 01:44:18.004846   69333 logs.go:276] 0 containers: []
	W0927 01:44:18.004854   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:18.004860   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:18.004907   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:18.044693   69333 cri.go:89] found id: ""
	I0927 01:44:18.044716   69333 logs.go:276] 0 containers: []
	W0927 01:44:18.044723   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:18.044728   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:18.044772   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:18.079205   69333 cri.go:89] found id: ""
	I0927 01:44:18.079235   69333 logs.go:276] 0 containers: []
	W0927 01:44:18.079244   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:18.079249   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:18.079299   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:18.115272   69333 cri.go:89] found id: ""
	I0927 01:44:18.115322   69333 logs.go:276] 0 containers: []
	W0927 01:44:18.115335   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:18.115343   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:18.115412   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:18.150165   69333 cri.go:89] found id: ""
	I0927 01:44:18.150195   69333 logs.go:276] 0 containers: []
	W0927 01:44:18.150206   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:18.150213   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:18.150275   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:18.184971   69333 cri.go:89] found id: ""
	I0927 01:44:18.184999   69333 logs.go:276] 0 containers: []
	W0927 01:44:18.185009   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:18.185016   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:18.185083   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:18.219955   69333 cri.go:89] found id: ""
	I0927 01:44:18.219985   69333 logs.go:276] 0 containers: []
	W0927 01:44:18.219997   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:18.220008   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:18.220020   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:18.269713   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:18.269748   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:18.285224   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:18.285251   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:18.364887   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:18.364912   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:18.364927   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:18.450667   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:18.450706   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:20.991648   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:21.006472   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:21.006529   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:21.043455   69333 cri.go:89] found id: ""
	I0927 01:44:21.043476   69333 logs.go:276] 0 containers: []
	W0927 01:44:21.043486   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:21.043493   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:21.043549   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:21.080365   69333 cri.go:89] found id: ""
	I0927 01:44:21.080391   69333 logs.go:276] 0 containers: []
	W0927 01:44:21.080399   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:21.080405   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:21.080449   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:21.117576   69333 cri.go:89] found id: ""
	I0927 01:44:21.117624   69333 logs.go:276] 0 containers: []
	W0927 01:44:21.117636   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:21.117642   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:21.117703   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:21.154538   69333 cri.go:89] found id: ""
	I0927 01:44:21.154564   69333 logs.go:276] 0 containers: []
	W0927 01:44:21.154576   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:21.154584   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:21.154638   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:21.190046   69333 cri.go:89] found id: ""
	I0927 01:44:21.190070   69333 logs.go:276] 0 containers: []
	W0927 01:44:21.190080   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:21.190086   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:21.190147   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:21.226383   69333 cri.go:89] found id: ""
	I0927 01:44:21.226407   69333 logs.go:276] 0 containers: []
	W0927 01:44:21.226417   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:21.226424   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:21.226485   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:21.262090   69333 cri.go:89] found id: ""
	I0927 01:44:21.262113   69333 logs.go:276] 0 containers: []
	W0927 01:44:21.262124   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:21.262132   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:21.262188   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:21.297675   69333 cri.go:89] found id: ""
	I0927 01:44:21.297697   69333 logs.go:276] 0 containers: []
	W0927 01:44:21.297706   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:21.297716   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:21.297728   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:21.349668   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:21.349705   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:21.364608   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:21.364635   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:21.432570   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:21.432596   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:21.432612   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:21.507616   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:21.507661   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:19.520792   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:21.521341   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:20.307600   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:22.308557   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:24.807578   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:21.041736   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:23.041809   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:25.540974   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:24.054212   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:24.067954   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:24.068014   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:24.107017   69333 cri.go:89] found id: ""
	I0927 01:44:24.107045   69333 logs.go:276] 0 containers: []
	W0927 01:44:24.107056   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:24.107063   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:24.107124   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:24.144373   69333 cri.go:89] found id: ""
	I0927 01:44:24.144398   69333 logs.go:276] 0 containers: []
	W0927 01:44:24.144406   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:24.144411   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:24.144473   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:24.180010   69333 cri.go:89] found id: ""
	I0927 01:44:24.180038   69333 logs.go:276] 0 containers: []
	W0927 01:44:24.180048   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:24.180056   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:24.180118   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:24.214387   69333 cri.go:89] found id: ""
	I0927 01:44:24.214413   69333 logs.go:276] 0 containers: []
	W0927 01:44:24.214421   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:24.214426   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:24.214472   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:24.252597   69333 cri.go:89] found id: ""
	I0927 01:44:24.252623   69333 logs.go:276] 0 containers: []
	W0927 01:44:24.252631   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:24.252643   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:24.252705   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:24.292044   69333 cri.go:89] found id: ""
	I0927 01:44:24.292072   69333 logs.go:276] 0 containers: []
	W0927 01:44:24.292082   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:24.292089   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:24.292158   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:24.329899   69333 cri.go:89] found id: ""
	I0927 01:44:24.329924   69333 logs.go:276] 0 containers: []
	W0927 01:44:24.329934   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:24.329940   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:24.329998   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:24.367964   69333 cri.go:89] found id: ""
	I0927 01:44:24.367989   69333 logs.go:276] 0 containers: []
	W0927 01:44:24.368000   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:24.368010   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:24.368025   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:24.384151   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:24.384184   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:24.456916   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:24.456940   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:24.456958   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:24.539362   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:24.539399   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:24.578384   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:24.578411   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:27.132700   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:27.146218   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:27.146294   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:27.180958   69333 cri.go:89] found id: ""
	I0927 01:44:27.180984   69333 logs.go:276] 0 containers: []
	W0927 01:44:27.180992   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:27.180997   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:27.181043   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:27.215213   69333 cri.go:89] found id: ""
	I0927 01:44:27.215236   69333 logs.go:276] 0 containers: []
	W0927 01:44:27.215243   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:27.215249   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:27.215293   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:27.258192   69333 cri.go:89] found id: ""
	I0927 01:44:27.258216   69333 logs.go:276] 0 containers: []
	W0927 01:44:27.258226   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:27.258233   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:27.258289   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:27.292717   69333 cri.go:89] found id: ""
	I0927 01:44:27.292742   69333 logs.go:276] 0 containers: []
	W0927 01:44:27.292753   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:27.292760   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:27.292818   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:27.328038   69333 cri.go:89] found id: ""
	I0927 01:44:27.328066   69333 logs.go:276] 0 containers: []
	W0927 01:44:27.328076   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:27.328083   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:27.328152   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:24.021885   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:26.520726   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:27.308923   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:29.807825   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:27.542683   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:30.042293   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:27.363513   69333 cri.go:89] found id: ""
	I0927 01:44:27.363539   69333 logs.go:276] 0 containers: []
	W0927 01:44:27.363548   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:27.363553   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:27.363610   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:27.402201   69333 cri.go:89] found id: ""
	I0927 01:44:27.402223   69333 logs.go:276] 0 containers: []
	W0927 01:44:27.402231   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:27.402237   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:27.402290   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:27.436952   69333 cri.go:89] found id: ""
	I0927 01:44:27.436979   69333 logs.go:276] 0 containers: []
	W0927 01:44:27.436987   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:27.436995   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:27.437009   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:27.487908   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:27.487938   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:27.502170   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:27.502199   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:27.583909   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:27.583931   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:27.583943   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:27.660248   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:27.660286   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:30.201211   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:30.214276   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:30.214350   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:30.252445   69333 cri.go:89] found id: ""
	I0927 01:44:30.252474   69333 logs.go:276] 0 containers: []
	W0927 01:44:30.252484   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:30.252490   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:30.252538   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:30.287574   69333 cri.go:89] found id: ""
	I0927 01:44:30.287603   69333 logs.go:276] 0 containers: []
	W0927 01:44:30.287614   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:30.287621   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:30.287693   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:30.324674   69333 cri.go:89] found id: ""
	I0927 01:44:30.324699   69333 logs.go:276] 0 containers: []
	W0927 01:44:30.324711   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:30.324718   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:30.324779   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:30.360493   69333 cri.go:89] found id: ""
	I0927 01:44:30.360521   69333 logs.go:276] 0 containers: []
	W0927 01:44:30.360531   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:30.360539   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:30.360640   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:30.396219   69333 cri.go:89] found id: ""
	I0927 01:44:30.396252   69333 logs.go:276] 0 containers: []
	W0927 01:44:30.396263   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:30.396270   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:30.396328   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:30.431524   69333 cri.go:89] found id: ""
	I0927 01:44:30.431546   69333 logs.go:276] 0 containers: []
	W0927 01:44:30.431558   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:30.431564   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:30.431607   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:30.465887   69333 cri.go:89] found id: ""
	I0927 01:44:30.465915   69333 logs.go:276] 0 containers: []
	W0927 01:44:30.465926   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:30.465933   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:30.466000   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:30.501364   69333 cri.go:89] found id: ""
	I0927 01:44:30.501391   69333 logs.go:276] 0 containers: []
	W0927 01:44:30.501402   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:30.501411   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:30.501425   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:30.556344   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:30.556377   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:30.572619   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:30.572649   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:30.645996   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:30.646020   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:30.646032   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:30.737458   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:30.737531   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:28.521312   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:30.521421   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:33.020699   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:31.807949   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:33.809414   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:32.045244   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:34.542035   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:33.284306   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:33.298164   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:33.298224   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:33.334599   69333 cri.go:89] found id: ""
	I0927 01:44:33.334625   69333 logs.go:276] 0 containers: []
	W0927 01:44:33.334634   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:33.334654   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:33.334718   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:33.369006   69333 cri.go:89] found id: ""
	I0927 01:44:33.369034   69333 logs.go:276] 0 containers: []
	W0927 01:44:33.369044   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:33.369051   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:33.369119   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:33.407875   69333 cri.go:89] found id: ""
	I0927 01:44:33.407904   69333 logs.go:276] 0 containers: []
	W0927 01:44:33.407912   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:33.407918   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:33.407974   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:33.441048   69333 cri.go:89] found id: ""
	I0927 01:44:33.441083   69333 logs.go:276] 0 containers: []
	W0927 01:44:33.441094   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:33.441101   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:33.441156   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:33.478458   69333 cri.go:89] found id: ""
	I0927 01:44:33.478503   69333 logs.go:276] 0 containers: []
	W0927 01:44:33.478515   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:33.478522   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:33.478586   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:33.513756   69333 cri.go:89] found id: ""
	I0927 01:44:33.513784   69333 logs.go:276] 0 containers: []
	W0927 01:44:33.513795   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:33.513802   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:33.513862   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:33.554351   69333 cri.go:89] found id: ""
	I0927 01:44:33.554392   69333 logs.go:276] 0 containers: []
	W0927 01:44:33.554403   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:33.554410   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:33.554472   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:33.588484   69333 cri.go:89] found id: ""
	I0927 01:44:33.588512   69333 logs.go:276] 0 containers: []
	W0927 01:44:33.588533   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:33.588544   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:33.588559   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:33.665735   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:33.665775   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:33.704654   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:33.704687   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:33.755444   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:33.755475   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:33.770069   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:33.770095   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:33.841531   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:36.341963   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:36.355219   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:36.355294   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:36.395149   69333 cri.go:89] found id: ""
	I0927 01:44:36.395185   69333 logs.go:276] 0 containers: []
	W0927 01:44:36.395196   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:36.395203   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:36.395262   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:36.434620   69333 cri.go:89] found id: ""
	I0927 01:44:36.434649   69333 logs.go:276] 0 containers: []
	W0927 01:44:36.434661   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:36.434667   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:36.434729   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:36.468328   69333 cri.go:89] found id: ""
	I0927 01:44:36.468349   69333 logs.go:276] 0 containers: []
	W0927 01:44:36.468357   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:36.468362   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:36.468427   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:36.506386   69333 cri.go:89] found id: ""
	I0927 01:44:36.506413   69333 logs.go:276] 0 containers: []
	W0927 01:44:36.506421   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:36.506427   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:36.506482   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:36.546583   69333 cri.go:89] found id: ""
	I0927 01:44:36.546607   69333 logs.go:276] 0 containers: []
	W0927 01:44:36.546614   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:36.546620   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:36.546665   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:36.581694   69333 cri.go:89] found id: ""
	I0927 01:44:36.581721   69333 logs.go:276] 0 containers: []
	W0927 01:44:36.581730   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:36.581737   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:36.581782   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:36.617775   69333 cri.go:89] found id: ""
	I0927 01:44:36.617799   69333 logs.go:276] 0 containers: []
	W0927 01:44:36.617807   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:36.617813   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:36.617877   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:36.654443   69333 cri.go:89] found id: ""
	I0927 01:44:36.654470   69333 logs.go:276] 0 containers: []
	W0927 01:44:36.654478   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:36.654486   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:36.654496   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:36.705787   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:36.705817   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:36.720643   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:36.720677   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:36.800037   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:36.800061   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:36.800091   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:36.886845   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:36.886884   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:35.023634   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:37.520794   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:36.307516   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:38.307899   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:37.041620   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:39.044257   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:39.429349   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:39.442899   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:39.442973   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:39.481752   69333 cri.go:89] found id: ""
	I0927 01:44:39.481782   69333 logs.go:276] 0 containers: []
	W0927 01:44:39.481793   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:39.481799   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:39.481858   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:39.516074   69333 cri.go:89] found id: ""
	I0927 01:44:39.516103   69333 logs.go:276] 0 containers: []
	W0927 01:44:39.516114   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:39.516130   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:39.516188   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:39.563351   69333 cri.go:89] found id: ""
	I0927 01:44:39.563375   69333 logs.go:276] 0 containers: []
	W0927 01:44:39.563386   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:39.563392   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:39.563455   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:39.601417   69333 cri.go:89] found id: ""
	I0927 01:44:39.601445   69333 logs.go:276] 0 containers: []
	W0927 01:44:39.601455   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:39.601469   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:39.601529   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:39.634537   69333 cri.go:89] found id: ""
	I0927 01:44:39.634565   69333 logs.go:276] 0 containers: []
	W0927 01:44:39.634576   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:39.634582   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:39.634642   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:39.668910   69333 cri.go:89] found id: ""
	I0927 01:44:39.668937   69333 logs.go:276] 0 containers: []
	W0927 01:44:39.668948   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:39.668955   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:39.669013   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:39.701992   69333 cri.go:89] found id: ""
	I0927 01:44:39.702014   69333 logs.go:276] 0 containers: []
	W0927 01:44:39.702021   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:39.702027   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:39.702074   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:39.741579   69333 cri.go:89] found id: ""
	I0927 01:44:39.741601   69333 logs.go:276] 0 containers: []
	W0927 01:44:39.741610   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:39.741618   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:39.741627   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:39.806476   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:39.806510   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:39.820228   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:39.820255   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:39.893137   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:39.893167   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:39.893181   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:39.974477   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:39.974514   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:40.021226   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:42.521217   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:40.309154   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:42.808724   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:41.542308   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:44.042015   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:42.517449   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:42.532200   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:42.532266   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:42.568872   69333 cri.go:89] found id: ""
	I0927 01:44:42.568901   69333 logs.go:276] 0 containers: []
	W0927 01:44:42.568911   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:42.568919   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:42.568980   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:42.605069   69333 cri.go:89] found id: ""
	I0927 01:44:42.605220   69333 logs.go:276] 0 containers: []
	W0927 01:44:42.605251   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:42.605261   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:42.605335   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:42.641637   69333 cri.go:89] found id: ""
	I0927 01:44:42.641665   69333 logs.go:276] 0 containers: []
	W0927 01:44:42.641673   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:42.641680   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:42.641742   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:42.677333   69333 cri.go:89] found id: ""
	I0927 01:44:42.677361   69333 logs.go:276] 0 containers: []
	W0927 01:44:42.677376   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:42.677382   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:42.677439   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:42.712456   69333 cri.go:89] found id: ""
	I0927 01:44:42.712484   69333 logs.go:276] 0 containers: []
	W0927 01:44:42.712495   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:42.712501   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:42.712565   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:42.745109   69333 cri.go:89] found id: ""
	I0927 01:44:42.745140   69333 logs.go:276] 0 containers: []
	W0927 01:44:42.745150   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:42.745157   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:42.745226   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:42.779427   69333 cri.go:89] found id: ""
	I0927 01:44:42.779449   69333 logs.go:276] 0 containers: []
	W0927 01:44:42.779457   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:42.779462   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:42.779508   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:42.823920   69333 cri.go:89] found id: ""
	I0927 01:44:42.823946   69333 logs.go:276] 0 containers: []
	W0927 01:44:42.823954   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:42.823963   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:42.823972   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:42.881345   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:42.881380   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:42.896076   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:42.896100   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:42.971775   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:42.971796   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:42.971809   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:43.054461   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:43.054494   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:45.596681   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:45.610817   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:45.610882   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:45.647628   69333 cri.go:89] found id: ""
	I0927 01:44:45.647654   69333 logs.go:276] 0 containers: []
	W0927 01:44:45.647662   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:45.647668   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:45.647715   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:45.685480   69333 cri.go:89] found id: ""
	I0927 01:44:45.685507   69333 logs.go:276] 0 containers: []
	W0927 01:44:45.685514   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:45.685520   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:45.685573   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:45.721601   69333 cri.go:89] found id: ""
	I0927 01:44:45.721624   69333 logs.go:276] 0 containers: []
	W0927 01:44:45.721632   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:45.721637   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:45.721700   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:45.756763   69333 cri.go:89] found id: ""
	I0927 01:44:45.756788   69333 logs.go:276] 0 containers: []
	W0927 01:44:45.756796   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:45.756802   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:45.756858   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:45.792891   69333 cri.go:89] found id: ""
	I0927 01:44:45.792917   69333 logs.go:276] 0 containers: []
	W0927 01:44:45.792927   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:45.792934   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:45.792996   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:45.828716   69333 cri.go:89] found id: ""
	I0927 01:44:45.828739   69333 logs.go:276] 0 containers: []
	W0927 01:44:45.828747   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:45.828753   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:45.828807   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:45.868813   69333 cri.go:89] found id: ""
	I0927 01:44:45.868840   69333 logs.go:276] 0 containers: []
	W0927 01:44:45.868848   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:45.868853   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:45.868905   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:45.907281   69333 cri.go:89] found id: ""
	I0927 01:44:45.907327   69333 logs.go:276] 0 containers: []
	W0927 01:44:45.907341   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:45.907352   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:45.907371   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:45.958539   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:45.958574   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:45.972540   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:45.972567   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:46.046083   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:46.046124   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:46.046141   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:46.124313   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:46.124349   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:45.021100   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:47.021435   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:45.307916   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:47.807187   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:49.809212   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:46.042143   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:48.541984   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:50.542678   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:48.673701   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:48.687673   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:48.687744   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:48.722269   69333 cri.go:89] found id: ""
	I0927 01:44:48.722291   69333 logs.go:276] 0 containers: []
	W0927 01:44:48.722302   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:48.722308   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:48.722370   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:48.758297   69333 cri.go:89] found id: ""
	I0927 01:44:48.758318   69333 logs.go:276] 0 containers: []
	W0927 01:44:48.758326   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:48.758331   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:48.758377   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:48.792706   69333 cri.go:89] found id: ""
	I0927 01:44:48.792730   69333 logs.go:276] 0 containers: []
	W0927 01:44:48.792738   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:48.792744   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:48.792792   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:48.827015   69333 cri.go:89] found id: ""
	I0927 01:44:48.827035   69333 logs.go:276] 0 containers: []
	W0927 01:44:48.827047   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:48.827052   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:48.827095   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:48.862538   69333 cri.go:89] found id: ""
	I0927 01:44:48.862564   69333 logs.go:276] 0 containers: []
	W0927 01:44:48.862572   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:48.862577   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:48.862632   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:48.896118   69333 cri.go:89] found id: ""
	I0927 01:44:48.896144   69333 logs.go:276] 0 containers: []
	W0927 01:44:48.896154   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:48.896166   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:48.896225   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:48.932483   69333 cri.go:89] found id: ""
	I0927 01:44:48.932511   69333 logs.go:276] 0 containers: []
	W0927 01:44:48.932519   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:48.932524   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:48.932576   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:48.971864   69333 cri.go:89] found id: ""
	I0927 01:44:48.971890   69333 logs.go:276] 0 containers: []
	W0927 01:44:48.971898   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:48.971906   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:48.971919   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:49.028163   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:49.028199   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:49.042780   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:49.042805   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:49.116454   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:49.116476   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:49.116491   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:49.196048   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:49.196084   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:51.735108   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:51.749191   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:51.749258   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:51.784776   69333 cri.go:89] found id: ""
	I0927 01:44:51.784804   69333 logs.go:276] 0 containers: []
	W0927 01:44:51.784815   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:51.784823   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:51.784880   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:51.822807   69333 cri.go:89] found id: ""
	I0927 01:44:51.822836   69333 logs.go:276] 0 containers: []
	W0927 01:44:51.822847   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:51.822854   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:51.822912   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:51.858700   69333 cri.go:89] found id: ""
	I0927 01:44:51.858726   69333 logs.go:276] 0 containers: []
	W0927 01:44:51.858737   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:51.858744   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:51.858812   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:51.894945   69333 cri.go:89] found id: ""
	I0927 01:44:51.894968   69333 logs.go:276] 0 containers: []
	W0927 01:44:51.894975   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:51.894980   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:51.895025   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:51.939475   69333 cri.go:89] found id: ""
	I0927 01:44:51.939503   69333 logs.go:276] 0 containers: []
	W0927 01:44:51.939518   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:51.939524   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:51.939569   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:51.982626   69333 cri.go:89] found id: ""
	I0927 01:44:51.982654   69333 logs.go:276] 0 containers: []
	W0927 01:44:51.982665   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:51.982673   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:51.982731   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:52.050446   69333 cri.go:89] found id: ""
	I0927 01:44:52.050473   69333 logs.go:276] 0 containers: []
	W0927 01:44:52.050483   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:52.050490   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:52.050549   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:52.092637   69333 cri.go:89] found id: ""
	I0927 01:44:52.092666   69333 logs.go:276] 0 containers: []
	W0927 01:44:52.092676   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:52.092686   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:52.092700   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:52.132135   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:52.132165   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:52.186537   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:52.186572   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:52.200001   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:52.200027   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:52.282068   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:52.282093   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:52.282108   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:49.521281   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:52.021229   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:52.308560   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:54.309001   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:53.042624   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:55.043212   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:54.866565   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:54.880400   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:54.880460   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:54.918963   69333 cri.go:89] found id: ""
	I0927 01:44:54.919004   69333 logs.go:276] 0 containers: []
	W0927 01:44:54.919027   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:54.919036   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:54.919107   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:54.959918   69333 cri.go:89] found id: ""
	I0927 01:44:54.959947   69333 logs.go:276] 0 containers: []
	W0927 01:44:54.959958   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:54.959965   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:54.960026   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:55.004348   69333 cri.go:89] found id: ""
	I0927 01:44:55.004370   69333 logs.go:276] 0 containers: []
	W0927 01:44:55.004378   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:55.004392   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:55.004446   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:55.045190   69333 cri.go:89] found id: ""
	I0927 01:44:55.045213   69333 logs.go:276] 0 containers: []
	W0927 01:44:55.045220   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:55.045225   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:55.045278   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:55.087638   69333 cri.go:89] found id: ""
	I0927 01:44:55.087663   69333 logs.go:276] 0 containers: []
	W0927 01:44:55.087671   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:55.087677   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:55.087739   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:55.126899   69333 cri.go:89] found id: ""
	I0927 01:44:55.126932   69333 logs.go:276] 0 containers: []
	W0927 01:44:55.126943   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:55.126951   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:55.127012   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:55.167593   69333 cri.go:89] found id: ""
	I0927 01:44:55.167624   69333 logs.go:276] 0 containers: []
	W0927 01:44:55.167635   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:55.167643   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:55.167706   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:55.208362   69333 cri.go:89] found id: ""
	I0927 01:44:55.208388   69333 logs.go:276] 0 containers: []
	W0927 01:44:55.208399   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:55.208409   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:55.208424   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:55.247198   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:55.247221   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:55.299408   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:55.299443   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:55.315745   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:55.315775   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:55.387499   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:55.387523   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:55.387539   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:54.021502   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:56.520627   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:56.807487   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:58.807902   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:57.541517   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:59.542233   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:57.968863   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:57.987921   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:57.987988   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:58.036770   69333 cri.go:89] found id: ""
	I0927 01:44:58.036802   69333 logs.go:276] 0 containers: []
	W0927 01:44:58.036813   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:58.036824   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:58.036878   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:58.072461   69333 cri.go:89] found id: ""
	I0927 01:44:58.072484   69333 logs.go:276] 0 containers: []
	W0927 01:44:58.072492   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:58.072499   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:58.072551   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:58.107247   69333 cri.go:89] found id: ""
	I0927 01:44:58.107273   69333 logs.go:276] 0 containers: []
	W0927 01:44:58.107284   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:58.107290   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:58.107365   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:58.149050   69333 cri.go:89] found id: ""
	I0927 01:44:58.149080   69333 logs.go:276] 0 containers: []
	W0927 01:44:58.149091   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:58.149099   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:58.149162   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:58.188167   69333 cri.go:89] found id: ""
	I0927 01:44:58.188198   69333 logs.go:276] 0 containers: []
	W0927 01:44:58.188209   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:58.188217   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:58.188283   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:58.224291   69333 cri.go:89] found id: ""
	I0927 01:44:58.224319   69333 logs.go:276] 0 containers: []
	W0927 01:44:58.224329   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:58.224337   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:58.224401   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:58.258786   69333 cri.go:89] found id: ""
	I0927 01:44:58.258813   69333 logs.go:276] 0 containers: []
	W0927 01:44:58.258822   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:58.258828   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:58.258885   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:58.298310   69333 cri.go:89] found id: ""
	I0927 01:44:58.298338   69333 logs.go:276] 0 containers: []
	W0927 01:44:58.298349   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:58.298359   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:58.298373   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:58.340299   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:58.340330   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:58.395097   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:58.395130   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:58.410653   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:58.410677   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:58.479437   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:58.479459   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:58.479470   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:45:01.057473   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:45:01.071746   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:45:01.071818   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:45:01.112652   69333 cri.go:89] found id: ""
	I0927 01:45:01.112676   69333 logs.go:276] 0 containers: []
	W0927 01:45:01.112684   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:45:01.112690   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:45:01.112735   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:45:01.146071   69333 cri.go:89] found id: ""
	I0927 01:45:01.146100   69333 logs.go:276] 0 containers: []
	W0927 01:45:01.146111   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:45:01.146119   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:45:01.146188   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:45:01.188640   69333 cri.go:89] found id: ""
	I0927 01:45:01.188663   69333 logs.go:276] 0 containers: []
	W0927 01:45:01.188673   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:45:01.188679   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:45:01.188743   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:45:01.225024   69333 cri.go:89] found id: ""
	I0927 01:45:01.225050   69333 logs.go:276] 0 containers: []
	W0927 01:45:01.225060   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:45:01.225067   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:45:01.225128   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:45:01.262459   69333 cri.go:89] found id: ""
	I0927 01:45:01.262487   69333 logs.go:276] 0 containers: []
	W0927 01:45:01.262498   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:45:01.262505   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:45:01.262560   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:45:01.298567   69333 cri.go:89] found id: ""
	I0927 01:45:01.298588   69333 logs.go:276] 0 containers: []
	W0927 01:45:01.298597   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:45:01.298603   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:45:01.298647   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:45:01.335051   69333 cri.go:89] found id: ""
	I0927 01:45:01.335084   69333 logs.go:276] 0 containers: []
	W0927 01:45:01.335094   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:45:01.335100   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:45:01.335149   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:45:01.371187   69333 cri.go:89] found id: ""
	I0927 01:45:01.371217   69333 logs.go:276] 0 containers: []
	W0927 01:45:01.371227   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:45:01.371237   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:45:01.371252   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:45:01.385163   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:45:01.385189   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:45:01.457256   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:45:01.457298   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:45:01.457313   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:45:01.537788   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:45:01.537819   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:45:01.580645   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:45:01.580672   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:58.521367   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:01.020826   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:03.021213   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:00.808021   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:03.307242   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:01.542831   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:04.042010   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:04.131877   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:45:04.145175   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:45:04.145248   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:45:04.179508   69333 cri.go:89] found id: ""
	I0927 01:45:04.179535   69333 logs.go:276] 0 containers: []
	W0927 01:45:04.179545   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:45:04.179552   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:45:04.179612   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:45:04.213497   69333 cri.go:89] found id: ""
	I0927 01:45:04.213533   69333 logs.go:276] 0 containers: []
	W0927 01:45:04.213544   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:45:04.213551   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:45:04.213606   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:45:04.249708   69333 cri.go:89] found id: ""
	I0927 01:45:04.249737   69333 logs.go:276] 0 containers: []
	W0927 01:45:04.249747   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:45:04.249754   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:45:04.249824   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:45:04.288283   69333 cri.go:89] found id: ""
	I0927 01:45:04.288306   69333 logs.go:276] 0 containers: []
	W0927 01:45:04.288314   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:45:04.288319   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:45:04.288368   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:45:04.325515   69333 cri.go:89] found id: ""
	I0927 01:45:04.325539   69333 logs.go:276] 0 containers: []
	W0927 01:45:04.325549   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:45:04.325560   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:45:04.325618   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:45:04.363485   69333 cri.go:89] found id: ""
	I0927 01:45:04.363511   69333 logs.go:276] 0 containers: []
	W0927 01:45:04.363521   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:45:04.363528   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:45:04.363586   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:45:04.398834   69333 cri.go:89] found id: ""
	I0927 01:45:04.398863   69333 logs.go:276] 0 containers: []
	W0927 01:45:04.398875   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:45:04.398882   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:45:04.398948   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:45:04.433408   69333 cri.go:89] found id: ""
	I0927 01:45:04.433435   69333 logs.go:276] 0 containers: []
	W0927 01:45:04.433443   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:45:04.433451   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:45:04.433461   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:45:04.485354   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:45:04.485392   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:45:04.499007   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:45:04.499031   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:45:04.569376   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:45:04.569405   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:45:04.569420   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:45:04.646614   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:45:04.646651   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:45:07.186491   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:45:07.200510   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:45:07.200575   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:45:07.239519   69333 cri.go:89] found id: ""
	I0927 01:45:07.239542   69333 logs.go:276] 0 containers: []
	W0927 01:45:07.239553   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:45:07.239562   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:45:07.239751   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:45:07.276820   69333 cri.go:89] found id: ""
	I0927 01:45:07.276854   69333 logs.go:276] 0 containers: []
	W0927 01:45:07.276863   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:45:07.276870   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:45:07.276932   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:45:07.312580   69333 cri.go:89] found id: ""
	I0927 01:45:07.312604   69333 logs.go:276] 0 containers: []
	W0927 01:45:07.312613   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:45:07.312619   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:45:07.312676   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:45:05.520930   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:08.020001   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:05.807739   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:07.807914   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:06.042390   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:08.542149   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:10.542438   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:07.350763   69333 cri.go:89] found id: ""
	I0927 01:45:07.350788   69333 logs.go:276] 0 containers: []
	W0927 01:45:07.350799   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:45:07.350806   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:45:07.350861   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:45:07.385347   69333 cri.go:89] found id: ""
	I0927 01:45:07.385376   69333 logs.go:276] 0 containers: []
	W0927 01:45:07.385383   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:45:07.385389   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:45:07.385439   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:45:07.420665   69333 cri.go:89] found id: ""
	I0927 01:45:07.420696   69333 logs.go:276] 0 containers: []
	W0927 01:45:07.420708   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:45:07.420718   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:45:07.420768   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:45:07.453707   69333 cri.go:89] found id: ""
	I0927 01:45:07.453737   69333 logs.go:276] 0 containers: []
	W0927 01:45:07.453746   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:45:07.453752   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:45:07.453806   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:45:07.489467   69333 cri.go:89] found id: ""
	I0927 01:45:07.489497   69333 logs.go:276] 0 containers: []
	W0927 01:45:07.489508   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:45:07.489520   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:45:07.489531   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:45:07.569464   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:45:07.569496   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:45:07.609123   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:45:07.609160   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:45:07.659556   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:45:07.659590   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:45:07.673163   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:45:07.673191   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:45:07.751340   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
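
The loop above is minikube's log collector retrying roughly every three seconds: it lists each expected control-plane container with crictl, finds none, and the `kubectl describe nodes` probe keeps failing because nothing answers on localhost:8443. The same check can be reproduced by hand on the node; this is a sketch using the crictl call from the log plus a generic port probe (the /healthz path is an assumption, not taken from the log):

    # List any kube-apiserver container, running or exited (same call as in the log)
    sudo crictl ps -a --quiet --name=kube-apiserver

    # Probe the apiserver port that `describe nodes` keeps being refused on
    curl -k --connect-timeout 2 https://localhost:8443/healthz \
      || echo "apiserver not reachable on :8443"
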
	I0927 01:45:10.252511   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:45:10.266651   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:45:10.266706   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:45:10.304131   69333 cri.go:89] found id: ""
	I0927 01:45:10.304160   69333 logs.go:276] 0 containers: []
	W0927 01:45:10.304171   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:45:10.304178   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:45:10.304243   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:45:10.339267   69333 cri.go:89] found id: ""
	I0927 01:45:10.339295   69333 logs.go:276] 0 containers: []
	W0927 01:45:10.339321   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:45:10.339329   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:45:10.339397   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:45:10.376268   69333 cri.go:89] found id: ""
	I0927 01:45:10.376298   69333 logs.go:276] 0 containers: []
	W0927 01:45:10.376308   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:45:10.376319   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:45:10.376380   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:45:10.413944   69333 cri.go:89] found id: ""
	I0927 01:45:10.413970   69333 logs.go:276] 0 containers: []
	W0927 01:45:10.413978   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:45:10.413984   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:45:10.414033   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:45:10.449205   69333 cri.go:89] found id: ""
	I0927 01:45:10.449226   69333 logs.go:276] 0 containers: []
	W0927 01:45:10.449234   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:45:10.449240   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:45:10.449289   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:45:10.487927   69333 cri.go:89] found id: ""
	I0927 01:45:10.487947   69333 logs.go:276] 0 containers: []
	W0927 01:45:10.487955   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:45:10.487961   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:45:10.488018   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:45:10.525062   69333 cri.go:89] found id: ""
	I0927 01:45:10.525085   69333 logs.go:276] 0 containers: []
	W0927 01:45:10.525095   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:45:10.525102   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:45:10.525163   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:45:10.560718   69333 cri.go:89] found id: ""
	I0927 01:45:10.560768   69333 logs.go:276] 0 containers: []
	W0927 01:45:10.560779   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:45:10.560790   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:45:10.560803   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:45:10.641755   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:45:10.641781   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:45:10.641796   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:45:10.719775   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:45:10.719807   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:45:10.761952   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:45:10.761978   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:45:10.815296   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:45:10.815330   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:45:10.023849   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:12.520577   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:10.307967   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:12.807872   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:14.808602   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:13.041469   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:15.036533   69234 pod_ready.go:82] duration metric: took 4m0.000873058s for pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace to be "Ready" ...
	E0927 01:45:15.036568   69234 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace to be "Ready" (will not retry!)
	I0927 01:45:15.036588   69234 pod_ready.go:39] duration metric: took 4m6.530278971s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:45:15.036645   69234 kubeadm.go:597] duration metric: took 4m16.375010355s to restartPrimaryControlPlane
	W0927 01:45:15.036713   69234 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0927 01:45:15.036743   69234 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
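
At 01:45:15 process 69234 exhausts its 4m0s WaitExtra budget: metrics-server never reported Ready, so minikube gives up on restarting the existing control plane and falls back to `kubeadm reset`. The same Ready condition can be inspected directly with kubectl; a sketch that assumes a working kubeconfig for the cluster and uses the conventional k8s-app=metrics-server label, which is not shown in the log:

    # Print the Ready condition of the metrics-server pod(s) the waiter is polling
    kubectl -n kube-system get pod -l k8s-app=metrics-server \
      -o jsonpath='{.items[*].status.conditions[?(@.type=="Ready")].status}{"\n"}'

    # Or block with kubectl's own timeout, mirroring the 4m0s budget in the log
    kubectl -n kube-system wait pod -l k8s-app=metrics-server \
      --for=condition=Ready --timeout=4m
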
	I0927 01:45:13.330300   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:45:13.343840   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:45:13.343893   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:45:13.378904   69333 cri.go:89] found id: ""
	I0927 01:45:13.378933   69333 logs.go:276] 0 containers: []
	W0927 01:45:13.378944   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:45:13.378952   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:45:13.379010   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:45:13.417375   69333 cri.go:89] found id: ""
	I0927 01:45:13.417403   69333 logs.go:276] 0 containers: []
	W0927 01:45:13.417415   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:45:13.417422   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:45:13.417482   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:45:13.456265   69333 cri.go:89] found id: ""
	I0927 01:45:13.456291   69333 logs.go:276] 0 containers: []
	W0927 01:45:13.456302   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:45:13.456310   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:45:13.456358   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:45:13.502205   69333 cri.go:89] found id: ""
	I0927 01:45:13.502229   69333 logs.go:276] 0 containers: []
	W0927 01:45:13.502240   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:45:13.502247   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:45:13.502310   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:45:13.543617   69333 cri.go:89] found id: ""
	I0927 01:45:13.543642   69333 logs.go:276] 0 containers: []
	W0927 01:45:13.543652   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:45:13.543660   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:45:13.543723   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:45:13.580268   69333 cri.go:89] found id: ""
	I0927 01:45:13.580295   69333 logs.go:276] 0 containers: []
	W0927 01:45:13.580305   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:45:13.580313   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:45:13.580374   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:45:13.616681   69333 cri.go:89] found id: ""
	I0927 01:45:13.616705   69333 logs.go:276] 0 containers: []
	W0927 01:45:13.616713   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:45:13.616718   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:45:13.616765   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:45:13.653389   69333 cri.go:89] found id: ""
	I0927 01:45:13.653412   69333 logs.go:276] 0 containers: []
	W0927 01:45:13.653420   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:45:13.653430   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:45:13.653442   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:45:13.666511   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:45:13.666534   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:45:13.742282   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:45:13.742300   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:45:13.742311   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:45:13.825800   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:45:13.825836   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:45:13.876345   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:45:13.876376   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:45:16.429245   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:45:16.443286   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:45:16.443366   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:45:16.481601   69333 cri.go:89] found id: ""
	I0927 01:45:16.481626   69333 logs.go:276] 0 containers: []
	W0927 01:45:16.481637   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:45:16.481645   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:45:16.481703   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:45:16.513626   69333 cri.go:89] found id: ""
	I0927 01:45:16.513652   69333 logs.go:276] 0 containers: []
	W0927 01:45:16.513659   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:45:16.513665   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:45:16.513710   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:45:16.552531   69333 cri.go:89] found id: ""
	I0927 01:45:16.552565   69333 logs.go:276] 0 containers: []
	W0927 01:45:16.552574   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:45:16.552580   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:45:16.552636   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:45:16.587252   69333 cri.go:89] found id: ""
	I0927 01:45:16.587282   69333 logs.go:276] 0 containers: []
	W0927 01:45:16.587294   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:45:16.587316   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:45:16.587377   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:45:16.628376   69333 cri.go:89] found id: ""
	I0927 01:45:16.628401   69333 logs.go:276] 0 containers: []
	W0927 01:45:16.628410   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:45:16.628417   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:45:16.628482   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:45:16.669603   69333 cri.go:89] found id: ""
	I0927 01:45:16.669639   69333 logs.go:276] 0 containers: []
	W0927 01:45:16.669651   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:45:16.669658   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:45:16.669731   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:45:16.705581   69333 cri.go:89] found id: ""
	I0927 01:45:16.705607   69333 logs.go:276] 0 containers: []
	W0927 01:45:16.705618   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:45:16.705626   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:45:16.705682   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:45:16.740710   69333 cri.go:89] found id: ""
	I0927 01:45:16.740735   69333 logs.go:276] 0 containers: []
	W0927 01:45:16.740743   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:45:16.740759   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:45:16.740771   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:45:16.791025   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:45:16.791060   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:45:16.805990   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:45:16.806023   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:45:16.878313   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:45:16.878331   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:45:16.878346   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:45:16.966228   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:45:16.966269   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:45:14.521852   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:16.522127   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:17.307853   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:19.308018   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:19.512044   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:45:19.526801   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:45:19.526862   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:45:19.562063   69333 cri.go:89] found id: ""
	I0927 01:45:19.562089   69333 logs.go:276] 0 containers: []
	W0927 01:45:19.562098   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:45:19.562104   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:45:19.562159   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:45:19.598600   69333 cri.go:89] found id: ""
	I0927 01:45:19.598626   69333 logs.go:276] 0 containers: []
	W0927 01:45:19.598634   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:45:19.598642   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:45:19.598712   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:45:19.632544   69333 cri.go:89] found id: ""
	I0927 01:45:19.632564   69333 logs.go:276] 0 containers: []
	W0927 01:45:19.632572   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:45:19.632577   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:45:19.632635   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:45:19.671676   69333 cri.go:89] found id: ""
	I0927 01:45:19.671703   69333 logs.go:276] 0 containers: []
	W0927 01:45:19.671713   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:45:19.671721   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:45:19.671779   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:45:19.710321   69333 cri.go:89] found id: ""
	I0927 01:45:19.710351   69333 logs.go:276] 0 containers: []
	W0927 01:45:19.710362   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:45:19.710370   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:45:19.710438   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:45:19.746252   69333 cri.go:89] found id: ""
	I0927 01:45:19.746277   69333 logs.go:276] 0 containers: []
	W0927 01:45:19.746288   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:45:19.746295   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:45:19.746354   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:45:19.783089   69333 cri.go:89] found id: ""
	I0927 01:45:19.783112   69333 logs.go:276] 0 containers: []
	W0927 01:45:19.783121   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:45:19.783126   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:45:19.783189   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:45:19.821090   69333 cri.go:89] found id: ""
	I0927 01:45:19.821117   69333 logs.go:276] 0 containers: []
	W0927 01:45:19.821126   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:45:19.821134   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:45:19.821145   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:45:19.873539   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:45:19.873575   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:45:19.888446   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:45:19.888471   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:45:19.958009   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:45:19.958034   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:45:19.958050   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:45:20.037552   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:45:20.037587   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:45:19.022216   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:21.520606   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:21.808178   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:23.808273   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:22.579288   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:45:22.592789   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:45:22.592846   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:45:22.628148   69333 cri.go:89] found id: ""
	I0927 01:45:22.628178   69333 logs.go:276] 0 containers: []
	W0927 01:45:22.628186   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:45:22.628193   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:45:22.628240   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:45:22.664162   69333 cri.go:89] found id: ""
	I0927 01:45:22.664186   69333 logs.go:276] 0 containers: []
	W0927 01:45:22.664194   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:45:22.664200   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:45:22.664253   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:45:22.702077   69333 cri.go:89] found id: ""
	I0927 01:45:22.702104   69333 logs.go:276] 0 containers: []
	W0927 01:45:22.702115   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:45:22.702123   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:45:22.702183   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:45:22.739657   69333 cri.go:89] found id: ""
	I0927 01:45:22.739690   69333 logs.go:276] 0 containers: []
	W0927 01:45:22.739700   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:45:22.739708   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:45:22.739773   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:45:22.774109   69333 cri.go:89] found id: ""
	I0927 01:45:22.774137   69333 logs.go:276] 0 containers: []
	W0927 01:45:22.774148   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:45:22.774174   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:45:22.774229   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:45:22.809648   69333 cri.go:89] found id: ""
	I0927 01:45:22.809671   69333 logs.go:276] 0 containers: []
	W0927 01:45:22.809678   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:45:22.809684   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:45:22.809729   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:45:22.842598   69333 cri.go:89] found id: ""
	I0927 01:45:22.842620   69333 logs.go:276] 0 containers: []
	W0927 01:45:22.842627   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:45:22.842632   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:45:22.842677   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:45:22.877336   69333 cri.go:89] found id: ""
	I0927 01:45:22.877364   69333 logs.go:276] 0 containers: []
	W0927 01:45:22.877374   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:45:22.877382   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:45:22.877393   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:45:22.930364   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:45:22.930395   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:45:22.944174   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:45:22.944200   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:45:23.025495   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:45:23.025520   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:45:23.025534   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:45:23.101813   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:45:23.101850   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:45:25.644577   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:45:25.657820   69333 kubeadm.go:597] duration metric: took 4m3.277962916s to restartPrimaryControlPlane
	W0927 01:45:25.657898   69333 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0927 01:45:25.657929   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0927 01:45:26.111439   69333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 01:45:26.128279   69333 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 01:45:26.138354   69333 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 01:45:26.148116   69333 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 01:45:26.148132   69333 kubeadm.go:157] found existing configuration files:
	
	I0927 01:45:26.148170   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 01:45:26.157965   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 01:45:26.158012   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 01:45:26.168349   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 01:45:26.177624   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 01:45:26.177692   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 01:45:26.187584   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 01:45:26.196800   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 01:45:26.196856   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 01:45:26.205894   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 01:45:26.215316   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 01:45:26.215365   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
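
The block above is the stale-kubeconfig cleanup: for each file under /etc/kubernetes, minikube greps for the expected control-plane endpoint and deletes the file when the endpoint is missing (here every grep exits with status 2 because the files do not exist at all, so the rm calls are no-ops). Condensed into one loop, the commands from the log look like this; a sketch of the shell-level behaviour, not minikube's Go implementation:

    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # keep the file only if it already points at the expected endpoint
      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done
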
	I0927 01:45:26.224989   69333 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0927 01:45:26.299149   69333 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0927 01:45:26.299261   69333 kubeadm.go:310] [preflight] Running pre-flight checks
	I0927 01:45:26.451113   69333 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0927 01:45:26.451282   69333 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0927 01:45:26.451457   69333 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0927 01:45:26.637960   69333 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0927 01:45:26.640682   69333 out.go:235]   - Generating certificates and keys ...
	I0927 01:45:26.640782   69333 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0927 01:45:26.640865   69333 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0927 01:45:26.640972   69333 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0927 01:45:26.641099   69333 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0927 01:45:26.641233   69333 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0927 01:45:26.641317   69333 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0927 01:45:26.641425   69333 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0927 01:45:26.641525   69333 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0927 01:45:26.641633   69333 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0927 01:45:26.641901   69333 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0927 01:45:26.642000   69333 kubeadm.go:310] [certs] Using the existing "sa" key
	I0927 01:45:26.642080   69333 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0927 01:45:26.782585   69333 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0927 01:45:27.008743   69333 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0927 01:45:27.103701   69333 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0927 01:45:27.217999   69333 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0927 01:45:27.238810   69333 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0927 01:45:27.240191   69333 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0927 01:45:27.240240   69333 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0927 01:45:27.375215   69333 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0927 01:45:23.521301   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:26.020002   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:28.021215   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:26.306744   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:28.308577   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:27.376992   69333 out.go:235]   - Booting up control plane ...
	I0927 01:45:27.377123   69333 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0927 01:45:27.386897   69333 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0927 01:45:27.387959   69333 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0927 01:45:27.388954   69333 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0927 01:45:27.392182   69333 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
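
kubeadm has written the static Pod manifests and now waits up to 4m0s for the kubelet to start them. Whether the control plane is actually coming up can be checked on the node while this wait runs; a sketch using the manifest directory named in the log and the same container names the collector filters on elsewhere:

    # Static Pod manifests kubeadm just generated
    sudo ls -la /etc/kubernetes/manifests

    # Containers the kubelet has started from those manifests so far
    sudo crictl ps -a | grep -E 'kube-apiserver|kube-controller-manager|kube-scheduler|etcd'
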
	I0927 01:45:30.520717   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:33.019981   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:30.808251   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:33.307139   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:35.020640   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:37.520220   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:35.307871   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:37.808604   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:41.262067   69234 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.225299595s)
	I0927 01:45:41.262142   69234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 01:45:41.294256   69234 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 01:45:41.304403   69234 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 01:45:41.314288   69234 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 01:45:41.314310   69234 kubeadm.go:157] found existing configuration files:
	
	I0927 01:45:41.314357   69234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 01:45:41.323280   69234 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 01:45:41.323335   69234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 01:45:41.332637   69234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 01:45:41.341492   69234 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 01:45:41.341552   69234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 01:45:41.352259   69234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 01:45:41.361190   69234 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 01:45:41.361244   69234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 01:45:41.370863   69234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 01:45:41.379674   69234 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 01:45:41.379735   69234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 01:45:41.389169   69234 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0927 01:45:41.434391   69234 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0927 01:45:41.434565   69234 kubeadm.go:310] [preflight] Running pre-flight checks
	I0927 01:45:41.537712   69234 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0927 01:45:41.537813   69234 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0927 01:45:41.537951   69234 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0927 01:45:41.546906   69234 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0927 01:45:41.548799   69234 out.go:235]   - Generating certificates and keys ...
	I0927 01:45:41.548882   69234 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0927 01:45:41.548959   69234 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0927 01:45:41.549049   69234 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0927 01:45:41.549133   69234 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0927 01:45:41.549239   69234 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0927 01:45:41.549328   69234 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0927 01:45:41.549433   69234 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0927 01:45:41.549531   69234 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0927 01:45:41.549619   69234 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0927 01:45:41.549691   69234 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0927 01:45:41.549741   69234 kubeadm.go:310] [certs] Using the existing "sa" key
	I0927 01:45:41.549813   69234 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0927 01:45:41.594579   69234 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0927 01:45:41.703970   69234 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0927 01:45:41.813013   69234 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0927 01:45:41.875564   69234 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0927 01:45:42.025627   69234 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0927 01:45:42.026325   69234 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0927 01:45:42.028784   69234 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0927 01:45:39.521118   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:42.020563   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:40.307764   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:42.307974   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:44.808238   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:42.030464   69234 out.go:235]   - Booting up control plane ...
	I0927 01:45:42.030566   69234 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0927 01:45:42.030674   69234 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0927 01:45:42.031152   69234 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0927 01:45:42.050207   69234 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0927 01:45:42.058709   69234 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0927 01:45:42.058766   69234 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0927 01:45:42.192498   69234 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0927 01:45:42.192628   69234 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0927 01:45:42.694670   69234 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.189114ms
	I0927 01:45:42.694812   69234 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0927 01:45:48.195975   69234 kubeadm.go:310] [api-check] The API server is healthy after 5.501110293s
	I0927 01:45:48.210406   69234 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0927 01:45:48.231678   69234 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0927 01:45:48.257669   69234 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0927 01:45:48.257859   69234 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-245911 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0927 01:45:48.271429   69234 kubeadm.go:310] [bootstrap-token] Using token: bqds0t.3lt1vhl3zjbrkom6
	I0927 01:45:44.021019   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:46.520158   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:48.272667   69234 out.go:235]   - Configuring RBAC rules ...
	I0927 01:45:48.272775   69234 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0927 01:45:48.278773   69234 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0927 01:45:48.290868   69234 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0927 01:45:48.297879   69234 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0927 01:45:48.302011   69234 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0927 01:45:48.306217   69234 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0927 01:45:48.604161   69234 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0927 01:45:49.041505   69234 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0927 01:45:49.604127   69234 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0927 01:45:49.604867   69234 kubeadm.go:310] 
	I0927 01:45:49.604981   69234 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0927 01:45:49.605008   69234 kubeadm.go:310] 
	I0927 01:45:49.605136   69234 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0927 01:45:49.605147   69234 kubeadm.go:310] 
	I0927 01:45:49.605188   69234 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0927 01:45:49.605266   69234 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0927 01:45:49.605363   69234 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0927 01:45:49.605373   69234 kubeadm.go:310] 
	I0927 01:45:49.605446   69234 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0927 01:45:49.605455   69234 kubeadm.go:310] 
	I0927 01:45:49.605524   69234 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0927 01:45:49.605537   69234 kubeadm.go:310] 
	I0927 01:45:49.605612   69234 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0927 01:45:49.605725   69234 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0927 01:45:49.605826   69234 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0927 01:45:49.605836   69234 kubeadm.go:310] 
	I0927 01:45:49.605913   69234 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0927 01:45:49.606010   69234 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0927 01:45:49.606032   69234 kubeadm.go:310] 
	I0927 01:45:49.606130   69234 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token bqds0t.3lt1vhl3zjbrkom6 \
	I0927 01:45:49.606252   69234 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e8f2b64245b2533f2f6907f851e4fd61c8df67374e05cb5be25be73a61920f8e \
	I0927 01:45:49.606276   69234 kubeadm.go:310] 	--control-plane 
	I0927 01:45:49.606282   69234 kubeadm.go:310] 
	I0927 01:45:49.606404   69234 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0927 01:45:49.606421   69234 kubeadm.go:310] 
	I0927 01:45:49.606546   69234 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token bqds0t.3lt1vhl3zjbrkom6 \
	I0927 01:45:49.606692   69234 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e8f2b64245b2533f2f6907f851e4fd61c8df67374e05cb5be25be73a61920f8e 
	I0927 01:45:49.607952   69234 kubeadm.go:310] W0927 01:45:41.410128    2534 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 01:45:49.608322   69234 kubeadm.go:310] W0927 01:45:41.412009    2534 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 01:45:49.608494   69234 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0927 01:45:49.608518   69234 cni.go:84] Creating CNI manager for ""
	I0927 01:45:49.608527   69234 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:45:49.610175   69234 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0927 01:45:47.307006   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:49.307374   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:49.611562   69234 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0927 01:45:49.622683   69234 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
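Editorial note, not part of the log: the 1-k8s.conflist written here is a standard CNI bridge configuration. Its exact contents vary by minikube version; a minimal sketch of the general shape, assuming the stock bridge plugin with host-local IPAM (an illustration, not the file the test actually copies), would be:

    # Sketch only: write a basic bridge CNI config on the node
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
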
	I0927 01:45:49.642326   69234 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0927 01:45:49.642366   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:45:49.642393   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-245911 minikube.k8s.io/updated_at=2024_09_27T01_45_49_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625 minikube.k8s.io/name=embed-certs-245911 minikube.k8s.io/primary=true
	I0927 01:45:49.677602   69234 ops.go:34] apiserver oom_adj: -16
	I0927 01:45:49.854320   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:45:50.355392   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:45:48.520718   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:50.520908   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:53.020638   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:50.854364   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:45:51.355074   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:45:51.855077   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:45:52.354509   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:45:52.855229   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:45:53.355204   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:45:53.854829   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:45:54.066909   69234 kubeadm.go:1113] duration metric: took 4.424595735s to wait for elevateKubeSystemPrivileges
	I0927 01:45:54.066954   69234 kubeadm.go:394] duration metric: took 4m55.454404762s to StartCluster
	I0927 01:45:54.066978   69234 settings.go:142] acquiring lock: {Name:mk5dca3ab86dd3a71947d9d84c3d32131258c6f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:45:54.067071   69234 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 01:45:54.069732   69234 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/kubeconfig: {Name:mke01ed683bdb96463571316956510763878395f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:45:54.070048   69234 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.158 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 01:45:54.070126   69234 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0927 01:45:54.070235   69234 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-245911"
	I0927 01:45:54.070257   69234 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-245911"
	I0927 01:45:54.070261   69234 addons.go:69] Setting default-storageclass=true in profile "embed-certs-245911"
	I0927 01:45:54.070270   69234 config.go:182] Loaded profile config "embed-certs-245911": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 01:45:54.070270   69234 addons.go:69] Setting metrics-server=true in profile "embed-certs-245911"
	I0927 01:45:54.070286   69234 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-245911"
	I0927 01:45:54.070296   69234 addons.go:234] Setting addon metrics-server=true in "embed-certs-245911"
	W0927 01:45:54.070305   69234 addons.go:243] addon metrics-server should already be in state true
	W0927 01:45:54.070266   69234 addons.go:243] addon storage-provisioner should already be in state true
	I0927 01:45:54.070339   69234 host.go:66] Checking if "embed-certs-245911" exists ...
	I0927 01:45:54.070339   69234 host.go:66] Checking if "embed-certs-245911" exists ...
	I0927 01:45:54.070750   69234 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:45:54.070790   69234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:45:54.070753   69234 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:45:54.070850   69234 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:45:54.070889   69234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:45:54.070936   69234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:45:54.071693   69234 out.go:177] * Verifying Kubernetes components...
	I0927 01:45:54.073034   69234 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:45:54.087559   69234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38159
	I0927 01:45:54.087567   69234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46827
	I0927 01:45:54.088061   69234 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:45:54.088074   69234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37787
	I0927 01:45:54.088183   69234 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:45:54.088412   69234 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:45:54.088551   69234 main.go:141] libmachine: Using API Version  1
	I0927 01:45:54.088573   69234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:45:54.088635   69234 main.go:141] libmachine: Using API Version  1
	I0927 01:45:54.088655   69234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:45:54.088852   69234 main.go:141] libmachine: Using API Version  1
	I0927 01:45:54.088874   69234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:45:54.088929   69234 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:45:54.089023   69234 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:45:54.089131   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetState
	I0927 01:45:54.089193   69234 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:45:54.089585   69234 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:45:54.089610   69234 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:45:54.089627   69234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:45:54.089639   69234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:45:54.092683   69234 addons.go:234] Setting addon default-storageclass=true in "embed-certs-245911"
	W0927 01:45:54.092705   69234 addons.go:243] addon default-storageclass should already be in state true
	I0927 01:45:54.092729   69234 host.go:66] Checking if "embed-certs-245911" exists ...
	I0927 01:45:54.093065   69234 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:45:54.093102   69234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:45:54.106496   69234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40273
	I0927 01:45:54.106952   69234 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:45:54.107486   69234 main.go:141] libmachine: Using API Version  1
	I0927 01:45:54.107513   69234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:45:54.108098   69234 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:45:54.108297   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetState
	I0927 01:45:54.109993   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	I0927 01:45:54.110532   69234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35519
	I0927 01:45:54.111066   69234 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:45:54.111688   69234 main.go:141] libmachine: Using API Version  1
	I0927 01:45:54.111708   69234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:45:54.111909   69234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35983
	I0927 01:45:54.112156   69234 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:45:54.112338   69234 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:45:54.112740   69234 main.go:141] libmachine: Using API Version  1
	I0927 01:45:54.112751   69234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:45:54.112832   69234 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:45:54.112953   69234 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:45:54.112987   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetState
	I0927 01:45:54.113345   69234 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:45:54.113372   69234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:45:54.114353   69234 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 01:45:54.114372   69234 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0927 01:45:54.114392   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:45:54.114596   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	I0927 01:45:54.116175   69234 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0927 01:45:51.806801   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:53.808476   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:54.117315   69234 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0927 01:45:54.117326   69234 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0927 01:45:54.117341   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:45:54.120242   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:45:54.120881   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:45:54.120903   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:45:54.121161   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:45:54.121224   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:45:54.121452   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:45:54.121658   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:45:54.121747   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:45:54.121944   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:45:54.121960   69234 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/embed-certs-245911/id_rsa Username:docker}
	I0927 01:45:54.121677   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:45:54.122386   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:45:54.122518   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:45:54.122695   69234 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/embed-certs-245911/id_rsa Username:docker}
	I0927 01:45:54.135920   69234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37351
	I0927 01:45:54.136247   69234 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:45:54.136682   69234 main.go:141] libmachine: Using API Version  1
	I0927 01:45:54.136696   69234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:45:54.136971   69234 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:45:54.137163   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetState
	I0927 01:45:54.138640   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	I0927 01:45:54.138903   69234 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0927 01:45:54.138919   69234 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0927 01:45:54.138936   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:45:54.141420   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:45:54.141786   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:45:54.141803   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:45:54.141966   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:45:54.142132   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:45:54.142235   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:45:54.142308   69234 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/embed-certs-245911/id_rsa Username:docker}
	I0927 01:45:54.325790   69234 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 01:45:54.375616   69234 node_ready.go:35] waiting up to 6m0s for node "embed-certs-245911" to be "Ready" ...
	I0927 01:45:54.386626   69234 node_ready.go:49] node "embed-certs-245911" has status "Ready":"True"
	I0927 01:45:54.386646   69234 node_ready.go:38] duration metric: took 10.995073ms for node "embed-certs-245911" to be "Ready" ...
	I0927 01:45:54.386654   69234 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:45:54.394605   69234 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-t4mxw" in "kube-system" namespace to be "Ready" ...
	I0927 01:45:54.458245   69234 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 01:45:54.501624   69234 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0927 01:45:54.501655   69234 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0927 01:45:54.508690   69234 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0927 01:45:54.548168   69234 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0927 01:45:54.548194   69234 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0927 01:45:54.615565   69234 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 01:45:54.615591   69234 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0927 01:45:54.655649   69234 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
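Editorial note, not part of the log: once the metrics-server manifests above are applied, readiness can be checked independently of the test harness. A minimal sketch, assuming the stock addon object names (Deployment metrics-server, APIService v1beta1.metrics.k8s.io, label k8s-app=metrics-server):

    # Deployment and APIService created by the metrics-server addon
    kubectl -n kube-system get deploy metrics-server
    kubectl get apiservice v1beta1.metrics.k8s.io
    # Pod-level detail if the server never reports Ready (as in several runs in this report)
    kubectl -n kube-system describe pod -l k8s-app=metrics-server
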
	I0927 01:45:55.488749   69234 main.go:141] libmachine: Making call to close driver server
	I0927 01:45:55.488849   69234 main.go:141] libmachine: (embed-certs-245911) Calling .Close
	I0927 01:45:55.488803   69234 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.030519069s)
	I0927 01:45:55.488934   69234 main.go:141] libmachine: Making call to close driver server
	I0927 01:45:55.488942   69234 main.go:141] libmachine: (embed-certs-245911) Calling .Close
	I0927 01:45:55.489266   69234 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:45:55.489282   69234 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:45:55.489290   69234 main.go:141] libmachine: Making call to close driver server
	I0927 01:45:55.489298   69234 main.go:141] libmachine: (embed-certs-245911) Calling .Close
	I0927 01:45:55.489377   69234 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:45:55.489393   69234 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:45:55.489401   69234 main.go:141] libmachine: Making call to close driver server
	I0927 01:45:55.489409   69234 main.go:141] libmachine: (embed-certs-245911) Calling .Close
	I0927 01:45:55.489511   69234 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:45:55.489528   69234 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:45:55.489540   69234 main.go:141] libmachine: (embed-certs-245911) DBG | Closing plugin on server side
	I0927 01:45:55.491047   69234 main.go:141] libmachine: (embed-certs-245911) DBG | Closing plugin on server side
	I0927 01:45:55.491082   69234 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:45:55.491093   69234 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:45:55.535220   69234 main.go:141] libmachine: Making call to close driver server
	I0927 01:45:55.535240   69234 main.go:141] libmachine: (embed-certs-245911) Calling .Close
	I0927 01:45:55.535604   69234 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:45:55.535625   69234 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:45:55.627642   69234 main.go:141] libmachine: Making call to close driver server
	I0927 01:45:55.627663   69234 main.go:141] libmachine: (embed-certs-245911) Calling .Close
	I0927 01:45:55.628020   69234 main.go:141] libmachine: (embed-certs-245911) DBG | Closing plugin on server side
	I0927 01:45:55.628025   69234 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:45:55.628047   69234 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:45:55.628055   69234 main.go:141] libmachine: Making call to close driver server
	I0927 01:45:55.628062   69234 main.go:141] libmachine: (embed-certs-245911) Calling .Close
	I0927 01:45:55.628294   69234 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:45:55.628311   69234 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:45:55.628322   69234 addons.go:475] Verifying addon metrics-server=true in "embed-certs-245911"
	I0927 01:45:55.629802   69234 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0927 01:45:55.022054   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:57.520749   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:56.307903   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:58.807972   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:55.631245   69234 addons.go:510] duration metric: took 1.561128577s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0927 01:45:56.401813   69234 pod_ready.go:103] pod "coredns-7c65d6cfc9-t4mxw" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:58.900688   69234 pod_ready.go:103] pod "coredns-7c65d6cfc9-t4mxw" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:59.521353   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:00.014813   69534 pod_ready.go:82] duration metric: took 4m0.000584515s for pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace to be "Ready" ...
	E0927 01:46:00.014858   69534 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace to be "Ready" (will not retry!)
	I0927 01:46:00.014878   69534 pod_ready.go:39] duration metric: took 4m13.043107791s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:46:00.014903   69534 kubeadm.go:597] duration metric: took 4m20.409702758s to restartPrimaryControlPlane
	W0927 01:46:00.014956   69534 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0927 01:46:00.014980   69534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
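Editorial note, not part of the log: the wait above gives up after 4m0s because metrics-server-6867b74b74-n9nsg never reports Ready, and the control plane is then reset. A minimal sketch of how such a pod would typically be inspected (standard kubectl commands against the affected cluster; the harness itself does not run these):

    # Events usually show image-pull or readiness-probe failures
    kubectl -n kube-system describe pod metrics-server-6867b74b74-n9nsg
    # Container logs, if the container started at all
    kubectl -n kube-system logs metrics-server-6867b74b74-n9nsg
    # One-line view of phase and restarts
    kubectl -n kube-system get pod metrics-server-6867b74b74-n9nsg -o wide
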
	I0927 01:46:00.808408   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:02.808672   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:00.901714   69234 pod_ready.go:103] pod "coredns-7c65d6cfc9-t4mxw" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:02.902242   69234 pod_ready.go:103] pod "coredns-7c65d6cfc9-t4mxw" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:03.401910   69234 pod_ready.go:93] pod "coredns-7c65d6cfc9-t4mxw" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:03.401936   69234 pod_ready.go:82] duration metric: took 9.007296678s for pod "coredns-7c65d6cfc9-t4mxw" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:03.401948   69234 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-zp5f2" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:03.908874   69234 pod_ready.go:93] pod "coredns-7c65d6cfc9-zp5f2" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:03.908896   69234 pod_ready.go:82] duration metric: took 506.941437ms for pod "coredns-7c65d6cfc9-zp5f2" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:03.908918   69234 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:03.914117   69234 pod_ready.go:93] pod "etcd-embed-certs-245911" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:03.914135   69234 pod_ready.go:82] duration metric: took 5.210078ms for pod "etcd-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:03.914142   69234 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:03.918778   69234 pod_ready.go:93] pod "kube-apiserver-embed-certs-245911" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:03.918801   69234 pod_ready.go:82] duration metric: took 4.651828ms for pod "kube-apiserver-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:03.918812   69234 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:03.923979   69234 pod_ready.go:93] pod "kube-controller-manager-embed-certs-245911" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:03.923996   69234 pod_ready.go:82] duration metric: took 5.176348ms for pod "kube-controller-manager-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:03.924004   69234 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5l299" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:04.199586   69234 pod_ready.go:93] pod "kube-proxy-5l299" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:04.199612   69234 pod_ready.go:82] duration metric: took 275.601068ms for pod "kube-proxy-5l299" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:04.199621   69234 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:04.598852   69234 pod_ready.go:93] pod "kube-scheduler-embed-certs-245911" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:04.598880   69234 pod_ready.go:82] duration metric: took 399.251298ms for pod "kube-scheduler-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:04.598890   69234 pod_ready.go:39] duration metric: took 10.212226661s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:46:04.598905   69234 api_server.go:52] waiting for apiserver process to appear ...
	I0927 01:46:04.598962   69234 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:46:04.615194   69234 api_server.go:72] duration metric: took 10.545103977s to wait for apiserver process to appear ...
	I0927 01:46:04.615225   69234 api_server.go:88] waiting for apiserver healthz status ...
	I0927 01:46:04.615248   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:46:04.621164   69234 api_server.go:279] https://192.168.39.158:8443/healthz returned 200:
	ok
	I0927 01:46:04.622001   69234 api_server.go:141] control plane version: v1.31.1
	I0927 01:46:04.622022   69234 api_server.go:131] duration metric: took 6.789717ms to wait for apiserver health ...
	I0927 01:46:04.622032   69234 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 01:46:04.802641   69234 system_pods.go:59] 9 kube-system pods found
	I0927 01:46:04.802674   69234 system_pods.go:61] "coredns-7c65d6cfc9-t4mxw" [b3f9faa4-be80-40bf-9080-363fcbf3f084] Running
	I0927 01:46:04.802681   69234 system_pods.go:61] "coredns-7c65d6cfc9-zp5f2" [0829b4a4-1686-4f22-8368-65e3897604b0] Running
	I0927 01:46:04.802687   69234 system_pods.go:61] "etcd-embed-certs-245911" [8b1eb68b-4d88-4af3-a5df-3a6490d9d376] Running
	I0927 01:46:04.802693   69234 system_pods.go:61] "kube-apiserver-embed-certs-245911" [05ddc1b7-f7a9-4201-8d2e-2eb57d4e6731] Running
	I0927 01:46:04.802699   69234 system_pods.go:61] "kube-controller-manager-embed-certs-245911" [71c7cdfd-5e67-4876-9c00-31fff46c2b37] Running
	I0927 01:46:04.802703   69234 system_pods.go:61] "kube-proxy-5l299" [768ae3f5-2ebd-4db7-aa36-81c4f033d685] Running
	I0927 01:46:04.802708   69234 system_pods.go:61] "kube-scheduler-embed-certs-245911" [4111a186-de42-4004-bcdc-3e445142fca0] Running
	I0927 01:46:04.802717   69234 system_pods.go:61] "metrics-server-6867b74b74-k28wz" [1d369542-c088-4099-aa6f-9d3158f78f25] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 01:46:04.802722   69234 system_pods.go:61] "storage-provisioner" [0c48d125-370c-44a1-9ede-536881b40d57] Running
	I0927 01:46:04.802735   69234 system_pods.go:74] duration metric: took 180.694209ms to wait for pod list to return data ...
	I0927 01:46:04.802747   69234 default_sa.go:34] waiting for default service account to be created ...
	I0927 01:46:04.999578   69234 default_sa.go:45] found service account: "default"
	I0927 01:46:04.999603   69234 default_sa.go:55] duration metric: took 196.845725ms for default service account to be created ...
	I0927 01:46:04.999612   69234 system_pods.go:116] waiting for k8s-apps to be running ...
	I0927 01:46:05.201201   69234 system_pods.go:86] 9 kube-system pods found
	I0927 01:46:05.201228   69234 system_pods.go:89] "coredns-7c65d6cfc9-t4mxw" [b3f9faa4-be80-40bf-9080-363fcbf3f084] Running
	I0927 01:46:05.201233   69234 system_pods.go:89] "coredns-7c65d6cfc9-zp5f2" [0829b4a4-1686-4f22-8368-65e3897604b0] Running
	I0927 01:46:05.201237   69234 system_pods.go:89] "etcd-embed-certs-245911" [8b1eb68b-4d88-4af3-a5df-3a6490d9d376] Running
	I0927 01:46:05.201241   69234 system_pods.go:89] "kube-apiserver-embed-certs-245911" [05ddc1b7-f7a9-4201-8d2e-2eb57d4e6731] Running
	I0927 01:46:05.201244   69234 system_pods.go:89] "kube-controller-manager-embed-certs-245911" [71c7cdfd-5e67-4876-9c00-31fff46c2b37] Running
	I0927 01:46:05.201248   69234 system_pods.go:89] "kube-proxy-5l299" [768ae3f5-2ebd-4db7-aa36-81c4f033d685] Running
	I0927 01:46:05.201251   69234 system_pods.go:89] "kube-scheduler-embed-certs-245911" [4111a186-de42-4004-bcdc-3e445142fca0] Running
	I0927 01:46:05.201256   69234 system_pods.go:89] "metrics-server-6867b74b74-k28wz" [1d369542-c088-4099-aa6f-9d3158f78f25] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 01:46:05.201260   69234 system_pods.go:89] "storage-provisioner" [0c48d125-370c-44a1-9ede-536881b40d57] Running
	I0927 01:46:05.201268   69234 system_pods.go:126] duration metric: took 201.651734ms to wait for k8s-apps to be running ...
	I0927 01:46:05.201275   69234 system_svc.go:44] waiting for kubelet service to be running ....
	I0927 01:46:05.201315   69234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 01:46:05.216216   69234 system_svc.go:56] duration metric: took 14.930697ms WaitForService to wait for kubelet
	I0927 01:46:05.216248   69234 kubeadm.go:582] duration metric: took 11.146166369s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 01:46:05.216271   69234 node_conditions.go:102] verifying NodePressure condition ...
	I0927 01:46:05.400667   69234 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 01:46:05.400695   69234 node_conditions.go:123] node cpu capacity is 2
	I0927 01:46:05.400708   69234 node_conditions.go:105] duration metric: took 184.432904ms to run NodePressure ...
	I0927 01:46:05.400719   69234 start.go:241] waiting for startup goroutines ...
	I0927 01:46:05.400729   69234 start.go:246] waiting for cluster config update ...
	I0927 01:46:05.400743   69234 start.go:255] writing updated cluster config ...
	I0927 01:46:05.401134   69234 ssh_runner.go:195] Run: rm -f paused
	I0927 01:46:05.452606   69234 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0927 01:46:05.454631   69234 out.go:177] * Done! kubectl is now configured to use "embed-certs-245911" cluster and "default" namespace by default
	I0927 01:46:05.307371   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:07.807981   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:07.393548   69333 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0927 01:46:07.394304   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:46:07.394505   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
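Editorial note, not part of the log: the [kubelet-check] lines above are kubeadm polling http://127.0.0.1:10248/healthz on the node and getting connection refused. A minimal sketch of the usual follow-up when that probe fails (standard systemd/kubelet commands, run on the node, e.g. via minikube ssh):

    # Is the kubelet unit running at all?
    sudo systemctl status kubelet
    # Recent kubelet logs often show what keeps it from starting
    sudo journalctl -u kubelet --no-pager | tail -n 50
    # The same healthz probe kubeadm performs
    curl -sSL http://localhost:10248/healthz
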
	I0927 01:46:10.307311   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:12.308085   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:14.308664   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:12.395176   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:46:12.395434   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:46:16.807116   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:18.807652   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:21.307348   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:23.807597   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:26.304067   69534 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.289064717s)
	I0927 01:46:26.304150   69534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 01:46:26.341383   69534 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 01:46:26.365985   69534 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 01:46:26.382056   69534 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 01:46:26.382082   69534 kubeadm.go:157] found existing configuration files:
	
	I0927 01:46:26.382133   69534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0927 01:46:26.405820   69534 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 01:46:26.405881   69534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 01:46:26.416355   69534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0927 01:46:26.426710   69534 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 01:46:26.426759   69534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 01:46:26.438110   69534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0927 01:46:26.448631   69534 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 01:46:26.448691   69534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 01:46:26.458453   69534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0927 01:46:26.467677   69534 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 01:46:26.467724   69534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 01:46:26.478333   69534 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0927 01:46:26.528377   69534 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0927 01:46:26.528432   69534 kubeadm.go:310] [preflight] Running pre-flight checks
	I0927 01:46:26.653799   69534 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0927 01:46:26.653904   69534 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0927 01:46:26.654029   69534 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0927 01:46:26.666791   69534 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0927 01:46:22.395858   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:46:22.396073   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:46:26.668660   69534 out.go:235]   - Generating certificates and keys ...
	I0927 01:46:26.668739   69534 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0927 01:46:26.668803   69534 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0927 01:46:26.668918   69534 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0927 01:46:26.669012   69534 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0927 01:46:26.669103   69534 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0927 01:46:26.669178   69534 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0927 01:46:26.669308   69534 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0927 01:46:26.669628   69534 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0927 01:46:26.669868   69534 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0927 01:46:26.670086   69534 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0927 01:46:26.670284   69534 kubeadm.go:310] [certs] Using the existing "sa" key
	I0927 01:46:26.670395   69534 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0927 01:46:26.885345   69534 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0927 01:46:27.061416   69534 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0927 01:46:27.347409   69534 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0927 01:46:27.477340   69534 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0927 01:46:27.607326   69534 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0927 01:46:27.607882   69534 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0927 01:46:27.612459   69534 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0927 01:46:27.614167   69534 out.go:235]   - Booting up control plane ...
	I0927 01:46:27.614285   69534 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0927 01:46:27.614388   69534 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0927 01:46:27.614482   69534 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0927 01:46:27.635734   69534 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0927 01:46:27.642550   69534 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0927 01:46:27.642634   69534 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0927 01:46:27.778616   69534 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0927 01:46:27.778763   69534 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0927 01:46:28.280057   69534 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.328597ms
	I0927 01:46:28.280185   69534 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0927 01:46:25.808311   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:28.307033   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:33.781107   69534 kubeadm.go:310] [api-check] The API server is healthy after 5.501552407s
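	The two checks above ([kubelet-check] against http://127.0.0.1:10248/healthz and [api-check] against the API server) are bounded HTTP polls with a 4m0s ceiling. As an illustrative sketch only (not kubeadm's or minikube's actual code; interval and timeout are assumptions), such a healthz wait could look like:

	    package main

	    import (
	        "fmt"
	        "net/http"
	        "time"
	    )

	    // waitHealthy polls url until it returns HTTP 200 or the deadline passes.
	    // The 500ms retry interval is an assumption for illustration.
	    func waitHealthy(url string, timeout time.Duration) error {
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            resp, err := http.Get(url)
	            if err == nil {
	                resp.Body.Close()
	                if resp.StatusCode == http.StatusOK {
	                    return nil
	                }
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        return fmt.Errorf("%s not healthy after %s", url, timeout)
	    }

	    func main() {
	        // The same endpoint the [kubelet-check] lines above poll.
	        if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
	            fmt.Println(err)
	        }
	    }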
	I0927 01:46:33.796672   69534 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0927 01:46:33.809900   69534 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0927 01:46:33.845968   69534 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0927 01:46:33.846194   69534 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-368295 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0927 01:46:33.862294   69534 kubeadm.go:310] [bootstrap-token] Using token: qmzafx.lhyo0l65zryygr2x
	I0927 01:46:30.308436   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:32.809032   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:32.809057   68676 pod_ready.go:82] duration metric: took 4m0.007962887s for pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace to be "Ready" ...
	E0927 01:46:32.809066   68676 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0927 01:46:32.809075   68676 pod_ready.go:39] duration metric: took 4m5.043455674s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:46:32.809088   68676 api_server.go:52] waiting for apiserver process to appear ...
	I0927 01:46:32.809115   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:46:32.809175   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:46:32.871610   68676 cri.go:89] found id: "d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef"
	I0927 01:46:32.871629   68676 cri.go:89] found id: ""
	I0927 01:46:32.871636   68676 logs.go:276] 1 containers: [d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef]
	I0927 01:46:32.871682   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:32.878223   68676 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:46:32.878296   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:46:32.925139   68676 cri.go:89] found id: "703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0"
	I0927 01:46:32.925173   68676 cri.go:89] found id: ""
	I0927 01:46:32.925182   68676 logs.go:276] 1 containers: [703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0]
	I0927 01:46:32.925238   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:32.929961   68676 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:46:32.930023   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:46:32.969777   68676 cri.go:89] found id: "5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0"
	I0927 01:46:32.969799   68676 cri.go:89] found id: ""
	I0927 01:46:32.969807   68676 logs.go:276] 1 containers: [5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0]
	I0927 01:46:32.969854   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:32.979003   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:46:32.979088   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:46:33.029458   68676 cri.go:89] found id: "22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05"
	I0927 01:46:33.029532   68676 cri.go:89] found id: ""
	I0927 01:46:33.029546   68676 logs.go:276] 1 containers: [22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05]
	I0927 01:46:33.029609   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:33.036703   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:46:33.036777   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:46:33.085041   68676 cri.go:89] found id: "d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f"
	I0927 01:46:33.085058   68676 cri.go:89] found id: ""
	I0927 01:46:33.085065   68676 logs.go:276] 1 containers: [d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f]
	I0927 01:46:33.085125   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:33.090305   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:46:33.090372   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:46:33.136837   68676 cri.go:89] found id: "56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647"
	I0927 01:46:33.136857   68676 cri.go:89] found id: ""
	I0927 01:46:33.136865   68676 logs.go:276] 1 containers: [56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647]
	I0927 01:46:33.136913   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:33.141483   68676 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:46:33.141543   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:46:33.182913   68676 cri.go:89] found id: ""
	I0927 01:46:33.182939   68676 logs.go:276] 0 containers: []
	W0927 01:46:33.182950   68676 logs.go:278] No container was found matching "kindnet"
	I0927 01:46:33.182956   68676 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0927 01:46:33.183002   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0927 01:46:33.237031   68676 cri.go:89] found id: "8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f"
	I0927 01:46:33.237055   68676 cri.go:89] found id: "074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c"
	I0927 01:46:33.237061   68676 cri.go:89] found id: ""
	I0927 01:46:33.237070   68676 logs.go:276] 2 containers: [8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f 074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c]
	I0927 01:46:33.237121   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:33.241969   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:33.246733   68676 logs.go:123] Gathering logs for kube-apiserver [d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef] ...
	I0927 01:46:33.246760   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef"
	I0927 01:46:33.294096   68676 logs.go:123] Gathering logs for kube-controller-manager [56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647] ...
	I0927 01:46:33.294128   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647"
	I0927 01:46:33.357981   68676 logs.go:123] Gathering logs for storage-provisioner [074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c] ...
	I0927 01:46:33.358029   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c"
	I0927 01:46:33.397465   68676 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:46:33.397500   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:46:33.922831   68676 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:46:33.922869   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 01:46:34.067117   68676 logs.go:123] Gathering logs for dmesg ...
	I0927 01:46:34.067152   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:46:34.082191   68676 logs.go:123] Gathering logs for etcd [703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0] ...
	I0927 01:46:34.082218   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0"
	I0927 01:46:34.126416   68676 logs.go:123] Gathering logs for coredns [5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0] ...
	I0927 01:46:34.126454   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0"
	I0927 01:46:34.166714   68676 logs.go:123] Gathering logs for kube-scheduler [22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05] ...
	I0927 01:46:34.166744   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05"
	I0927 01:46:34.206601   68676 logs.go:123] Gathering logs for kube-proxy [d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f] ...
	I0927 01:46:34.206642   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f"
	I0927 01:46:34.254352   68676 logs.go:123] Gathering logs for storage-provisioner [8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f] ...
	I0927 01:46:34.254383   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f"
	I0927 01:46:34.293318   68676 logs.go:123] Gathering logs for container status ...
	I0927 01:46:34.293347   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:46:34.340365   68676 logs.go:123] Gathering logs for kubelet ...
	I0927 01:46:34.340398   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
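	Each "Gathering logs for ..." step above shells out through /bin/bash -c, and the container-status step falls back from crictl to docker if crictl is unavailable. A minimal sketch of that run-with-fallback pattern (hypothetical helper, not minikube's ssh_runner itself):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    // runBash executes a command line through bash -c and returns its combined
	    // stdout/stderr, mirroring the "Run: /bin/bash -c ..." lines in the log.
	    func runBash(cmdline string) (string, error) {
	        out, err := exec.Command("/bin/bash", "-c", cmdline).CombinedOutput()
	        return string(out), err
	    }

	    func main() {
	        // Same fallback the "container status" step uses: prefer crictl,
	        // fall back to docker if crictl is missing or its listing fails.
	        out, err := runBash("sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
	        if err != nil {
	            fmt.Println("command failed:", err)
	        }
	        fmt.Print(out)
	    }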
	I0927 01:46:33.863782   69534 out.go:235]   - Configuring RBAC rules ...
	I0927 01:46:33.863922   69534 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0927 01:46:33.871841   69534 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0927 01:46:33.880047   69534 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0927 01:46:33.884688   69534 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0927 01:46:33.892057   69534 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0927 01:46:33.895787   69534 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0927 01:46:34.190553   69534 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0927 01:46:34.619922   69534 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0927 01:46:35.188452   69534 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0927 01:46:35.189552   69534 kubeadm.go:310] 
	I0927 01:46:35.189661   69534 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0927 01:46:35.189683   69534 kubeadm.go:310] 
	I0927 01:46:35.189791   69534 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0927 01:46:35.189806   69534 kubeadm.go:310] 
	I0927 01:46:35.189845   69534 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0927 01:46:35.189925   69534 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0927 01:46:35.190002   69534 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0927 01:46:35.190016   69534 kubeadm.go:310] 
	I0927 01:46:35.190095   69534 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0927 01:46:35.190104   69534 kubeadm.go:310] 
	I0927 01:46:35.190181   69534 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0927 01:46:35.190193   69534 kubeadm.go:310] 
	I0927 01:46:35.190264   69534 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0927 01:46:35.190387   69534 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0927 01:46:35.190484   69534 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0927 01:46:35.190498   69534 kubeadm.go:310] 
	I0927 01:46:35.190593   69534 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0927 01:46:35.190681   69534 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0927 01:46:35.190691   69534 kubeadm.go:310] 
	I0927 01:46:35.190793   69534 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token qmzafx.lhyo0l65zryygr2x \
	I0927 01:46:35.190948   69534 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e8f2b64245b2533f2f6907f851e4fd61c8df67374e05cb5be25be73a61920f8e \
	I0927 01:46:35.191002   69534 kubeadm.go:310] 	--control-plane 
	I0927 01:46:35.191021   69534 kubeadm.go:310] 
	I0927 01:46:35.191134   69534 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0927 01:46:35.191155   69534 kubeadm.go:310] 
	I0927 01:46:35.191281   69534 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token qmzafx.lhyo0l65zryygr2x \
	I0927 01:46:35.191427   69534 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e8f2b64245b2533f2f6907f851e4fd61c8df67374e05cb5be25be73a61920f8e 
	I0927 01:46:35.192564   69534 kubeadm.go:310] W0927 01:46:26.480521    2541 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 01:46:35.192905   69534 kubeadm.go:310] W0927 01:46:26.481198    2541 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 01:46:35.193078   69534 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0927 01:46:35.193093   69534 cni.go:84] Creating CNI manager for ""
	I0927 01:46:35.193102   69534 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:46:35.194656   69534 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0927 01:46:35.195835   69534 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0927 01:46:35.207162   69534 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
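	The bridge CNI step above creates /etc/cni/net.d and copies a 496-byte conflist into it. The file's contents are not shown in the log; as a rough illustration of what a bridge conflist of this kind typically contains (the subnet and plugin options below are placeholders, not minikube's actual file):

	    package main

	    import "os"

	    // Illustrative only: a typical bridge CNI conflist. The values are
	    // placeholders, not the 1-k8s.conflist that minikube writes above.
	    const conflist = `{
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isGateway": true,
	          "ipMasq": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    `

	    func main() {
	        // 0644 so the CRI runtime (CRI-O here) can read the network config.
	        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
	            panic(err)
	        }
	    }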
	I0927 01:46:35.225999   69534 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0927 01:46:35.226096   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-368295 minikube.k8s.io/updated_at=2024_09_27T01_46_35_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625 minikube.k8s.io/name=default-k8s-diff-port-368295 minikube.k8s.io/primary=true
	I0927 01:46:35.226096   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:46:35.258203   69534 ops.go:34] apiserver oom_adj: -16
	I0927 01:46:35.425367   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:46:35.926435   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:46:36.425611   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:46:36.925505   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:46:37.426329   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:46:37.926184   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:46:38.425745   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:46:38.925572   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:46:39.425831   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:46:39.508783   69534 kubeadm.go:1113] duration metric: took 4.282764601s to wait for elevateKubeSystemPrivileges
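	The run of "kubectl get sa default" calls above is a fixed-interval poll: the command is retried roughly every 500ms until the default ServiceAccount exists, which is what the 4.28s elevateKubeSystemPrivileges duration measures. A hedged sketch of such a poll loop (illustrative, not minikube's implementation):

	    package main

	    import (
	        "context"
	        "fmt"
	        "os/exec"
	        "time"
	    )

	    // waitForDefaultSA retries `kubectl get sa default` every 500ms until it
	    // succeeds or ctx expires. The paths mirror the log lines above.
	    func waitForDefaultSA(ctx context.Context, kubectl, kubeconfig string) error {
	        ticker := time.NewTicker(500 * time.Millisecond)
	        defer ticker.Stop()
	        for {
	            cmd := exec.CommandContext(ctx, "sudo", kubectl, "get", "sa", "default",
	                "--kubeconfig="+kubeconfig)
	            if err := cmd.Run(); err == nil {
	                return nil // the default ServiceAccount exists
	            }
	            select {
	            case <-ctx.Done():
	                return fmt.Errorf("default ServiceAccount not found: %w", ctx.Err())
	            case <-ticker.C:
	            }
	        }
	    }

	    func main() {
	        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	        defer cancel()
	        err := waitForDefaultSA(ctx,
	            "/var/lib/minikube/binaries/v1.31.1/kubectl",
	            "/var/lib/minikube/kubeconfig")
	        fmt.Println("done:", err)
	    }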
	I0927 01:46:39.508817   69534 kubeadm.go:394] duration metric: took 4m59.95903234s to StartCluster
	I0927 01:46:39.508838   69534 settings.go:142] acquiring lock: {Name:mk5dca3ab86dd3a71947d9d84c3d32131258c6f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:46:39.508930   69534 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 01:46:39.510771   69534 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/kubeconfig: {Name:mke01ed683bdb96463571316956510763878395f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:46:39.511005   69534 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.83 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 01:46:39.511071   69534 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0927 01:46:39.511194   69534 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-368295"
	I0927 01:46:39.511214   69534 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-368295"
	I0927 01:46:39.511230   69534 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-368295"
	I0927 01:46:39.511261   69534 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-368295"
	W0927 01:46:39.511276   69534 addons.go:243] addon metrics-server should already be in state true
	I0927 01:46:39.511325   69534 host.go:66] Checking if "default-k8s-diff-port-368295" exists ...
	I0927 01:46:39.511243   69534 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-368295"
	I0927 01:46:39.511225   69534 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-368295"
	W0927 01:46:39.511515   69534 addons.go:243] addon storage-provisioner should already be in state true
	I0927 01:46:39.511538   69534 host.go:66] Checking if "default-k8s-diff-port-368295" exists ...
	I0927 01:46:39.511223   69534 config.go:182] Loaded profile config "default-k8s-diff-port-368295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 01:46:39.511772   69534 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:46:39.511818   69534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:46:39.511844   69534 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:46:39.511772   69534 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:46:39.511877   69534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:46:39.511905   69534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:46:39.513051   69534 out.go:177] * Verifying Kubernetes components...
	I0927 01:46:39.514530   69534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:46:39.528031   69534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32777
	I0927 01:46:39.528033   69534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43693
	I0927 01:46:39.528446   69534 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:46:39.528603   69534 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:46:39.528997   69534 main.go:141] libmachine: Using API Version  1
	I0927 01:46:39.529022   69534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:46:39.529085   69534 main.go:141] libmachine: Using API Version  1
	I0927 01:46:39.529101   69534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:46:39.529210   69534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37121
	I0927 01:46:39.529421   69534 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:46:39.529721   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetState
	I0927 01:46:39.529743   69534 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:46:39.529724   69534 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:46:39.530304   69534 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:46:39.530358   69534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:46:39.530308   69534 main.go:141] libmachine: Using API Version  1
	I0927 01:46:39.530423   69534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:46:39.530762   69534 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:46:39.531337   69534 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:46:39.531389   69534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:46:39.533286   69534 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-368295"
	W0927 01:46:39.533306   69534 addons.go:243] addon default-storageclass should already be in state true
	I0927 01:46:39.533333   69534 host.go:66] Checking if "default-k8s-diff-port-368295" exists ...
	I0927 01:46:39.533656   69534 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:46:39.533692   69534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:46:39.546657   69534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44507
	I0927 01:46:39.546881   69534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42459
	I0927 01:46:39.547298   69534 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:46:39.547327   69534 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:46:39.547842   69534 main.go:141] libmachine: Using API Version  1
	I0927 01:46:39.547860   69534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:46:39.547860   69534 main.go:141] libmachine: Using API Version  1
	I0927 01:46:39.547876   69534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:46:39.548220   69534 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:46:39.548239   69534 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:46:39.548435   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetState
	I0927 01:46:39.548481   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetState
	I0927 01:46:39.550160   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:46:39.550384   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:46:39.550445   69534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41657
	I0927 01:46:39.550744   69534 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:46:39.551173   69534 main.go:141] libmachine: Using API Version  1
	I0927 01:46:39.551195   69534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:46:39.551525   69534 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:46:39.552620   69534 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:46:39.552652   69534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:46:39.552838   69534 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:46:39.552916   69534 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0927 01:46:36.914500   68676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:46:36.932340   68676 api_server.go:72] duration metric: took 4m14.883408931s to wait for apiserver process to appear ...
	I0927 01:46:36.932368   68676 api_server.go:88] waiting for apiserver healthz status ...
	I0927 01:46:36.932407   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:46:36.932465   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:46:36.967757   68676 cri.go:89] found id: "d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef"
	I0927 01:46:36.967780   68676 cri.go:89] found id: ""
	I0927 01:46:36.967787   68676 logs.go:276] 1 containers: [d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef]
	I0927 01:46:36.967832   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:36.972025   68676 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:46:36.972105   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:46:37.018403   68676 cri.go:89] found id: "703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0"
	I0927 01:46:37.018431   68676 cri.go:89] found id: ""
	I0927 01:46:37.018448   68676 logs.go:276] 1 containers: [703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0]
	I0927 01:46:37.018515   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:37.022868   68676 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:46:37.022925   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:46:37.062443   68676 cri.go:89] found id: "5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0"
	I0927 01:46:37.062466   68676 cri.go:89] found id: ""
	I0927 01:46:37.062474   68676 logs.go:276] 1 containers: [5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0]
	I0927 01:46:37.062534   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:37.066617   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:46:37.066674   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:46:37.101462   68676 cri.go:89] found id: "22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05"
	I0927 01:46:37.101489   68676 cri.go:89] found id: ""
	I0927 01:46:37.101500   68676 logs.go:276] 1 containers: [22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05]
	I0927 01:46:37.101557   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:37.105564   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:46:37.105620   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:46:37.143692   68676 cri.go:89] found id: "d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f"
	I0927 01:46:37.143719   68676 cri.go:89] found id: ""
	I0927 01:46:37.143729   68676 logs.go:276] 1 containers: [d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f]
	I0927 01:46:37.143775   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:37.148405   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:46:37.148484   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:46:37.184914   68676 cri.go:89] found id: "56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647"
	I0927 01:46:37.184943   68676 cri.go:89] found id: ""
	I0927 01:46:37.184954   68676 logs.go:276] 1 containers: [56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647]
	I0927 01:46:37.185013   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:37.189486   68676 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:46:37.189553   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:46:37.235389   68676 cri.go:89] found id: ""
	I0927 01:46:37.235416   68676 logs.go:276] 0 containers: []
	W0927 01:46:37.235424   68676 logs.go:278] No container was found matching "kindnet"
	I0927 01:46:37.235429   68676 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0927 01:46:37.235480   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0927 01:46:37.276239   68676 cri.go:89] found id: "8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f"
	I0927 01:46:37.276266   68676 cri.go:89] found id: "074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c"
	I0927 01:46:37.276272   68676 cri.go:89] found id: ""
	I0927 01:46:37.276282   68676 logs.go:276] 2 containers: [8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f 074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c]
	I0927 01:46:37.276338   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:37.280381   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:37.284423   68676 logs.go:123] Gathering logs for coredns [5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0] ...
	I0927 01:46:37.284440   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0"
	I0927 01:46:37.319790   68676 logs.go:123] Gathering logs for kube-scheduler [22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05] ...
	I0927 01:46:37.319816   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05"
	I0927 01:46:37.358818   68676 logs.go:123] Gathering logs for kube-proxy [d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f] ...
	I0927 01:46:37.358843   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f"
	I0927 01:46:37.398137   68676 logs.go:123] Gathering logs for kube-controller-manager [56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647] ...
	I0927 01:46:37.398168   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647"
	I0927 01:46:37.458672   68676 logs.go:123] Gathering logs for dmesg ...
	I0927 01:46:37.458720   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:46:37.476148   68676 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:46:37.476184   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 01:46:37.604190   68676 logs.go:123] Gathering logs for kube-apiserver [d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef] ...
	I0927 01:46:37.604223   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef"
	I0927 01:46:37.652633   68676 logs.go:123] Gathering logs for etcd [703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0] ...
	I0927 01:46:37.652671   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0"
	I0927 01:46:37.701240   68676 logs.go:123] Gathering logs for storage-provisioner [8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f] ...
	I0927 01:46:37.701273   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f"
	I0927 01:46:37.739555   68676 logs.go:123] Gathering logs for storage-provisioner [074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c] ...
	I0927 01:46:37.739583   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c"
	I0927 01:46:37.781721   68676 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:46:37.781750   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:46:38.209361   68676 logs.go:123] Gathering logs for container status ...
	I0927 01:46:38.209399   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:46:38.261628   68676 logs.go:123] Gathering logs for kubelet ...
	I0927 01:46:38.261658   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:46:39.554328   69534 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 01:46:39.554342   69534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0927 01:46:39.554362   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:46:39.554446   69534 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0927 01:46:39.554456   69534 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0927 01:46:39.554469   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:46:39.557886   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:46:39.557982   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:46:39.558093   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:46:39.558121   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:46:39.558269   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:46:39.558350   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:46:39.558369   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:46:39.558466   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:46:39.558620   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:46:39.558690   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:46:39.558740   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:46:39.558797   69534 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/default-k8s-diff-port-368295/id_rsa Username:docker}
	I0927 01:46:39.559026   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:46:39.559136   69534 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/default-k8s-diff-port-368295/id_rsa Username:docker}
	I0927 01:46:39.569570   69534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33177
	I0927 01:46:39.569981   69534 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:46:39.570364   69534 main.go:141] libmachine: Using API Version  1
	I0927 01:46:39.570383   69534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:46:39.570746   69534 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:46:39.570890   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetState
	I0927 01:46:39.572537   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:46:39.572779   69534 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0927 01:46:39.572795   69534 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0927 01:46:39.572815   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:46:39.575104   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:46:39.575384   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:46:39.575435   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:46:39.575595   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:46:39.575751   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:46:39.575844   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:46:39.575960   69534 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/default-k8s-diff-port-368295/id_rsa Username:docker}
	I0927 01:46:39.784965   69534 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 01:46:39.820986   69534 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-368295" to be "Ready" ...
	I0927 01:46:39.829323   69534 node_ready.go:49] node "default-k8s-diff-port-368295" has status "Ready":"True"
	I0927 01:46:39.829346   69534 node_ready.go:38] duration metric: took 8.333848ms for node "default-k8s-diff-port-368295" to be "Ready" ...
	I0927 01:46:39.829358   69534 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:46:39.836143   69534 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-4d7pk" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:39.940697   69534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0927 01:46:39.955239   69534 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0927 01:46:39.955264   69534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0927 01:46:40.076199   69534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 01:46:40.080720   69534 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0927 01:46:40.080746   69534 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0927 01:46:40.182698   69534 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 01:46:40.182720   69534 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0927 01:46:40.219231   69534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 01:46:40.431480   69534 main.go:141] libmachine: Making call to close driver server
	I0927 01:46:40.431505   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .Close
	I0927 01:46:40.431859   69534 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:46:40.431875   69534 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:46:40.431875   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | Closing plugin on server side
	I0927 01:46:40.431889   69534 main.go:141] libmachine: Making call to close driver server
	I0927 01:46:40.431898   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .Close
	I0927 01:46:40.432126   69534 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:46:40.432146   69534 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:46:40.432189   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | Closing plugin on server side
	I0927 01:46:40.442440   69534 main.go:141] libmachine: Making call to close driver server
	I0927 01:46:40.442468   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .Close
	I0927 01:46:40.442761   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | Closing plugin on server side
	I0927 01:46:40.442785   69534 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:46:40.442815   69534 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:46:41.044597   69534 main.go:141] libmachine: Making call to close driver server
	I0927 01:46:41.044627   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .Close
	I0927 01:46:41.044964   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | Closing plugin on server side
	I0927 01:46:41.045013   69534 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:46:41.045021   69534 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:46:41.045033   69534 main.go:141] libmachine: Making call to close driver server
	I0927 01:46:41.045041   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .Close
	I0927 01:46:41.045254   69534 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:46:41.045267   69534 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:46:41.427791   69534 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.208520131s)
	I0927 01:46:41.427843   69534 main.go:141] libmachine: Making call to close driver server
	I0927 01:46:41.427859   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .Close
	I0927 01:46:41.428175   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | Closing plugin on server side
	I0927 01:46:41.428184   69534 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:46:41.428196   69534 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:46:41.428205   69534 main.go:141] libmachine: Making call to close driver server
	I0927 01:46:41.428213   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .Close
	I0927 01:46:41.428477   69534 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:46:41.428490   69534 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:46:41.428500   69534 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-368295"
	I0927 01:46:41.430399   69534 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0927 01:46:41.431795   69534 addons.go:510] duration metric: took 1.920729429s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0927 01:46:41.844911   69534 pod_ready.go:103] pod "coredns-7c65d6cfc9-4d7pk" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:40.832698   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:46:40.838244   68676 api_server.go:279] https://192.168.50.246:8443/healthz returned 200:
	ok
	I0927 01:46:40.839252   68676 api_server.go:141] control plane version: v1.31.1
	I0927 01:46:40.839270   68676 api_server.go:131] duration metric: took 3.906895557s to wait for apiserver health ...
	I0927 01:46:40.839277   68676 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 01:46:40.839312   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:46:40.839373   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:46:40.879726   68676 cri.go:89] found id: "d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef"
	I0927 01:46:40.879753   68676 cri.go:89] found id: ""
	I0927 01:46:40.879763   68676 logs.go:276] 1 containers: [d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef]
	I0927 01:46:40.879822   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:40.884233   68676 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:46:40.884301   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:46:40.936189   68676 cri.go:89] found id: "703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0"
	I0927 01:46:40.936216   68676 cri.go:89] found id: ""
	I0927 01:46:40.936226   68676 logs.go:276] 1 containers: [703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0]
	I0927 01:46:40.936289   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:40.940805   68676 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:46:40.940885   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:46:40.978662   68676 cri.go:89] found id: "5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0"
	I0927 01:46:40.978683   68676 cri.go:89] found id: ""
	I0927 01:46:40.978693   68676 logs.go:276] 1 containers: [5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0]
	I0927 01:46:40.978757   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:40.983357   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:46:40.983428   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:46:41.027134   68676 cri.go:89] found id: "22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05"
	I0927 01:46:41.027160   68676 cri.go:89] found id: ""
	I0927 01:46:41.027170   68676 logs.go:276] 1 containers: [22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05]
	I0927 01:46:41.027229   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:41.031909   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:46:41.031986   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:46:41.077539   68676 cri.go:89] found id: "d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f"
	I0927 01:46:41.077568   68676 cri.go:89] found id: ""
	I0927 01:46:41.077577   68676 logs.go:276] 1 containers: [d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f]
	I0927 01:46:41.077638   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:41.082237   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:46:41.082314   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:46:41.122413   68676 cri.go:89] found id: "56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647"
	I0927 01:46:41.122437   68676 cri.go:89] found id: ""
	I0927 01:46:41.122446   68676 logs.go:276] 1 containers: [56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647]
	I0927 01:46:41.122501   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:41.127807   68676 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:46:41.127872   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:46:41.174287   68676 cri.go:89] found id: ""
	I0927 01:46:41.174320   68676 logs.go:276] 0 containers: []
	W0927 01:46:41.174331   68676 logs.go:278] No container was found matching "kindnet"
	I0927 01:46:41.174339   68676 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0927 01:46:41.174397   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0927 01:46:41.213192   68676 cri.go:89] found id: "8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f"
	I0927 01:46:41.213219   68676 cri.go:89] found id: "074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c"
	I0927 01:46:41.213225   68676 cri.go:89] found id: ""
	I0927 01:46:41.213234   68676 logs.go:276] 2 containers: [8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f 074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c]
	I0927 01:46:41.213298   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:41.218168   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:41.227165   68676 logs.go:123] Gathering logs for storage-provisioner [8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f] ...
	I0927 01:46:41.227194   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f"
	I0927 01:46:41.269538   68676 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:46:41.269571   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:46:41.691900   68676 logs.go:123] Gathering logs for dmesg ...
	I0927 01:46:41.691943   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:46:41.709639   68676 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:46:41.709682   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 01:46:41.829334   68676 logs.go:123] Gathering logs for etcd [703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0] ...
	I0927 01:46:41.829366   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0"
	I0927 01:46:41.886517   68676 logs.go:123] Gathering logs for kube-scheduler [22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05] ...
	I0927 01:46:41.886552   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05"
	I0927 01:46:41.933012   68676 logs.go:123] Gathering logs for kube-proxy [d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f] ...
	I0927 01:46:41.933035   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f"
	I0927 01:46:41.973881   68676 logs.go:123] Gathering logs for kube-controller-manager [56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647] ...
	I0927 01:46:41.973921   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647"
	I0927 01:46:42.032592   68676 logs.go:123] Gathering logs for container status ...
	I0927 01:46:42.032628   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:46:42.087817   68676 logs.go:123] Gathering logs for kubelet ...
	I0927 01:46:42.087856   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:46:42.162770   68676 logs.go:123] Gathering logs for kube-apiserver [d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef] ...
	I0927 01:46:42.162808   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef"
	I0927 01:46:42.213367   68676 logs.go:123] Gathering logs for coredns [5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0] ...
	I0927 01:46:42.213399   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0"
	I0927 01:46:42.254937   68676 logs.go:123] Gathering logs for storage-provisioner [074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c] ...
	I0927 01:46:42.254963   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c"
	I0927 01:46:44.804112   68676 system_pods.go:59] 8 kube-system pods found
	I0927 01:46:44.804146   68676 system_pods.go:61] "coredns-7c65d6cfc9-7q54t" [f320e945-a1d6-4109-a0cc-5bd4e3c1bfba] Running
	I0927 01:46:44.804153   68676 system_pods.go:61] "etcd-no-preload-521072" [6c63ce89-47bf-4d67-b5db-273a046c4b51] Running
	I0927 01:46:44.804158   68676 system_pods.go:61] "kube-apiserver-no-preload-521072" [e4804d4b-0532-46c7-8579-a829a6c5254c] Running
	I0927 01:46:44.804162   68676 system_pods.go:61] "kube-controller-manager-no-preload-521072" [5029e53b-ae24-41fb-aa58-14faf0440adb] Running
	I0927 01:46:44.804167   68676 system_pods.go:61] "kube-proxy-wkcb8" [ea79339c-b2f0-4cb8-ab57-4f13f689f504] Running
	I0927 01:46:44.804171   68676 system_pods.go:61] "kube-scheduler-no-preload-521072" [b70fd9f0-c131-4c13-b53f-46c650a5dcf8] Running
	I0927 01:46:44.804180   68676 system_pods.go:61] "metrics-server-6867b74b74-cc9pp" [a840ca52-d2b8-47a5-b379-30504658e0d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 01:46:44.804186   68676 system_pods.go:61] "storage-provisioner" [b4595dc3-c439-4615-95b7-2009476c779c] Running
	I0927 01:46:44.804196   68676 system_pods.go:74] duration metric: took 3.964911623s to wait for pod list to return data ...
	I0927 01:46:44.804208   68676 default_sa.go:34] waiting for default service account to be created ...
	I0927 01:46:44.807883   68676 default_sa.go:45] found service account: "default"
	I0927 01:46:44.807907   68676 default_sa.go:55] duration metric: took 3.689984ms for default service account to be created ...
	I0927 01:46:44.807917   68676 system_pods.go:116] waiting for k8s-apps to be running ...
	I0927 01:46:44.812135   68676 system_pods.go:86] 8 kube-system pods found
	I0927 01:46:44.812161   68676 system_pods.go:89] "coredns-7c65d6cfc9-7q54t" [f320e945-a1d6-4109-a0cc-5bd4e3c1bfba] Running
	I0927 01:46:44.812167   68676 system_pods.go:89] "etcd-no-preload-521072" [6c63ce89-47bf-4d67-b5db-273a046c4b51] Running
	I0927 01:46:44.812174   68676 system_pods.go:89] "kube-apiserver-no-preload-521072" [e4804d4b-0532-46c7-8579-a829a6c5254c] Running
	I0927 01:46:44.812178   68676 system_pods.go:89] "kube-controller-manager-no-preload-521072" [5029e53b-ae24-41fb-aa58-14faf0440adb] Running
	I0927 01:46:44.812185   68676 system_pods.go:89] "kube-proxy-wkcb8" [ea79339c-b2f0-4cb8-ab57-4f13f689f504] Running
	I0927 01:46:44.812190   68676 system_pods.go:89] "kube-scheduler-no-preload-521072" [b70fd9f0-c131-4c13-b53f-46c650a5dcf8] Running
	I0927 01:46:44.812200   68676 system_pods.go:89] "metrics-server-6867b74b74-cc9pp" [a840ca52-d2b8-47a5-b379-30504658e0d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 01:46:44.812209   68676 system_pods.go:89] "storage-provisioner" [b4595dc3-c439-4615-95b7-2009476c779c] Running
	I0927 01:46:44.812222   68676 system_pods.go:126] duration metric: took 4.297317ms to wait for k8s-apps to be running ...
	I0927 01:46:44.812234   68676 system_svc.go:44] waiting for kubelet service to be running ....
	I0927 01:46:44.812282   68676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 01:46:44.827911   68676 system_svc.go:56] duration metric: took 15.668154ms WaitForService to wait for kubelet
	I0927 01:46:44.827946   68676 kubeadm.go:582] duration metric: took 4m22.779012486s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 01:46:44.827964   68676 node_conditions.go:102] verifying NodePressure condition ...
	I0927 01:46:44.830688   68676 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 01:46:44.830707   68676 node_conditions.go:123] node cpu capacity is 2
	I0927 01:46:44.830716   68676 node_conditions.go:105] duration metric: took 2.747178ms to run NodePressure ...
	I0927 01:46:44.830725   68676 start.go:241] waiting for startup goroutines ...
	I0927 01:46:44.830732   68676 start.go:246] waiting for cluster config update ...
	I0927 01:46:44.830742   68676 start.go:255] writing updated cluster config ...
	I0927 01:46:44.830990   68676 ssh_runner.go:195] Run: rm -f paused
	I0927 01:46:44.881491   68676 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0927 01:46:44.884307   68676 out.go:177] * Done! kubectl is now configured to use "no-preload-521072" cluster and "default" namespace by default
	I0927 01:46:42.397038   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:46:42.397331   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:46:43.845539   69534 pod_ready.go:103] pod "coredns-7c65d6cfc9-4d7pk" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:46.343584   69534 pod_ready.go:103] pod "coredns-7c65d6cfc9-4d7pk" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:48.842505   69534 pod_ready.go:93] pod "coredns-7c65d6cfc9-4d7pk" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:48.842527   69534 pod_ready.go:82] duration metric: took 9.006354643s for pod "coredns-7c65d6cfc9-4d7pk" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:48.842537   69534 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qkbzv" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:48.846753   69534 pod_ready.go:93] pod "coredns-7c65d6cfc9-qkbzv" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:48.846771   69534 pod_ready.go:82] duration metric: took 4.228349ms for pod "coredns-7c65d6cfc9-qkbzv" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:48.846780   69534 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:48.851234   69534 pod_ready.go:93] pod "etcd-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:48.851256   69534 pod_ready.go:82] duration metric: took 4.468727ms for pod "etcd-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:48.851265   69534 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:48.855648   69534 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:48.855669   69534 pod_ready.go:82] duration metric: took 4.398439ms for pod "kube-apiserver-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:48.855678   69534 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:48.860882   69534 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:48.860898   69534 pod_ready.go:82] duration metric: took 5.214278ms for pod "kube-controller-manager-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:48.860906   69534 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kqjdq" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:49.241149   69534 pod_ready.go:93] pod "kube-proxy-kqjdq" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:49.241180   69534 pod_ready.go:82] duration metric: took 380.266777ms for pod "kube-proxy-kqjdq" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:49.241192   69534 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:49.642403   69534 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:49.642437   69534 pod_ready.go:82] duration metric: took 401.235952ms for pod "kube-scheduler-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:49.642448   69534 pod_ready.go:39] duration metric: took 9.813073515s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:46:49.642465   69534 api_server.go:52] waiting for apiserver process to appear ...
	I0927 01:46:49.642518   69534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:46:49.658847   69534 api_server.go:72] duration metric: took 10.147811957s to wait for apiserver process to appear ...
	I0927 01:46:49.658877   69534 api_server.go:88] waiting for apiserver healthz status ...
	I0927 01:46:49.658898   69534 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8444/healthz ...
	I0927 01:46:49.665899   69534 api_server.go:279] https://192.168.61.83:8444/healthz returned 200:
	ok
	I0927 01:46:49.666844   69534 api_server.go:141] control plane version: v1.31.1
	I0927 01:46:49.666867   69534 api_server.go:131] duration metric: took 7.982491ms to wait for apiserver health ...
	I0927 01:46:49.666876   69534 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 01:46:49.843377   69534 system_pods.go:59] 9 kube-system pods found
	I0927 01:46:49.843402   69534 system_pods.go:61] "coredns-7c65d6cfc9-4d7pk" [c84ab26c-2e13-437c-b059-43c8ca1d90c8] Running
	I0927 01:46:49.843408   69534 system_pods.go:61] "coredns-7c65d6cfc9-qkbzv" [e2725448-3f80-45d8-8bd8-49dcf8878f7e] Running
	I0927 01:46:49.843413   69534 system_pods.go:61] "etcd-default-k8s-diff-port-368295" [cf24c93c-bcff-4ffc-b7b2-8e69c070cf92] Running
	I0927 01:46:49.843417   69534 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-368295" [7cb4e15c-d20c-4f93-bf12-d2407edcc877] Running
	I0927 01:46:49.843420   69534 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-368295" [52bc69db-f7b9-40a2-9930-1b3bd321fecf] Running
	I0927 01:46:49.843425   69534 system_pods.go:61] "kube-proxy-kqjdq" [91b96945-0ffe-404f-a0d5-f8729d4248ce] Running
	I0927 01:46:49.843429   69534 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-368295" [bc16cdb1-6e5c-4d19-ab43-cd378a65184d] Running
	I0927 01:46:49.843437   69534 system_pods.go:61] "metrics-server-6867b74b74-d85zg" [579ae063-049c-423c-8f91-91fb4b32e4c3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 01:46:49.843443   69534 system_pods.go:61] "storage-provisioner" [aaa7a054-2eee-45ee-a9bc-c305e53e1273] Running
	I0927 01:46:49.843454   69534 system_pods.go:74] duration metric: took 176.572041ms to wait for pod list to return data ...
	I0927 01:46:49.843466   69534 default_sa.go:34] waiting for default service account to be created ...
	I0927 01:46:50.041031   69534 default_sa.go:45] found service account: "default"
	I0927 01:46:50.041053   69534 default_sa.go:55] duration metric: took 197.577565ms for default service account to be created ...
	I0927 01:46:50.041062   69534 system_pods.go:116] waiting for k8s-apps to be running ...
	I0927 01:46:50.243807   69534 system_pods.go:86] 9 kube-system pods found
	I0927 01:46:50.243834   69534 system_pods.go:89] "coredns-7c65d6cfc9-4d7pk" [c84ab26c-2e13-437c-b059-43c8ca1d90c8] Running
	I0927 01:46:50.243840   69534 system_pods.go:89] "coredns-7c65d6cfc9-qkbzv" [e2725448-3f80-45d8-8bd8-49dcf8878f7e] Running
	I0927 01:46:50.243845   69534 system_pods.go:89] "etcd-default-k8s-diff-port-368295" [cf24c93c-bcff-4ffc-b7b2-8e69c070cf92] Running
	I0927 01:46:50.243849   69534 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-368295" [7cb4e15c-d20c-4f93-bf12-d2407edcc877] Running
	I0927 01:46:50.243853   69534 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-368295" [52bc69db-f7b9-40a2-9930-1b3bd321fecf] Running
	I0927 01:46:50.243856   69534 system_pods.go:89] "kube-proxy-kqjdq" [91b96945-0ffe-404f-a0d5-f8729d4248ce] Running
	I0927 01:46:50.243860   69534 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-368295" [bc16cdb1-6e5c-4d19-ab43-cd378a65184d] Running
	I0927 01:46:50.243866   69534 system_pods.go:89] "metrics-server-6867b74b74-d85zg" [579ae063-049c-423c-8f91-91fb4b32e4c3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 01:46:50.243869   69534 system_pods.go:89] "storage-provisioner" [aaa7a054-2eee-45ee-a9bc-c305e53e1273] Running
	I0927 01:46:50.243879   69534 system_pods.go:126] duration metric: took 202.812704ms to wait for k8s-apps to be running ...
	I0927 01:46:50.243888   69534 system_svc.go:44] waiting for kubelet service to be running ....
	I0927 01:46:50.243931   69534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 01:46:50.260175   69534 system_svc.go:56] duration metric: took 16.279433ms WaitForService to wait for kubelet
	I0927 01:46:50.260203   69534 kubeadm.go:582] duration metric: took 10.749173466s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 01:46:50.260220   69534 node_conditions.go:102] verifying NodePressure condition ...
	I0927 01:46:50.441020   69534 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 01:46:50.441044   69534 node_conditions.go:123] node cpu capacity is 2
	I0927 01:46:50.441052   69534 node_conditions.go:105] duration metric: took 180.827321ms to run NodePressure ...
	I0927 01:46:50.441062   69534 start.go:241] waiting for startup goroutines ...
	I0927 01:46:50.441081   69534 start.go:246] waiting for cluster config update ...
	I0927 01:46:50.441091   69534 start.go:255] writing updated cluster config ...
	I0927 01:46:50.441338   69534 ssh_runner.go:195] Run: rm -f paused
	I0927 01:46:50.492229   69534 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0927 01:46:50.494198   69534 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-368295" cluster and "default" namespace by default
	I0927 01:47:22.398756   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:47:22.399035   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:47:22.399051   69333 kubeadm.go:310] 
	I0927 01:47:22.399125   69333 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0927 01:47:22.399167   69333 kubeadm.go:310] 		timed out waiting for the condition
	I0927 01:47:22.399176   69333 kubeadm.go:310] 
	I0927 01:47:22.399242   69333 kubeadm.go:310] 	This error is likely caused by:
	I0927 01:47:22.399326   69333 kubeadm.go:310] 		- The kubelet is not running
	I0927 01:47:22.399452   69333 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0927 01:47:22.399464   69333 kubeadm.go:310] 
	I0927 01:47:22.399627   69333 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0927 01:47:22.399702   69333 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0927 01:47:22.399750   69333 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0927 01:47:22.399763   69333 kubeadm.go:310] 
	I0927 01:47:22.399908   69333 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0927 01:47:22.400001   69333 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0927 01:47:22.400014   69333 kubeadm.go:310] 
	I0927 01:47:22.400109   69333 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0927 01:47:22.400218   69333 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0927 01:47:22.400331   69333 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0927 01:47:22.400406   69333 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0927 01:47:22.400414   69333 kubeadm.go:310] 
	I0927 01:47:22.401157   69333 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0927 01:47:22.401273   69333 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0927 01:47:22.401342   69333 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0927 01:47:22.401458   69333 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0927 01:47:22.401498   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0927 01:47:22.863316   69333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 01:47:22.878664   69333 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 01:47:22.889118   69333 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 01:47:22.889135   69333 kubeadm.go:157] found existing configuration files:
	
	I0927 01:47:22.889173   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 01:47:22.898966   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 01:47:22.899035   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 01:47:22.911280   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 01:47:22.920628   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 01:47:22.920677   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 01:47:22.929860   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 01:47:22.938794   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 01:47:22.938839   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 01:47:22.947982   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 01:47:22.956785   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 01:47:22.956837   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 01:47:22.966186   69333 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0927 01:47:23.039915   69333 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0927 01:47:23.040017   69333 kubeadm.go:310] [preflight] Running pre-flight checks
	I0927 01:47:23.189097   69333 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0927 01:47:23.189274   69333 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0927 01:47:23.189395   69333 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0927 01:47:23.400731   69333 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0927 01:47:23.402659   69333 out.go:235]   - Generating certificates and keys ...
	I0927 01:47:23.402776   69333 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0927 01:47:23.402855   69333 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0927 01:47:23.402959   69333 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0927 01:47:23.403040   69333 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0927 01:47:23.403162   69333 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0927 01:47:23.403349   69333 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0927 01:47:23.403935   69333 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0927 01:47:23.404260   69333 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0927 01:47:23.404563   69333 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0927 01:47:23.404896   69333 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0927 01:47:23.405050   69333 kubeadm.go:310] [certs] Using the existing "sa" key
	I0927 01:47:23.405121   69333 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0927 01:47:23.466908   69333 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0927 01:47:23.717009   69333 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0927 01:47:23.766225   69333 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0927 01:47:23.961488   69333 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0927 01:47:23.987846   69333 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0927 01:47:23.988724   69333 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0927 01:47:23.988790   69333 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0927 01:47:24.130550   69333 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0927 01:47:24.132276   69333 out.go:235]   - Booting up control plane ...
	I0927 01:47:24.132386   69333 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0927 01:47:24.146415   69333 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0927 01:47:24.147664   69333 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0927 01:47:24.148443   69333 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0927 01:47:24.151623   69333 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0927 01:48:04.153587   69333 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0927 01:48:04.153934   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:48:04.154129   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:48:09.154634   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:48:09.154883   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:48:19.155638   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:48:19.155844   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:48:39.156224   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:48:39.156429   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:49:19.155507   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:49:19.155754   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:49:19.155779   69333 kubeadm.go:310] 
	I0927 01:49:19.155872   69333 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0927 01:49:19.155947   69333 kubeadm.go:310] 		timed out waiting for the condition
	I0927 01:49:19.155958   69333 kubeadm.go:310] 
	I0927 01:49:19.156026   69333 kubeadm.go:310] 	This error is likely caused by:
	I0927 01:49:19.156077   69333 kubeadm.go:310] 		- The kubelet is not running
	I0927 01:49:19.156230   69333 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0927 01:49:19.156242   69333 kubeadm.go:310] 
	I0927 01:49:19.156379   69333 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0927 01:49:19.156434   69333 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0927 01:49:19.156486   69333 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0927 01:49:19.156506   69333 kubeadm.go:310] 
	I0927 01:49:19.156628   69333 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0927 01:49:19.156756   69333 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0927 01:49:19.156775   69333 kubeadm.go:310] 
	I0927 01:49:19.156925   69333 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0927 01:49:19.157022   69333 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0927 01:49:19.157112   69333 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0927 01:49:19.157191   69333 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0927 01:49:19.157202   69333 kubeadm.go:310] 
	I0927 01:49:19.158023   69333 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0927 01:49:19.158149   69333 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0927 01:49:19.158277   69333 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0927 01:49:19.158357   69333 kubeadm.go:394] duration metric: took 7m56.829434682s to StartCluster
	I0927 01:49:19.158404   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:49:19.158477   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:49:19.200705   69333 cri.go:89] found id: ""
	I0927 01:49:19.200729   69333 logs.go:276] 0 containers: []
	W0927 01:49:19.200736   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:49:19.200742   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:49:19.200791   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:49:19.240252   69333 cri.go:89] found id: ""
	I0927 01:49:19.240274   69333 logs.go:276] 0 containers: []
	W0927 01:49:19.240285   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:49:19.240292   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:49:19.240352   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:49:19.275802   69333 cri.go:89] found id: ""
	I0927 01:49:19.275826   69333 logs.go:276] 0 containers: []
	W0927 01:49:19.275834   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:49:19.275840   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:49:19.275894   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:49:19.309317   69333 cri.go:89] found id: ""
	I0927 01:49:19.309342   69333 logs.go:276] 0 containers: []
	W0927 01:49:19.309350   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:49:19.309357   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:49:19.309414   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:49:19.344778   69333 cri.go:89] found id: ""
	I0927 01:49:19.344806   69333 logs.go:276] 0 containers: []
	W0927 01:49:19.344817   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:49:19.344823   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:49:19.344882   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:49:19.379394   69333 cri.go:89] found id: ""
	I0927 01:49:19.379426   69333 logs.go:276] 0 containers: []
	W0927 01:49:19.379438   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:49:19.379445   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:49:19.379502   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:49:19.415349   69333 cri.go:89] found id: ""
	I0927 01:49:19.415376   69333 logs.go:276] 0 containers: []
	W0927 01:49:19.415384   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:49:19.415390   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:49:19.415438   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:49:19.453357   69333 cri.go:89] found id: ""
	I0927 01:49:19.453381   69333 logs.go:276] 0 containers: []
	W0927 01:49:19.453389   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:49:19.453397   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:49:19.453409   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:49:19.530384   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:49:19.530405   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:49:19.530423   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:49:19.643418   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:49:19.643453   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:49:19.688825   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:49:19.688861   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:49:19.745945   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:49:19.745983   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0927 01:49:19.762685   69333 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0927 01:49:19.762739   69333 out.go:270] * 
	W0927 01:49:19.762791   69333 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0927 01:49:19.762804   69333 out.go:270] * 
	W0927 01:49:19.763605   69333 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0927 01:49:19.767393   69333 out.go:201] 
	W0927 01:49:19.768622   69333 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0927 01:49:19.768671   69333 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0927 01:49:19.768690   69333 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0927 01:49:19.771036   69333 out.go:201] 
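The kubeadm failure above, together with the stderr warning and the minikube suggestion, points at the kubelet service itself rather than the control-plane manifests. A minimal follow-up, assuming shell access to the affected node (for a minikube VM, via 'minikube ssh'), is to run the checks already quoted in the output:

	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	sudo systemctl enable kubelet.service    # addresses the [WARNING Service-Kubelet] line in stderr
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

If the kubelet logs point at a cgroup-driver mismatch, the retry suggested above would be 'minikube start --extra-config=kubelet.cgroup-driver=systemd'.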
	
	
	==> CRI-O <==
	Sep 27 01:55:47 no-preload-521072 crio[714]: time="2024-09-27 01:55:47.060984818Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402147060957876,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=60f22fc4-f385-40b7-984d-c0a309212487 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 01:55:47 no-preload-521072 crio[714]: time="2024-09-27 01:55:47.061559675Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9cd6c463-9a6f-4d67-be03-92bfe63674ca name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:55:47 no-preload-521072 crio[714]: time="2024-09-27 01:55:47.061611977Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9cd6c463-9a6f-4d67-be03-92bfe63674ca name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:55:47 no-preload-521072 crio[714]: time="2024-09-27 01:55:47.061857357Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f,PodSandboxId:9975596dc9c0baaab8fcb6ca04f9359781fcd0d626b9b9df1ddffcbca992d80e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727401369052837039,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4595dc3-c439-4615-95b7-2009476c779c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:832a7f68eca906b8f8b78a8578c2f0afaf2986a8f73d21dc599dd73aa4aa9ca5,PodSandboxId:a20e2c9b208a01e047683e06b35b30e92411977681127879310f7d0fddfe6ad0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727401348878219805,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8c6c402f-4b67-4a90-8eb7-324f03f53585,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0,PodSandboxId:dbcf3ee6d4d0bd2218bed9a78e24dda98759d150aeea1235cb15b0b15a314ee4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727401345652004463,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7q54t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f320e945-a1d6-4109-a0cc-5bd4e3c1bfba,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c,PodSandboxId:9975596dc9c0baaab8fcb6ca04f9359781fcd0d626b9b9df1ddffcbca992d80e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727401338338708415,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
4595dc3-c439-4615-95b7-2009476c779c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f,PodSandboxId:69c5b273b68533a15a49301449049daae16fb9ab05d748cb258809958d1e2e47,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727401338309613993,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wkcb8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea79339c-b2f0-4cb8-ab57-4f13f689f5
04,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0,PodSandboxId:38a38f07872e89bba912447745d43f69ce430d0632bfd249cf1943751c31934c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727401333596293674,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-521072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b655f3bead38c68715c574a3279ec998,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05,PodSandboxId:f271cec15bbe86fb55ef28be83f91845275647eb4fcd41656b5421639fd94dce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727401333519879158,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-521072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6529bcc6dfdf213f612ff6952ca523ec,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef,PodSandboxId:76615b305a8b479ed5a2c44717fda459c726b98b1ce2fabbbf782769cf68608f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727401333504162598,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-521072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 238f74dc8cff297b820edab9dffa14f9,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2
713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647,PodSandboxId:c4ee5cc7c625309a667faa64bfe1957ae23a5241770c7c646855e08a1f5cd070,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727401333447335611,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-521072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b68fc4ed89d33bb903e1ebb161b99bd4,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9cd6c463-9a6f-4d67-be03-92bfe63674ca name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:55:47 no-preload-521072 crio[714]: time="2024-09-27 01:55:47.100236558Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f68a4aec-94f1-4955-bd4f-f2bd95043df0 name=/runtime.v1.RuntimeService/Version
	Sep 27 01:55:47 no-preload-521072 crio[714]: time="2024-09-27 01:55:47.100314229Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f68a4aec-94f1-4955-bd4f-f2bd95043df0 name=/runtime.v1.RuntimeService/Version
	Sep 27 01:55:47 no-preload-521072 crio[714]: time="2024-09-27 01:55:47.101316123Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ef9bb0a0-4b00-44a9-bb60-17a6233bbe94 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 01:55:47 no-preload-521072 crio[714]: time="2024-09-27 01:55:47.101711680Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402147101621710,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ef9bb0a0-4b00-44a9-bb60-17a6233bbe94 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 01:55:47 no-preload-521072 crio[714]: time="2024-09-27 01:55:47.102432389Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bfd63d54-11a1-4f5d-a0c5-18c563382772 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:55:47 no-preload-521072 crio[714]: time="2024-09-27 01:55:47.102483443Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bfd63d54-11a1-4f5d-a0c5-18c563382772 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:55:47 no-preload-521072 crio[714]: time="2024-09-27 01:55:47.102734926Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f,PodSandboxId:9975596dc9c0baaab8fcb6ca04f9359781fcd0d626b9b9df1ddffcbca992d80e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727401369052837039,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4595dc3-c439-4615-95b7-2009476c779c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:832a7f68eca906b8f8b78a8578c2f0afaf2986a8f73d21dc599dd73aa4aa9ca5,PodSandboxId:a20e2c9b208a01e047683e06b35b30e92411977681127879310f7d0fddfe6ad0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727401348878219805,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8c6c402f-4b67-4a90-8eb7-324f03f53585,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0,PodSandboxId:dbcf3ee6d4d0bd2218bed9a78e24dda98759d150aeea1235cb15b0b15a314ee4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727401345652004463,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7q54t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f320e945-a1d6-4109-a0cc-5bd4e3c1bfba,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c,PodSandboxId:9975596dc9c0baaab8fcb6ca04f9359781fcd0d626b9b9df1ddffcbca992d80e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727401338338708415,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
4595dc3-c439-4615-95b7-2009476c779c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f,PodSandboxId:69c5b273b68533a15a49301449049daae16fb9ab05d748cb258809958d1e2e47,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727401338309613993,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wkcb8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea79339c-b2f0-4cb8-ab57-4f13f689f5
04,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0,PodSandboxId:38a38f07872e89bba912447745d43f69ce430d0632bfd249cf1943751c31934c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727401333596293674,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-521072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b655f3bead38c68715c574a3279ec998,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05,PodSandboxId:f271cec15bbe86fb55ef28be83f91845275647eb4fcd41656b5421639fd94dce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727401333519879158,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-521072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6529bcc6dfdf213f612ff6952ca523ec,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef,PodSandboxId:76615b305a8b479ed5a2c44717fda459c726b98b1ce2fabbbf782769cf68608f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727401333504162598,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-521072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 238f74dc8cff297b820edab9dffa14f9,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2
713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647,PodSandboxId:c4ee5cc7c625309a667faa64bfe1957ae23a5241770c7c646855e08a1f5cd070,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727401333447335611,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-521072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b68fc4ed89d33bb903e1ebb161b99bd4,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bfd63d54-11a1-4f5d-a0c5-18c563382772 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:55:47 no-preload-521072 crio[714]: time="2024-09-27 01:55:47.139750573Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a6da2d40-6db5-4770-bb26-c7eefbccc528 name=/runtime.v1.RuntimeService/Version
	Sep 27 01:55:47 no-preload-521072 crio[714]: time="2024-09-27 01:55:47.139827078Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a6da2d40-6db5-4770-bb26-c7eefbccc528 name=/runtime.v1.RuntimeService/Version
	Sep 27 01:55:47 no-preload-521072 crio[714]: time="2024-09-27 01:55:47.141054653Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2f6050f5-c7a2-4099-a451-c9a888db9aaa name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 01:55:47 no-preload-521072 crio[714]: time="2024-09-27 01:55:47.141514757Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402147141487266,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2f6050f5-c7a2-4099-a451-c9a888db9aaa name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 01:55:47 no-preload-521072 crio[714]: time="2024-09-27 01:55:47.142160137Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a66f361f-20f4-4b5f-899b-ed4ad50df791 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:55:47 no-preload-521072 crio[714]: time="2024-09-27 01:55:47.142241655Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a66f361f-20f4-4b5f-899b-ed4ad50df791 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:55:47 no-preload-521072 crio[714]: time="2024-09-27 01:55:47.142446182Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f,PodSandboxId:9975596dc9c0baaab8fcb6ca04f9359781fcd0d626b9b9df1ddffcbca992d80e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727401369052837039,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4595dc3-c439-4615-95b7-2009476c779c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:832a7f68eca906b8f8b78a8578c2f0afaf2986a8f73d21dc599dd73aa4aa9ca5,PodSandboxId:a20e2c9b208a01e047683e06b35b30e92411977681127879310f7d0fddfe6ad0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727401348878219805,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8c6c402f-4b67-4a90-8eb7-324f03f53585,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0,PodSandboxId:dbcf3ee6d4d0bd2218bed9a78e24dda98759d150aeea1235cb15b0b15a314ee4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727401345652004463,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7q54t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f320e945-a1d6-4109-a0cc-5bd4e3c1bfba,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c,PodSandboxId:9975596dc9c0baaab8fcb6ca04f9359781fcd0d626b9b9df1ddffcbca992d80e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727401338338708415,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
4595dc3-c439-4615-95b7-2009476c779c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f,PodSandboxId:69c5b273b68533a15a49301449049daae16fb9ab05d748cb258809958d1e2e47,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727401338309613993,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wkcb8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea79339c-b2f0-4cb8-ab57-4f13f689f5
04,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0,PodSandboxId:38a38f07872e89bba912447745d43f69ce430d0632bfd249cf1943751c31934c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727401333596293674,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-521072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b655f3bead38c68715c574a3279ec998,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05,PodSandboxId:f271cec15bbe86fb55ef28be83f91845275647eb4fcd41656b5421639fd94dce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727401333519879158,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-521072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6529bcc6dfdf213f612ff6952ca523ec,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef,PodSandboxId:76615b305a8b479ed5a2c44717fda459c726b98b1ce2fabbbf782769cf68608f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727401333504162598,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-521072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 238f74dc8cff297b820edab9dffa14f9,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2
713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647,PodSandboxId:c4ee5cc7c625309a667faa64bfe1957ae23a5241770c7c646855e08a1f5cd070,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727401333447335611,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-521072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b68fc4ed89d33bb903e1ebb161b99bd4,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a66f361f-20f4-4b5f-899b-ed4ad50df791 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:55:47 no-preload-521072 crio[714]: time="2024-09-27 01:55:47.182925537Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6ca40f1e-672d-4d19-95cc-5e1bef3f4ed2 name=/runtime.v1.RuntimeService/Version
	Sep 27 01:55:47 no-preload-521072 crio[714]: time="2024-09-27 01:55:47.183007915Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6ca40f1e-672d-4d19-95cc-5e1bef3f4ed2 name=/runtime.v1.RuntimeService/Version
	Sep 27 01:55:47 no-preload-521072 crio[714]: time="2024-09-27 01:55:47.184453683Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3fa7a5a2-588f-49b4-a812-ba79f0c9673f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 01:55:47 no-preload-521072 crio[714]: time="2024-09-27 01:55:47.184966798Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402147184931244,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3fa7a5a2-588f-49b4-a812-ba79f0c9673f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 01:55:47 no-preload-521072 crio[714]: time="2024-09-27 01:55:47.185710615Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0caba134-88f3-4126-a520-ee04c75611fb name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:55:47 no-preload-521072 crio[714]: time="2024-09-27 01:55:47.185767227Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0caba134-88f3-4126-a520-ee04c75611fb name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:55:47 no-preload-521072 crio[714]: time="2024-09-27 01:55:47.186016794Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f,PodSandboxId:9975596dc9c0baaab8fcb6ca04f9359781fcd0d626b9b9df1ddffcbca992d80e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727401369052837039,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4595dc3-c439-4615-95b7-2009476c779c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:832a7f68eca906b8f8b78a8578c2f0afaf2986a8f73d21dc599dd73aa4aa9ca5,PodSandboxId:a20e2c9b208a01e047683e06b35b30e92411977681127879310f7d0fddfe6ad0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727401348878219805,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8c6c402f-4b67-4a90-8eb7-324f03f53585,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0,PodSandboxId:dbcf3ee6d4d0bd2218bed9a78e24dda98759d150aeea1235cb15b0b15a314ee4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727401345652004463,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7q54t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f320e945-a1d6-4109-a0cc-5bd4e3c1bfba,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c,PodSandboxId:9975596dc9c0baaab8fcb6ca04f9359781fcd0d626b9b9df1ddffcbca992d80e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727401338338708415,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
4595dc3-c439-4615-95b7-2009476c779c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f,PodSandboxId:69c5b273b68533a15a49301449049daae16fb9ab05d748cb258809958d1e2e47,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727401338309613993,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wkcb8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea79339c-b2f0-4cb8-ab57-4f13f689f5
04,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0,PodSandboxId:38a38f07872e89bba912447745d43f69ce430d0632bfd249cf1943751c31934c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727401333596293674,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-521072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b655f3bead38c68715c574a3279ec998,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05,PodSandboxId:f271cec15bbe86fb55ef28be83f91845275647eb4fcd41656b5421639fd94dce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727401333519879158,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-521072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6529bcc6dfdf213f612ff6952ca523ec,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef,PodSandboxId:76615b305a8b479ed5a2c44717fda459c726b98b1ce2fabbbf782769cf68608f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727401333504162598,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-521072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 238f74dc8cff297b820edab9dffa14f9,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2
713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647,PodSandboxId:c4ee5cc7c625309a667faa64bfe1957ae23a5241770c7c646855e08a1f5cd070,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727401333447335611,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-521072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b68fc4ed89d33bb903e1ebb161b99bd4,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0caba134-88f3-4126-a520-ee04c75611fb name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8b91015e1bfce       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   9975596dc9c0b       storage-provisioner
	832a7f68eca90       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   a20e2c9b208a0       busybox
	5a757b127a9ab       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago      Running             coredns                   1                   dbcf3ee6d4d0b       coredns-7c65d6cfc9-7q54t
	074b4636352f0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   9975596dc9c0b       storage-provisioner
	d44b4389046f9       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      13 minutes ago      Running             kube-proxy                1                   69c5b273b6853       kube-proxy-wkcb8
	703936dc7e81f       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   38a38f07872e8       etcd-no-preload-521072
	22e50606ae328       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      13 minutes ago      Running             kube-scheduler            1                   f271cec15bbe8       kube-scheduler-no-preload-521072
	d5488a6ee0ac8       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      13 minutes ago      Running             kube-apiserver            1                   76615b305a8b4       kube-apiserver-no-preload-521072
	56ed48053950b       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      13 minutes ago      Running             kube-controller-manager   1                   c4ee5cc7c6253       kube-controller-manager-no-preload-521072
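The container status table above is consistent with the ListContainers responses in the CRI-O log: the control-plane containers are on restart attempt 1, and storage-provisioner has one exited attempt before its current run. To inspect an individual container from this table, the crictl invocation quoted earlier in the kubeadm output applies; for example, using the ID prefix of the exited storage-provisioner row above:

	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs 074b4636352f0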
	
	
	==> coredns [5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:46063 - 60754 "HINFO IN 4081009560286700448.717705552608654863. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.034835274s
	
	
	==> describe nodes <==
	Name:               no-preload-521072
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-521072
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=no-preload-521072
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_27T01_32_52_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 01:32:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-521072
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 01:55:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 01:52:58 +0000   Fri, 27 Sep 2024 01:32:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 01:52:58 +0000   Fri, 27 Sep 2024 01:32:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 01:52:58 +0000   Fri, 27 Sep 2024 01:32:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 01:52:58 +0000   Fri, 27 Sep 2024 01:42:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.246
	  Hostname:    no-preload-521072
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b4d3d92178f544bd8b9e5f9464d5796b
	  System UUID:                b4d3d921-78f5-44bd-8b9e-5f9464d5796b
	  Boot ID:                    125f112c-b20d-4947-b382-b5df32c753c4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-7c65d6cfc9-7q54t                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 etcd-no-preload-521072                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kube-apiserver-no-preload-521072             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-no-preload-521072    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-wkcb8                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-no-preload-521072             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-6867b74b74-cc9pp              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         22m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     22m                kubelet          Node no-preload-521072 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node no-preload-521072 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node no-preload-521072 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeReady                22m                kubelet          Node no-preload-521072 status is now: NodeReady
	  Normal  RegisteredNode           22m                node-controller  Node no-preload-521072 event: Registered Node no-preload-521072 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-521072 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-521072 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-521072 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-521072 event: Registered Node no-preload-521072 in Controller
	
	
	==> dmesg <==
	[Sep27 01:41] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051871] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041737] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.026914] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.547868] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.621717] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000010] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.271130] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.064478] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062998] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.174303] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +0.172334] systemd-fstab-generator[674]: Ignoring "noauto" option for root device
	[  +0.308872] systemd-fstab-generator[704]: Ignoring "noauto" option for root device
	[Sep27 01:42] systemd-fstab-generator[1242]: Ignoring "noauto" option for root device
	[  +0.058722] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.334460] systemd-fstab-generator[1364]: Ignoring "noauto" option for root device
	[  +3.304612] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.195729] systemd-fstab-generator[1998]: Ignoring "noauto" option for root device
	[  +0.118715] kauditd_printk_skb: 37 callbacks suppressed
	[  +6.726397] kauditd_printk_skb: 65 callbacks suppressed
	
	
	==> etcd [703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0] <==
	{"level":"info","ts":"2024-09-27T01:42:14.460141Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-27T01:42:14.463278Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-27T01:42:14.466926Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"26a48da650cf9008","initial-advertise-peer-urls":["https://192.168.50.246:2380"],"listen-peer-urls":["https://192.168.50.246:2380"],"advertise-client-urls":["https://192.168.50.246:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.246:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-27T01:42:14.466981Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-27T01:42:14.463734Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.246:2380"}
	{"level":"info","ts":"2024-09-27T01:42:14.467697Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.246:2380"}
	{"level":"info","ts":"2024-09-27T01:42:15.763221Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"26a48da650cf9008 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-27T01:42:15.763284Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"26a48da650cf9008 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-27T01:42:15.763319Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"26a48da650cf9008 received MsgPreVoteResp from 26a48da650cf9008 at term 2"}
	{"level":"info","ts":"2024-09-27T01:42:15.763350Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"26a48da650cf9008 became candidate at term 3"}
	{"level":"info","ts":"2024-09-27T01:42:15.763356Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"26a48da650cf9008 received MsgVoteResp from 26a48da650cf9008 at term 3"}
	{"level":"info","ts":"2024-09-27T01:42:15.763367Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"26a48da650cf9008 became leader at term 3"}
	{"level":"info","ts":"2024-09-27T01:42:15.763380Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 26a48da650cf9008 elected leader 26a48da650cf9008 at term 3"}
	{"level":"info","ts":"2024-09-27T01:42:15.765434Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-27T01:42:15.766816Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"26a48da650cf9008","local-member-attributes":"{Name:no-preload-521072 ClientURLs:[https://192.168.50.246:2379]}","request-path":"/0/members/26a48da650cf9008/attributes","cluster-id":"4445e918310c0aa2","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-27T01:42:15.767061Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-27T01:42:15.767256Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-27T01:42:15.767390Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-27T01:42:15.767430Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-27T01:42:15.768138Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-27T01:42:15.768213Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-27T01:42:15.768910Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.246:2379"}
	{"level":"info","ts":"2024-09-27T01:52:15.803326Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":838}
	{"level":"info","ts":"2024-09-27T01:52:15.813520Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":838,"took":"9.577902ms","hash":193172960,"current-db-size-bytes":2719744,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2719744,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-09-27T01:52:15.813612Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":193172960,"revision":838,"compact-revision":-1}
	
	
	==> kernel <==
	 01:55:47 up 14 min,  0 users,  load average: 0.27, 0.17, 0.11
	Linux no-preload-521072 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef] <==
	W0927 01:52:18.123772       1 handler_proxy.go:99] no RequestInfo found in the context
	E0927 01:52:18.123944       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0927 01:52:18.125179       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0927 01:52:18.125225       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0927 01:53:18.126242       1 handler_proxy.go:99] no RequestInfo found in the context
	E0927 01:53:18.126325       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0927 01:53:18.126377       1 handler_proxy.go:99] no RequestInfo found in the context
	E0927 01:53:18.126410       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0927 01:53:18.127463       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0927 01:53:18.127508       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0927 01:55:18.128068       1 handler_proxy.go:99] no RequestInfo found in the context
	E0927 01:55:18.128406       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0927 01:55:18.128110       1 handler_proxy.go:99] no RequestInfo found in the context
	E0927 01:55:18.128567       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0927 01:55:18.129742       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0927 01:55:18.129773       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647] <==
	E0927 01:50:22.742397       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 01:50:23.222168       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0927 01:50:52.749341       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 01:50:53.229520       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0927 01:51:22.756351       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 01:51:23.237728       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0927 01:51:52.762579       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 01:51:53.246341       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0927 01:52:22.769286       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 01:52:23.254262       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0927 01:52:52.776025       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 01:52:53.261885       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0927 01:52:58.994720       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-521072"
	E0927 01:53:22.782792       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 01:53:23.270729       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0927 01:53:34.874442       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="305.514µs"
	I0927 01:53:47.874020       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="187.119µs"
	E0927 01:53:52.788778       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 01:53:53.279373       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0927 01:54:22.799215       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 01:54:23.287302       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0927 01:54:52.805977       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 01:54:53.295843       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0927 01:55:22.812131       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 01:55:23.305758       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0927 01:42:18.546176       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0927 01:42:18.554753       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.246"]
	E0927 01:42:18.554979       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0927 01:42:18.591620       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0927 01:42:18.591775       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0927 01:42:18.591818       1 server_linux.go:169] "Using iptables Proxier"
	I0927 01:42:18.594391       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0927 01:42:18.594803       1 server.go:483] "Version info" version="v1.31.1"
	I0927 01:42:18.594851       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 01:42:18.596622       1 config.go:199] "Starting service config controller"
	I0927 01:42:18.596755       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0927 01:42:18.596808       1 config.go:105] "Starting endpoint slice config controller"
	I0927 01:42:18.596826       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0927 01:42:18.597297       1 config.go:328] "Starting node config controller"
	I0927 01:42:18.597760       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0927 01:42:18.697461       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0927 01:42:18.697601       1 shared_informer.go:320] Caches are synced for service config
	I0927 01:42:18.699030       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05] <==
	I0927 01:42:14.776806       1 serving.go:386] Generated self-signed cert in-memory
	W0927 01:42:17.114966       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0927 01:42:17.115081       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0927 01:42:17.115094       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0927 01:42:17.115102       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0927 01:42:17.155844       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0927 01:42:17.155891       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 01:42:17.161078       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0927 01:42:17.161500       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0927 01:42:17.161896       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0927 01:42:17.162156       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0927 01:42:17.263186       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 27 01:54:38 no-preload-521072 kubelet[1371]: E0927 01:54:38.857889    1371 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-cc9pp" podUID="a840ca52-d2b8-47a5-b379-30504658e0d0"
	Sep 27 01:54:43 no-preload-521072 kubelet[1371]: E0927 01:54:43.056358    1371 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402083055963951,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:54:43 no-preload-521072 kubelet[1371]: E0927 01:54:43.056854    1371 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402083055963951,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:54:52 no-preload-521072 kubelet[1371]: E0927 01:54:52.857449    1371 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-cc9pp" podUID="a840ca52-d2b8-47a5-b379-30504658e0d0"
	Sep 27 01:54:53 no-preload-521072 kubelet[1371]: E0927 01:54:53.058382    1371 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402093058119998,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:54:53 no-preload-521072 kubelet[1371]: E0927 01:54:53.058405    1371 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402093058119998,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:55:03 no-preload-521072 kubelet[1371]: E0927 01:55:03.059823    1371 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402103059357765,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:55:03 no-preload-521072 kubelet[1371]: E0927 01:55:03.060267    1371 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402103059357765,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:55:06 no-preload-521072 kubelet[1371]: E0927 01:55:06.859744    1371 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-cc9pp" podUID="a840ca52-d2b8-47a5-b379-30504658e0d0"
	Sep 27 01:55:12 no-preload-521072 kubelet[1371]: E0927 01:55:12.881560    1371 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 27 01:55:12 no-preload-521072 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 27 01:55:12 no-preload-521072 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 27 01:55:12 no-preload-521072 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 27 01:55:12 no-preload-521072 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 27 01:55:13 no-preload-521072 kubelet[1371]: E0927 01:55:13.062080    1371 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402113061581840,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:55:13 no-preload-521072 kubelet[1371]: E0927 01:55:13.062125    1371 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402113061581840,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:55:18 no-preload-521072 kubelet[1371]: E0927 01:55:18.857871    1371 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-cc9pp" podUID="a840ca52-d2b8-47a5-b379-30504658e0d0"
	Sep 27 01:55:23 no-preload-521072 kubelet[1371]: E0927 01:55:23.064569    1371 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402123064158512,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:55:23 no-preload-521072 kubelet[1371]: E0927 01:55:23.065073    1371 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402123064158512,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:55:30 no-preload-521072 kubelet[1371]: E0927 01:55:30.860393    1371 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-cc9pp" podUID="a840ca52-d2b8-47a5-b379-30504658e0d0"
	Sep 27 01:55:33 no-preload-521072 kubelet[1371]: E0927 01:55:33.067402    1371 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402133067015301,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:55:33 no-preload-521072 kubelet[1371]: E0927 01:55:33.068092    1371 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402133067015301,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:55:43 no-preload-521072 kubelet[1371]: E0927 01:55:43.070249    1371 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402143069861455,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:55:43 no-preload-521072 kubelet[1371]: E0927 01:55:43.070299    1371 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402143069861455,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:55:45 no-preload-521072 kubelet[1371]: E0927 01:55:45.857118    1371 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-cc9pp" podUID="a840ca52-d2b8-47a5-b379-30504658e0d0"
	
	
	==> storage-provisioner [074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c] <==
	I0927 01:42:18.478720       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0927 01:42:48.482506       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f] <==
	I0927 01:42:49.151762       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0927 01:42:49.162005       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0927 01:42:49.162080       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0927 01:43:06.562249       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0927 01:43:06.562385       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-521072_eba5b60f-c2e6-43e6-bc1c-a3ec146ac13a!
	I0927 01:43:06.565059       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f7c9c51c-2666-4847-92a6-a6408cdf07dd", APIVersion:"v1", ResourceVersion:"618", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-521072_eba5b60f-c2e6-43e6-bc1c-a3ec146ac13a became leader
	I0927 01:43:06.664485       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-521072_eba5b60f-c2e6-43e6-bc1c-a3ec146ac13a!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-521072 -n no-preload-521072
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-521072 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-cc9pp
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-521072 describe pod metrics-server-6867b74b74-cc9pp
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-521072 describe pod metrics-server-6867b74b74-cc9pp: exit status 1 (59.006395ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-cc9pp" not found

** /stderr **
helpers_test.go:279: kubectl --context no-preload-521072 describe pod metrics-server-6867b74b74-cc9pp: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.38s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.43s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0927 01:48:01.244841   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-368295 -n default-k8s-diff-port-368295
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-09-27 01:55:51.017452696 +0000 UTC m=+6067.203060949
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-368295 -n default-k8s-diff-port-368295
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-368295 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-368295 logs -n 25: (2.255154s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p NoKubernetes-719096 sudo                            | NoKubernetes-719096          | jenkins | v1.34.0 | 27 Sep 24 01:32 UTC |                     |
	|         | systemctl is-active --quiet                            |                              |         |         |                     |                     |
	|         | service kubelet                                        |                              |         |         |                     |                     |
	| stop    | -p NoKubernetes-719096                                 | NoKubernetes-719096          | jenkins | v1.34.0 | 27 Sep 24 01:32 UTC | 27 Sep 24 01:32 UTC |
	| start   | -p NoKubernetes-719096                                 | NoKubernetes-719096          | jenkins | v1.34.0 | 27 Sep 24 01:32 UTC | 27 Sep 24 01:33 UTC |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| ssh     | -p NoKubernetes-719096 sudo                            | NoKubernetes-719096          | jenkins | v1.34.0 | 27 Sep 24 01:33 UTC |                     |
	|         | systemctl is-active --quiet                            |                              |         |         |                     |                     |
	|         | service kubelet                                        |                              |         |         |                     |                     |
	| delete  | -p NoKubernetes-719096                                 | NoKubernetes-719096          | jenkins | v1.34.0 | 27 Sep 24 01:33 UTC | 27 Sep 24 01:33 UTC |
	| start   | -p embed-certs-245911                                  | embed-certs-245911           | jenkins | v1.34.0 | 27 Sep 24 01:33 UTC | 27 Sep 24 01:34 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-521072             | no-preload-521072            | jenkins | v1.34.0 | 27 Sep 24 01:33 UTC | 27 Sep 24 01:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-521072                                   | no-preload-521072            | jenkins | v1.34.0 | 27 Sep 24 01:33 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-595331                              | cert-expiration-595331       | jenkins | v1.34.0 | 27 Sep 24 01:33 UTC | 27 Sep 24 01:33 UTC |
	| delete  | -p                                                     | disable-driver-mounts-630210 | jenkins | v1.34.0 | 27 Sep 24 01:33 UTC | 27 Sep 24 01:33 UTC |
	|         | disable-driver-mounts-630210                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-368295 | jenkins | v1.34.0 | 27 Sep 24 01:33 UTC | 27 Sep 24 01:35 UTC |
	|         | default-k8s-diff-port-368295                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-245911            | embed-certs-245911           | jenkins | v1.34.0 | 27 Sep 24 01:34 UTC | 27 Sep 24 01:34 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-245911                                  | embed-certs-245911           | jenkins | v1.34.0 | 27 Sep 24 01:34 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-368295  | default-k8s-diff-port-368295 | jenkins | v1.34.0 | 27 Sep 24 01:35 UTC | 27 Sep 24 01:35 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-368295 | jenkins | v1.34.0 | 27 Sep 24 01:35 UTC |                     |
	|         | default-k8s-diff-port-368295                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-521072                  | no-preload-521072            | jenkins | v1.34.0 | 27 Sep 24 01:35 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-612261        | old-k8s-version-612261       | jenkins | v1.34.0 | 27 Sep 24 01:35 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p no-preload-521072                                   | no-preload-521072            | jenkins | v1.34.0 | 27 Sep 24 01:35 UTC | 27 Sep 24 01:46 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-245911                 | embed-certs-245911           | jenkins | v1.34.0 | 27 Sep 24 01:37 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-612261                              | old-k8s-version-612261       | jenkins | v1.34.0 | 27 Sep 24 01:37 UTC | 27 Sep 24 01:37 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-245911                                  | embed-certs-245911           | jenkins | v1.34.0 | 27 Sep 24 01:37 UTC | 27 Sep 24 01:46 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-612261             | old-k8s-version-612261       | jenkins | v1.34.0 | 27 Sep 24 01:37 UTC | 27 Sep 24 01:37 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-612261                              | old-k8s-version-612261       | jenkins | v1.34.0 | 27 Sep 24 01:37 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-368295       | default-k8s-diff-port-368295 | jenkins | v1.34.0 | 27 Sep 24 01:37 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-368295 | jenkins | v1.34.0 | 27 Sep 24 01:37 UTC | 27 Sep 24 01:46 UTC |
	|         | default-k8s-diff-port-368295                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 01:37:48
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 01:37:48.335921   69534 out.go:345] Setting OutFile to fd 1 ...
	I0927 01:37:48.336188   69534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 01:37:48.336196   69534 out.go:358] Setting ErrFile to fd 2...
	I0927 01:37:48.336201   69534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 01:37:48.336368   69534 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-14935/.minikube/bin
	I0927 01:37:48.336901   69534 out.go:352] Setting JSON to false
	I0927 01:37:48.337754   69534 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8413,"bootTime":1727392655,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 01:37:48.337841   69534 start.go:139] virtualization: kvm guest
	I0927 01:37:48.340035   69534 out.go:177] * [default-k8s-diff-port-368295] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0927 01:37:48.341151   69534 out.go:177]   - MINIKUBE_LOCATION=19711
	I0927 01:37:48.341211   69534 notify.go:220] Checking for updates...
	I0927 01:37:48.343607   69534 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 01:37:48.344933   69534 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 01:37:48.346113   69534 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-14935/.minikube
	I0927 01:37:48.347142   69534 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0927 01:37:48.348261   69534 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 01:37:48.349842   69534 config.go:182] Loaded profile config "default-k8s-diff-port-368295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 01:37:48.350212   69534 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:37:48.350278   69534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:37:48.365272   69534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44347
	I0927 01:37:48.365662   69534 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:37:48.366137   69534 main.go:141] libmachine: Using API Version  1
	I0927 01:37:48.366162   69534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:37:48.366548   69534 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:37:48.366713   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:37:48.366938   69534 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 01:37:48.367236   69534 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:37:48.367265   69534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:37:48.381678   69534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39857
	I0927 01:37:48.382169   69534 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:37:48.382627   69534 main.go:141] libmachine: Using API Version  1
	I0927 01:37:48.382650   69534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:37:48.382911   69534 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:37:48.383023   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:37:48.415092   69534 out.go:177] * Using the kvm2 driver based on existing profile
	I0927 01:37:48.416340   69534 start.go:297] selected driver: kvm2
	I0927 01:37:48.416354   69534 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-368295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-368295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.83 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 01:37:48.416459   69534 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 01:37:48.417093   69534 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 01:37:48.417164   69534 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19711-14935/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0927 01:37:48.432138   69534 install.go:137] /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I0927 01:37:48.432534   69534 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 01:37:48.432563   69534 cni.go:84] Creating CNI manager for ""
	I0927 01:37:48.432604   69534 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:37:48.432635   69534 start.go:340] cluster config:
	{Name:default-k8s-diff-port-368295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-368295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.83 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 01:37:48.432737   69534 iso.go:125] acquiring lock: {Name:mkc202a14fbe20838e31e7efc444c4f65351f9ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 01:37:48.435057   69534 out.go:177] * Starting "default-k8s-diff-port-368295" primary control-plane node in "default-k8s-diff-port-368295" cluster
	I0927 01:37:48.436502   69534 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 01:37:48.436543   69534 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0927 01:37:48.436557   69534 cache.go:56] Caching tarball of preloaded images
	I0927 01:37:48.436624   69534 preload.go:172] Found /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0927 01:37:48.436634   69534 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0927 01:37:48.436718   69534 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/config.json ...
	I0927 01:37:48.436885   69534 start.go:360] acquireMachinesLock for default-k8s-diff-port-368295: {Name:mkef866a3f2eb57063758ef06140cca3a2e40b18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 01:37:50.823565   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:37:53.895575   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:37:59.975554   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:03.047567   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:09.127558   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:12.199592   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:18.279516   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:21.351643   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:27.435515   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:30.503604   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:36.583590   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:39.655593   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:45.735581   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:48.807587   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:54.887542   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:57.959601   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:04.039570   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:07.111555   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:13.191559   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:16.263625   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:22.343607   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:25.415561   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:31.495531   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:34.567598   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:40.647577   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:43.719602   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:49.799620   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:52.871596   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:58.951600   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:40:02.023635   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:40:08.103596   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:40:11.175614   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:40:17.255583   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:40:20.327522   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:40:26.407598   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:40:29.479580   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:40:32.484148   69234 start.go:364] duration metric: took 3m6.827897292s to acquireMachinesLock for "embed-certs-245911"
	I0927 01:40:32.484202   69234 start.go:96] Skipping create...Using existing machine configuration
	I0927 01:40:32.484210   69234 fix.go:54] fixHost starting: 
	I0927 01:40:32.484708   69234 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:40:32.484758   69234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:40:32.500356   69234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41925
	I0927 01:40:32.500869   69234 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:40:32.501356   69234 main.go:141] libmachine: Using API Version  1
	I0927 01:40:32.501376   69234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:40:32.501678   69234 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:40:32.501872   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	I0927 01:40:32.502014   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetState
	I0927 01:40:32.503863   69234 fix.go:112] recreateIfNeeded on embed-certs-245911: state=Stopped err=<nil>
	I0927 01:40:32.503884   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	W0927 01:40:32.504047   69234 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 01:40:32.506829   69234 out.go:177] * Restarting existing kvm2 VM for "embed-certs-245911" ...
	I0927 01:40:32.481407   68676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 01:40:32.481445   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetMachineName
	I0927 01:40:32.481786   68676 buildroot.go:166] provisioning hostname "no-preload-521072"
	I0927 01:40:32.481815   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetMachineName
	I0927 01:40:32.482031   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:40:32.483999   68676 machine.go:96] duration metric: took 4m37.428764548s to provisionDockerMachine
	I0927 01:40:32.484048   68676 fix.go:56] duration metric: took 4m37.449461246s for fixHost
	I0927 01:40:32.484055   68676 start.go:83] releasing machines lock for "no-preload-521072", held for 4m37.449534693s
	W0927 01:40:32.484075   68676 start.go:714] error starting host: provision: host is not running
	W0927 01:40:32.484176   68676 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0927 01:40:32.484183   68676 start.go:729] Will try again in 5 seconds ...
	I0927 01:40:32.508417   69234 main.go:141] libmachine: (embed-certs-245911) Calling .Start
	I0927 01:40:32.508598   69234 main.go:141] libmachine: (embed-certs-245911) Ensuring networks are active...
	I0927 01:40:32.509477   69234 main.go:141] libmachine: (embed-certs-245911) Ensuring network default is active
	I0927 01:40:32.509830   69234 main.go:141] libmachine: (embed-certs-245911) Ensuring network mk-embed-certs-245911 is active
	I0927 01:40:32.510208   69234 main.go:141] libmachine: (embed-certs-245911) Getting domain xml...
	I0927 01:40:32.510838   69234 main.go:141] libmachine: (embed-certs-245911) Creating domain...
	I0927 01:40:33.718381   69234 main.go:141] libmachine: (embed-certs-245911) Waiting to get IP...
	I0927 01:40:33.719223   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:33.719554   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:33.719611   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:33.719550   70125 retry.go:31] will retry after 265.21442ms: waiting for machine to come up
	I0927 01:40:33.986199   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:33.986700   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:33.986728   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:33.986658   70125 retry.go:31] will retry after 308.926274ms: waiting for machine to come up
	I0927 01:40:34.297317   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:34.297734   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:34.297755   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:34.297697   70125 retry.go:31] will retry after 466.52815ms: waiting for machine to come up
	I0927 01:40:34.765171   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:34.765616   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:34.765643   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:34.765570   70125 retry.go:31] will retry after 510.417499ms: waiting for machine to come up
	I0927 01:40:35.277175   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:35.277547   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:35.277576   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:35.277488   70125 retry.go:31] will retry after 522.865286ms: waiting for machine to come up
	I0927 01:40:37.485696   68676 start.go:360] acquireMachinesLock for no-preload-521072: {Name:mkef866a3f2eb57063758ef06140cca3a2e40b18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 01:40:35.802177   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:35.802620   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:35.802646   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:35.802584   70125 retry.go:31] will retry after 611.490499ms: waiting for machine to come up
	I0927 01:40:36.415249   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:36.415733   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:36.415793   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:36.415709   70125 retry.go:31] will retry after 744.420766ms: waiting for machine to come up
	I0927 01:40:37.161647   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:37.162076   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:37.162112   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:37.162022   70125 retry.go:31] will retry after 1.464523837s: waiting for machine to come up
	I0927 01:40:38.627935   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:38.628275   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:38.628302   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:38.628237   70125 retry.go:31] will retry after 1.840524237s: waiting for machine to come up
	I0927 01:40:40.471433   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:40.471823   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:40.471851   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:40.471781   70125 retry.go:31] will retry after 1.9424331s: waiting for machine to come up
	I0927 01:40:42.416527   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:42.416978   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:42.417007   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:42.416935   70125 retry.go:31] will retry after 2.553410529s: waiting for machine to come up
	I0927 01:40:44.973083   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:44.973446   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:44.973465   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:44.973402   70125 retry.go:31] will retry after 3.286267983s: waiting for machine to come up
	I0927 01:40:48.260792   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:48.261216   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:48.261241   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:48.261179   70125 retry.go:31] will retry after 3.302667041s: waiting for machine to come up
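The retry lines above show libmachine polling the libvirt DHCP leases for the embed-certs-245911 guest with an increasing delay (roughly 0.27s growing to about 3.3s) until the VM obtains an IP. The following is an illustrative standalone loop that reproduces the same wait using virsh directly; it is not minikube's code, and the network name and MAC address are simply copied from the log lines above.

	# Illustrative sketch only: poll libvirt DHCP leases for the MAC seen in the log,
	# backing off between attempts, until the guest obtains an address.
	delay=0.3
	for attempt in $(seq 1 20); do
	  ip=$(sudo virsh net-dhcp-leases mk-embed-certs-245911 2>/dev/null \
	         | awk '/52:54:00:bd:42:a3/ {print $5}' | cut -d/ -f1)
	  if [ -n "$ip" ]; then
	    echo "machine is up at $ip"
	    break
	  fi
	  sleep "$delay"
	  delay=$(awk -v d="$delay" 'BEGIN { printf "%.2f", d * 1.5 }')
	done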
	I0927 01:40:52.800240   69333 start.go:364] duration metric: took 3m25.347970249s to acquireMachinesLock for "old-k8s-version-612261"
	I0927 01:40:52.800310   69333 start.go:96] Skipping create...Using existing machine configuration
	I0927 01:40:52.800317   69333 fix.go:54] fixHost starting: 
	I0927 01:40:52.800742   69333 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:40:52.800800   69333 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:40:52.818217   69333 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45095
	I0927 01:40:52.818644   69333 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:40:52.819065   69333 main.go:141] libmachine: Using API Version  1
	I0927 01:40:52.819086   69333 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:40:52.819408   69333 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:40:52.819544   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	I0927 01:40:52.819646   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetState
	I0927 01:40:52.820921   69333 fix.go:112] recreateIfNeeded on old-k8s-version-612261: state=Stopped err=<nil>
	I0927 01:40:52.820956   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	W0927 01:40:52.821110   69333 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 01:40:52.823209   69333 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-612261" ...
	I0927 01:40:51.567691   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.568205   69234 main.go:141] libmachine: (embed-certs-245911) Found IP for machine: 192.168.39.158
	I0927 01:40:51.568241   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has current primary IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.568250   69234 main.go:141] libmachine: (embed-certs-245911) Reserving static IP address...
	I0927 01:40:51.568731   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "embed-certs-245911", mac: "52:54:00:bd:42:a3", ip: "192.168.39.158"} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:51.568764   69234 main.go:141] libmachine: (embed-certs-245911) DBG | skip adding static IP to network mk-embed-certs-245911 - found existing host DHCP lease matching {name: "embed-certs-245911", mac: "52:54:00:bd:42:a3", ip: "192.168.39.158"}
	I0927 01:40:51.568781   69234 main.go:141] libmachine: (embed-certs-245911) Reserved static IP address: 192.168.39.158
	I0927 01:40:51.568798   69234 main.go:141] libmachine: (embed-certs-245911) Waiting for SSH to be available...
	I0927 01:40:51.568806   69234 main.go:141] libmachine: (embed-certs-245911) DBG | Getting to WaitForSSH function...
	I0927 01:40:51.570819   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.571139   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:51.571167   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.571321   69234 main.go:141] libmachine: (embed-certs-245911) DBG | Using SSH client type: external
	I0927 01:40:51.571370   69234 main.go:141] libmachine: (embed-certs-245911) DBG | Using SSH private key: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/embed-certs-245911/id_rsa (-rw-------)
	I0927 01:40:51.571401   69234 main.go:141] libmachine: (embed-certs-245911) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.158 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19711-14935/.minikube/machines/embed-certs-245911/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 01:40:51.571414   69234 main.go:141] libmachine: (embed-certs-245911) DBG | About to run SSH command:
	I0927 01:40:51.571422   69234 main.go:141] libmachine: (embed-certs-245911) DBG | exit 0
	I0927 01:40:51.691525   69234 main.go:141] libmachine: (embed-certs-245911) DBG | SSH cmd err, output: <nil>: 
	I0927 01:40:51.691953   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetConfigRaw
	I0927 01:40:51.692573   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetIP
	I0927 01:40:51.695121   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.695541   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:51.695572   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.695871   69234 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/embed-certs-245911/config.json ...
	I0927 01:40:51.696087   69234 machine.go:93] provisionDockerMachine start ...
	I0927 01:40:51.696109   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	I0927 01:40:51.696312   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:40:51.698740   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.699086   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:51.699112   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.699229   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:40:51.699415   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:51.699552   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:51.699679   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:40:51.699810   69234 main.go:141] libmachine: Using SSH client type: native
	I0927 01:40:51.699998   69234 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0927 01:40:51.700011   69234 main.go:141] libmachine: About to run SSH command:
	hostname
	I0927 01:40:51.799534   69234 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0927 01:40:51.799559   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetMachineName
	I0927 01:40:51.799764   69234 buildroot.go:166] provisioning hostname "embed-certs-245911"
	I0927 01:40:51.799792   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetMachineName
	I0927 01:40:51.799987   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:40:51.802464   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.802819   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:51.802844   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.802960   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:40:51.803131   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:51.803290   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:51.803502   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:40:51.803672   69234 main.go:141] libmachine: Using SSH client type: native
	I0927 01:40:51.803868   69234 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0927 01:40:51.803888   69234 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-245911 && echo "embed-certs-245911" | sudo tee /etc/hostname
	I0927 01:40:51.917988   69234 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-245911
	
	I0927 01:40:51.918019   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:40:51.920484   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.920800   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:51.920831   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.921041   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:40:51.921224   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:51.921383   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:51.921511   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:40:51.921693   69234 main.go:141] libmachine: Using SSH client type: native
	I0927 01:40:51.921883   69234 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0927 01:40:51.921901   69234 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-245911' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-245911/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-245911' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 01:40:52.028582   69234 main.go:141] libmachine: SSH cmd err, output: <nil>: 
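The hostname provisioning above is driven by two SSH commands that appear verbatim in the log. Pulled out of the log prefixes for readability (the profile name embed-certs-245911 is taken from the log; this is a sketch of the commands shown, not minikube's source):

	# Set the transient and persistent hostname on the guest.
	sudo hostname embed-certs-245911 && echo "embed-certs-245911" | sudo tee /etc/hostname

	# Idempotent /etc/hosts update: rewrite or append the 127.0.1.1 entry
	# only if the hostname is not already present.
	if ! grep -xq '.*\sembed-certs-245911' /etc/hosts; then
	  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-245911/g' /etc/hosts
	  else
	    echo '127.0.1.1 embed-certs-245911' | sudo tee -a /etc/hosts
	  fi
	fi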
	I0927 01:40:52.028609   69234 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19711-14935/.minikube CaCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19711-14935/.minikube}
	I0927 01:40:52.028672   69234 buildroot.go:174] setting up certificates
	I0927 01:40:52.028686   69234 provision.go:84] configureAuth start
	I0927 01:40:52.028704   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetMachineName
	I0927 01:40:52.029001   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetIP
	I0927 01:40:52.031742   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.032088   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:52.032117   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.032273   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:40:52.034392   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.034733   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:52.034754   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.034905   69234 provision.go:143] copyHostCerts
	I0927 01:40:52.034956   69234 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem, removing ...
	I0927 01:40:52.034969   69234 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem
	I0927 01:40:52.035042   69234 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem (1078 bytes)
	I0927 01:40:52.035172   69234 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem, removing ...
	I0927 01:40:52.035185   69234 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem
	I0927 01:40:52.035224   69234 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem (1123 bytes)
	I0927 01:40:52.035319   69234 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem, removing ...
	I0927 01:40:52.035329   69234 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem
	I0927 01:40:52.035363   69234 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem (1675 bytes)
	I0927 01:40:52.035433   69234 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem org=jenkins.embed-certs-245911 san=[127.0.0.1 192.168.39.158 embed-certs-245911 localhost minikube]
	I0927 01:40:52.206591   69234 provision.go:177] copyRemoteCerts
	I0927 01:40:52.206657   69234 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 01:40:52.206724   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:40:52.209445   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.209770   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:52.209792   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.209995   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:40:52.210234   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:52.210416   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:40:52.210578   69234 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/embed-certs-245911/id_rsa Username:docker}
	I0927 01:40:52.290176   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0927 01:40:52.313645   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0927 01:40:52.336446   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0927 01:40:52.359182   69234 provision.go:87] duration metric: took 330.481958ms to configureAuth
	I0927 01:40:52.359214   69234 buildroot.go:189] setting minikube options for container-runtime
	I0927 01:40:52.359464   69234 config.go:182] Loaded profile config "embed-certs-245911": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 01:40:52.359551   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:40:52.362163   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.362488   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:52.362513   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.362670   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:40:52.362826   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:52.362976   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:52.363133   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:40:52.363334   69234 main.go:141] libmachine: Using SSH client type: native
	I0927 01:40:52.363532   69234 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0927 01:40:52.363553   69234 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 01:40:52.574326   69234 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 01:40:52.574354   69234 machine.go:96] duration metric: took 878.253718ms to provisionDockerMachine
	I0927 01:40:52.574368   69234 start.go:293] postStartSetup for "embed-certs-245911" (driver="kvm2")
	I0927 01:40:52.574381   69234 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 01:40:52.574398   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	I0927 01:40:52.574688   69234 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 01:40:52.574714   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:40:52.577727   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.578035   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:52.578060   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.578227   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:40:52.578411   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:52.578555   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:40:52.578735   69234 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/embed-certs-245911/id_rsa Username:docker}
	I0927 01:40:52.658636   69234 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 01:40:52.663048   69234 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 01:40:52.663077   69234 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/addons for local assets ...
	I0927 01:40:52.663147   69234 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/files for local assets ...
	I0927 01:40:52.663223   69234 filesync.go:149] local asset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> 221382.pem in /etc/ssl/certs
	I0927 01:40:52.663322   69234 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 01:40:52.673347   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /etc/ssl/certs/221382.pem (1708 bytes)
	I0927 01:40:52.697092   69234 start.go:296] duration metric: took 122.71069ms for postStartSetup
	I0927 01:40:52.697126   69234 fix.go:56] duration metric: took 20.212915975s for fixHost
	I0927 01:40:52.697145   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:40:52.699817   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.700173   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:52.700202   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.700364   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:40:52.700558   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:52.700735   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:52.700921   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:40:52.701097   69234 main.go:141] libmachine: Using SSH client type: native
	I0927 01:40:52.701269   69234 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0927 01:40:52.701285   69234 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 01:40:52.800080   69234 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727401252.775762391
	
	I0927 01:40:52.800102   69234 fix.go:216] guest clock: 1727401252.775762391
	I0927 01:40:52.800111   69234 fix.go:229] Guest: 2024-09-27 01:40:52.775762391 +0000 UTC Remote: 2024-09-27 01:40:52.697129165 +0000 UTC m=+207.179045808 (delta=78.633226ms)
	I0927 01:40:52.800145   69234 fix.go:200] guest clock delta is within tolerance: 78.633226ms
	I0927 01:40:52.800152   69234 start.go:83] releasing machines lock for "embed-certs-245911", held for 20.315972034s
	I0927 01:40:52.800183   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	I0927 01:40:52.800495   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetIP
	I0927 01:40:52.803196   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.803657   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:52.803700   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.803874   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	I0927 01:40:52.804419   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	I0927 01:40:52.804610   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	I0927 01:40:52.804731   69234 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 01:40:52.804771   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:40:52.804813   69234 ssh_runner.go:195] Run: cat /version.json
	I0927 01:40:52.804837   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:40:52.807320   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.807346   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.807680   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:52.807731   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:52.807759   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.807807   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.807916   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:40:52.808070   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:40:52.808150   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:52.808262   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:52.808331   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:40:52.808384   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:40:52.808468   69234 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/embed-certs-245911/id_rsa Username:docker}
	I0927 01:40:52.808522   69234 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/embed-certs-245911/id_rsa Username:docker}
	I0927 01:40:52.908963   69234 ssh_runner.go:195] Run: systemctl --version
	I0927 01:40:52.915158   69234 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 01:40:53.067605   69234 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 01:40:53.074171   69234 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 01:40:53.074241   69234 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 01:40:53.091718   69234 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0927 01:40:53.091742   69234 start.go:495] detecting cgroup driver to use...
	I0927 01:40:53.091813   69234 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 01:40:53.108730   69234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 01:40:53.122920   69234 docker.go:217] disabling cri-docker service (if available) ...
	I0927 01:40:53.122984   69234 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 01:40:53.137487   69234 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 01:40:53.152420   69234 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 01:40:53.269491   69234 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 01:40:53.417893   69234 docker.go:233] disabling docker service ...
	I0927 01:40:53.417951   69234 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 01:40:53.442201   69234 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 01:40:53.459920   69234 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 01:40:53.589768   69234 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 01:40:53.719203   69234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 01:40:53.733145   69234 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 01:40:53.751853   69234 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0927 01:40:53.751919   69234 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:40:53.763230   69234 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 01:40:53.763294   69234 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:40:53.774864   69234 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:40:53.786149   69234 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:40:53.797167   69234 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 01:40:53.808495   69234 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:40:53.819285   69234 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:40:53.838497   69234 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:40:53.850490   69234 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 01:40:53.860309   69234 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0927 01:40:53.860377   69234 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0927 01:40:53.875533   69234 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 01:40:53.885752   69234 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:40:54.014352   69234 ssh_runner.go:195] Run: sudo systemctl restart crio
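The CRI-O setup between 01:40:53.73 and 01:40:54.10 is a series of sed edits against /etc/crio/crio.conf.d/02-crio.conf followed by a daemon reload and restart. Collected from the log into one place as a readability sketch of the main steps (not minikube's code; the crictl endpoint line paraphrases the printf/tee command shown above):

	# Point crictl at the CRI-O socket.
	printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml

	# Pause image, cgroup driver, and conmon cgroup, as in the sed commands above.
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf

	# Allow unprivileged low ports, load br_netfilter, enable forwarding, restart CRI-O.
	sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf
	sudo modprobe br_netfilter
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo systemctl daemon-reload && sudo systemctl restart crio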
	I0927 01:40:54.107866   69234 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 01:40:54.107926   69234 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 01:40:54.113206   69234 start.go:563] Will wait 60s for crictl version
	I0927 01:40:54.113256   69234 ssh_runner.go:195] Run: which crictl
	I0927 01:40:54.117229   69234 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 01:40:54.156365   69234 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0927 01:40:54.156459   69234 ssh_runner.go:195] Run: crio --version
	I0927 01:40:54.183974   69234 ssh_runner.go:195] Run: crio --version
	I0927 01:40:54.214440   69234 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0927 01:40:54.215714   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetIP
	I0927 01:40:54.218624   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:54.218975   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:54.219013   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:54.219180   69234 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0927 01:40:54.223450   69234 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
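The bash one-liner above rewrites /etc/hosts so that host.minikube.internal maps to the gateway IP exactly once (drop any old line, append a fresh one). A minimal Go sketch of the same idea, with the helper name assumed for illustration:

```go
package main

import (
	"fmt"
	"strings"
)

// ensureHostsEntry drops any existing line ending in "\t<name>" and appends
// a fresh "IP\tname" entry, mirroring the grep -v / echo pipeline above.
func ensureHostsEntry(hosts, ip, name string) string {
	var out []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // remove the stale entry
		}
		out = append(out, line)
	}
	out = append(out, ip+"\t"+name)
	return strings.Join(out, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.39.1\thost.minikube.internal\n"
	fmt.Print(ensureHostsEntry(hosts, "192.168.39.1", "host.minikube.internal"))
}
```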
	I0927 01:40:54.236761   69234 kubeadm.go:883] updating cluster {Name:embed-certs-245911 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-245911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.158 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 01:40:54.236923   69234 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 01:40:54.236989   69234 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 01:40:54.276635   69234 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0927 01:40:54.276708   69234 ssh_runner.go:195] Run: which lz4
	I0927 01:40:54.281055   69234 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0927 01:40:54.285439   69234 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0927 01:40:54.285472   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0927 01:40:52.824650   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .Start
	I0927 01:40:52.824802   69333 main.go:141] libmachine: (old-k8s-version-612261) Ensuring networks are active...
	I0927 01:40:52.825590   69333 main.go:141] libmachine: (old-k8s-version-612261) Ensuring network default is active
	I0927 01:40:52.825908   69333 main.go:141] libmachine: (old-k8s-version-612261) Ensuring network mk-old-k8s-version-612261 is active
	I0927 01:40:52.826326   69333 main.go:141] libmachine: (old-k8s-version-612261) Getting domain xml...
	I0927 01:40:52.827108   69333 main.go:141] libmachine: (old-k8s-version-612261) Creating domain...
	I0927 01:40:54.071322   69333 main.go:141] libmachine: (old-k8s-version-612261) Waiting to get IP...
	I0927 01:40:54.072357   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:40:54.072756   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:40:54.072821   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:40:54.072738   70279 retry.go:31] will retry after 264.648837ms: waiting for machine to come up
	I0927 01:40:54.339366   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:40:54.339799   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:40:54.339827   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:40:54.339731   70279 retry.go:31] will retry after 343.432635ms: waiting for machine to come up
	I0927 01:40:54.685260   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:40:54.685746   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:40:54.685780   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:40:54.685714   70279 retry.go:31] will retry after 455.276623ms: waiting for machine to come up
	I0927 01:40:55.142206   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:40:55.142679   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:40:55.142708   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:40:55.142637   70279 retry.go:31] will retry after 419.074502ms: waiting for machine to come up
	I0927 01:40:55.563324   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:40:55.565342   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:40:55.565368   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:40:55.565287   70279 retry.go:31] will retry after 587.161471ms: waiting for machine to come up
	I0927 01:40:56.154584   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:40:56.155182   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:40:56.155220   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:40:56.155109   70279 retry.go:31] will retry after 782.426926ms: waiting for machine to come up
	I0927 01:40:56.938784   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:40:56.939201   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:40:56.939228   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:40:56.939132   70279 retry.go:31] will retry after 781.231902ms: waiting for machine to come up
	I0927 01:40:55.723619   69234 crio.go:462] duration metric: took 1.442589436s to copy over tarball
	I0927 01:40:55.723705   69234 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0927 01:40:57.775673   69234 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.051936146s)
	I0927 01:40:57.775697   69234 crio.go:469] duration metric: took 2.052045538s to extract the tarball
	I0927 01:40:57.775704   69234 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0927 01:40:57.812769   69234 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 01:40:57.853219   69234 crio.go:514] all images are preloaded for cri-o runtime.
	I0927 01:40:57.853240   69234 cache_images.go:84] Images are preloaded, skipping loading
	I0927 01:40:57.853248   69234 kubeadm.go:934] updating node { 192.168.39.158 8443 v1.31.1 crio true true} ...
	I0927 01:40:57.853354   69234 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-245911 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.158
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-245911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 01:40:57.853495   69234 ssh_runner.go:195] Run: crio config
	I0927 01:40:57.908273   69234 cni.go:84] Creating CNI manager for ""
	I0927 01:40:57.908301   69234 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:40:57.908322   69234 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 01:40:57.908356   69234 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.158 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-245911 NodeName:embed-certs-245911 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.158"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.158 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0927 01:40:57.908542   69234 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.158
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-245911"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.158
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.158"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0927 01:40:57.908613   69234 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 01:40:57.918923   69234 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 01:40:57.919021   69234 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0927 01:40:57.928576   69234 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0927 01:40:57.945515   69234 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 01:40:57.962239   69234 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0927 01:40:57.979722   69234 ssh_runner.go:195] Run: grep 192.168.39.158	control-plane.minikube.internal$ /etc/hosts
	I0927 01:40:57.983709   69234 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.158	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 01:40:57.996181   69234 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:40:58.119502   69234 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 01:40:58.137022   69234 certs.go:68] Setting up /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/embed-certs-245911 for IP: 192.168.39.158
	I0927 01:40:58.137048   69234 certs.go:194] generating shared ca certs ...
	I0927 01:40:58.137068   69234 certs.go:226] acquiring lock for ca certs: {Name:mkdfc5b7e93f77f5ae72cc653545624244421aa5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:40:58.137250   69234 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key
	I0927 01:40:58.137312   69234 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key
	I0927 01:40:58.137324   69234 certs.go:256] generating profile certs ...
	I0927 01:40:58.137444   69234 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/embed-certs-245911/client.key
	I0927 01:40:58.137522   69234 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/embed-certs-245911/apiserver.key.e289c840
	I0927 01:40:58.137574   69234 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/embed-certs-245911/proxy-client.key
	I0927 01:40:58.137731   69234 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem (1338 bytes)
	W0927 01:40:58.137774   69234 certs.go:480] ignoring /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138_empty.pem, impossibly tiny 0 bytes
	I0927 01:40:58.137787   69234 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem (1671 bytes)
	I0927 01:40:58.137819   69234 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem (1078 bytes)
	I0927 01:40:58.137850   69234 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem (1123 bytes)
	I0927 01:40:58.137883   69234 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem (1675 bytes)
	I0927 01:40:58.137928   69234 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem (1708 bytes)
	I0927 01:40:58.138551   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 01:40:58.179399   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0927 01:40:58.211297   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 01:40:58.245549   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0927 01:40:58.276837   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/embed-certs-245911/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0927 01:40:58.313750   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/embed-certs-245911/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0927 01:40:58.338145   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/embed-certs-245911/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 01:40:58.361373   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/embed-certs-245911/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0927 01:40:58.384790   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 01:40:58.407617   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem --> /usr/share/ca-certificates/22138.pem (1338 bytes)
	I0927 01:40:58.430621   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /usr/share/ca-certificates/221382.pem (1708 bytes)
	I0927 01:40:58.453382   69234 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 01:40:58.470177   69234 ssh_runner.go:195] Run: openssl version
	I0927 01:40:58.476280   69234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221382.pem && ln -fs /usr/share/ca-certificates/221382.pem /etc/ssl/certs/221382.pem"
	I0927 01:40:58.489039   69234 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221382.pem
	I0927 01:40:58.493726   69234 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 00:32 /usr/share/ca-certificates/221382.pem
	I0927 01:40:58.493780   69234 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221382.pem
	I0927 01:40:58.499856   69234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221382.pem /etc/ssl/certs/3ec20f2e.0"
	I0927 01:40:58.511032   69234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 01:40:58.521694   69234 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:40:58.525991   69234 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:16 /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:40:58.526031   69234 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:40:58.531619   69234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 01:40:58.542017   69234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22138.pem && ln -fs /usr/share/ca-certificates/22138.pem /etc/ssl/certs/22138.pem"
	I0927 01:40:58.552591   69234 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22138.pem
	I0927 01:40:58.557047   69234 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 00:32 /usr/share/ca-certificates/22138.pem
	I0927 01:40:58.557086   69234 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22138.pem
	I0927 01:40:58.562874   69234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22138.pem /etc/ssl/certs/51391683.0"
	I0927 01:40:58.574052   69234 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 01:40:58.578537   69234 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0927 01:40:58.584323   69234 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0927 01:40:58.590033   69234 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0927 01:40:58.596013   69234 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0927 01:40:58.601572   69234 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0927 01:40:58.606980   69234 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
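Each `openssl x509 -checkend 86400` call above verifies that the given certificate remains valid for at least the next 24 hours. A hedged Go sketch of the equivalent check with crypto/x509 (the path below is taken from the log; the helper itself is an illustration):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the same question "openssl x509 -checkend" answers in the log above.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
```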
	I0927 01:40:58.612554   69234 kubeadm.go:392] StartCluster: {Name:embed-certs-245911 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-245911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.158 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 01:40:58.612648   69234 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0927 01:40:58.612704   69234 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 01:40:58.649228   69234 cri.go:89] found id: ""
	I0927 01:40:58.649306   69234 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0927 01:40:58.661599   69234 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0927 01:40:58.661628   69234 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0927 01:40:58.661688   69234 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0927 01:40:58.671907   69234 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0927 01:40:58.672851   69234 kubeconfig.go:125] found "embed-certs-245911" server: "https://192.168.39.158:8443"
	I0927 01:40:58.674753   69234 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0927 01:40:58.684614   69234 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.158
	I0927 01:40:58.684643   69234 kubeadm.go:1160] stopping kube-system containers ...
	I0927 01:40:58.684652   69234 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0927 01:40:58.684715   69234 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 01:40:58.726714   69234 cri.go:89] found id: ""
	I0927 01:40:58.726816   69234 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0927 01:40:58.743675   69234 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 01:40:58.753456   69234 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 01:40:58.753485   69234 kubeadm.go:157] found existing configuration files:
	
	I0927 01:40:58.753535   69234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 01:40:58.762724   69234 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 01:40:58.762821   69234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 01:40:58.772558   69234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 01:40:58.781732   69234 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 01:40:58.781790   69234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 01:40:58.791109   69234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 01:40:58.800066   69234 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 01:40:58.800127   69234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 01:40:58.809338   69234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 01:40:58.818214   69234 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 01:40:58.818260   69234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 01:40:58.828049   69234 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 01:40:58.837606   69234 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:40:58.942395   69234 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:40:59.758951   69234 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:40:59.966377   69234 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:00.036702   69234 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:00.126663   69234 api_server.go:52] waiting for apiserver process to appear ...
	I0927 01:41:00.126743   69234 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:40:57.722147   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:40:57.722637   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:40:57.722657   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:40:57.722593   70279 retry.go:31] will retry after 1.223133601s: waiting for machine to come up
	I0927 01:40:58.947836   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:40:58.948362   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:40:58.948388   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:40:58.948326   70279 retry.go:31] will retry after 1.155368003s: waiting for machine to come up
	I0927 01:41:00.105812   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:00.106288   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:41:00.106356   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:41:00.106280   70279 retry.go:31] will retry after 2.324904017s: waiting for machine to come up
	I0927 01:41:00.627542   69234 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:01.126971   69234 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:01.626940   69234 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:02.127478   69234 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:02.176746   69234 api_server.go:72] duration metric: took 2.050081672s to wait for apiserver process to appear ...
	I0927 01:41:02.176775   69234 api_server.go:88] waiting for apiserver healthz status ...
	I0927 01:41:02.176798   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:41:02.177442   69234 api_server.go:269] stopped: https://192.168.39.158:8443/healthz: Get "https://192.168.39.158:8443/healthz": dial tcp 192.168.39.158:8443: connect: connection refused
	I0927 01:41:02.677488   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:41:04.824718   69234 api_server.go:279] https://192.168.39.158:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0927 01:41:04.824748   69234 api_server.go:103] status: https://192.168.39.158:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0927 01:41:04.824763   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:41:04.850790   69234 api_server.go:279] https://192.168.39.158:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0927 01:41:04.850820   69234 api_server.go:103] status: https://192.168.39.158:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0927 01:41:05.177167   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:41:05.201660   69234 api_server.go:279] https://192.168.39.158:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:41:05.201696   69234 api_server.go:103] status: https://192.168.39.158:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:41:02.432597   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:02.433066   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:41:02.433096   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:41:02.433026   70279 retry.go:31] will retry after 2.598889471s: waiting for machine to come up
	I0927 01:41:05.034614   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:05.035001   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:41:05.035023   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:41:05.034973   70279 retry.go:31] will retry after 3.064943329s: waiting for machine to come up
	I0927 01:41:05.677514   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:41:05.683506   69234 api_server.go:279] https://192.168.39.158:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:41:05.683543   69234 api_server.go:103] status: https://192.168.39.158:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:41:06.177064   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:41:06.181304   69234 api_server.go:279] https://192.168.39.158:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:41:06.181339   69234 api_server.go:103] status: https://192.168.39.158:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:41:06.676872   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:41:06.681269   69234 api_server.go:279] https://192.168.39.158:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:41:06.681297   69234 api_server.go:103] status: https://192.168.39.158:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:41:07.176902   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:41:07.181397   69234 api_server.go:279] https://192.168.39.158:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:41:07.181425   69234 api_server.go:103] status: https://192.168.39.158:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:41:07.677457   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:41:07.682057   69234 api_server.go:279] https://192.168.39.158:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:41:07.682087   69234 api_server.go:103] status: https://192.168.39.158:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:41:08.177696   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:41:08.181752   69234 api_server.go:279] https://192.168.39.158:8443/healthz returned 200:
	ok
	I0927 01:41:08.188257   69234 api_server.go:141] control plane version: v1.31.1
	I0927 01:41:08.188278   69234 api_server.go:131] duration metric: took 6.011495616s to wait for apiserver health ...
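	The [+]/[-] listing repeated above is the kube-apiserver's own /healthz detail: each 500 names the post-start hook that has not yet finished (here poststarthook/apiservice-discovery-controller) until the final probe returns 200. Assuming the default RBAC rules that leave /healthz readable by unauthenticated clients, the same report can be fetched by hand; the IP and port below are taken from the log, and -k merely skips TLS verification for a throwaway check against the test VM.

	# Fetch the verbose health report that minikube is polling above.
	curl -sk "https://192.168.39.158:8443/healthz?verbose"
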
	I0927 01:41:08.188285   69234 cni.go:84] Creating CNI manager for ""
	I0927 01:41:08.188291   69234 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:41:08.190206   69234 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0927 01:41:08.191584   69234 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0927 01:41:08.202370   69234 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
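	The 496-byte 1-k8s.conflist copied above is generated in memory and not reproduced in the log. As a hedged sketch only, a bridge CNI conflist of the kind selected here ("kvm2" driver + "crio" runtime) typically pairs the standard bridge plugin with host-local IPAM and a portmap plugin; every value below is illustrative, not the exact file minikube writes.

	# Illustrative bridge CNI configuration; not the literal file from the log.
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<-'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF
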
	I0927 01:41:08.224843   69234 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 01:41:08.234247   69234 system_pods.go:59] 8 kube-system pods found
	I0927 01:41:08.234275   69234 system_pods.go:61] "coredns-7c65d6cfc9-f2vxv" [3eed941e-e943-490b-a0a8-d543cec18a89] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0927 01:41:08.234284   69234 system_pods.go:61] "etcd-embed-certs-245911" [f88581ff-3747-4fe5-a4a2-6259c3b4554e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0927 01:41:08.234291   69234 system_pods.go:61] "kube-apiserver-embed-certs-245911" [3f1efb25-6e30-4d5f-baba-3e98b6fe531e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0927 01:41:08.234298   69234 system_pods.go:61] "kube-controller-manager-embed-certs-245911" [a624fc8d-fbe3-4b63-8a88-5f8069b21095] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0927 01:41:08.234302   69234 system_pods.go:61] "kube-proxy-pjf8v" [a1b76e67-803a-43fe-bff6-a4b0ddc246a1] Running
	I0927 01:41:08.234309   69234 system_pods.go:61] "kube-scheduler-embed-certs-245911" [0f7c146b-e2b7-4110-b010-f4599d0da410] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0927 01:41:08.234313   69234 system_pods.go:61] "metrics-server-6867b74b74-k8mdf" [6d1e68fb-5187-4bc6-abdb-44f598e351c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 01:41:08.234317   69234 system_pods.go:61] "storage-provisioner" [dc0a7806-bee8-4127-8218-b2e48fa8500b] Running
	I0927 01:41:08.234323   69234 system_pods.go:74] duration metric: took 9.462578ms to wait for pod list to return data ...
	I0927 01:41:08.234333   69234 node_conditions.go:102] verifying NodePressure condition ...
	I0927 01:41:08.238433   69234 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 01:41:08.238455   69234 node_conditions.go:123] node cpu capacity is 2
	I0927 01:41:08.238468   69234 node_conditions.go:105] duration metric: took 4.128775ms to run NodePressure ...
	I0927 01:41:08.238483   69234 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:08.502161   69234 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0927 01:41:08.506267   69234 kubeadm.go:739] kubelet initialised
	I0927 01:41:08.506290   69234 kubeadm.go:740] duration metric: took 4.099692ms waiting for restarted kubelet to initialise ...
	I0927 01:41:08.506299   69234 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:41:08.510964   69234 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-f2vxv" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:08.515262   69234 pod_ready.go:98] node "embed-certs-245911" hosting pod "coredns-7c65d6cfc9-f2vxv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-245911" has status "Ready":"False"
	I0927 01:41:08.515279   69234 pod_ready.go:82] duration metric: took 4.294632ms for pod "coredns-7c65d6cfc9-f2vxv" in "kube-system" namespace to be "Ready" ...
	E0927 01:41:08.515286   69234 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-245911" hosting pod "coredns-7c65d6cfc9-f2vxv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-245911" has status "Ready":"False"
	I0927 01:41:08.515298   69234 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:08.519627   69234 pod_ready.go:98] node "embed-certs-245911" hosting pod "etcd-embed-certs-245911" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-245911" has status "Ready":"False"
	I0927 01:41:08.519641   69234 pod_ready.go:82] duration metric: took 4.313975ms for pod "etcd-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	E0927 01:41:08.519648   69234 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-245911" hosting pod "etcd-embed-certs-245911" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-245911" has status "Ready":"False"
	I0927 01:41:08.519653   69234 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:08.523152   69234 pod_ready.go:98] node "embed-certs-245911" hosting pod "kube-apiserver-embed-certs-245911" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-245911" has status "Ready":"False"
	I0927 01:41:08.523165   69234 pod_ready.go:82] duration metric: took 3.50412ms for pod "kube-apiserver-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	E0927 01:41:08.523177   69234 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-245911" hosting pod "kube-apiserver-embed-certs-245911" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-245911" has status "Ready":"False"
	I0927 01:41:08.523186   69234 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:08.628811   69234 pod_ready.go:98] node "embed-certs-245911" hosting pod "kube-controller-manager-embed-certs-245911" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-245911" has status "Ready":"False"
	I0927 01:41:08.628847   69234 pod_ready.go:82] duration metric: took 105.648464ms for pod "kube-controller-manager-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	E0927 01:41:08.628859   69234 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-245911" hosting pod "kube-controller-manager-embed-certs-245911" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-245911" has status "Ready":"False"
	I0927 01:41:08.628868   69234 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-pjf8v" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:09.027358   69234 pod_ready.go:93] pod "kube-proxy-pjf8v" in "kube-system" namespace has status "Ready":"True"
	I0927 01:41:09.027383   69234 pod_ready.go:82] duration metric: took 398.507928ms for pod "kube-proxy-pjf8v" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:09.027393   69234 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
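	The pod_ready.go polling that process 69234 performs above (and resumes further down) is, in effect, a readiness wait per control-plane component. Assuming the kubeconfig context created for this profile, a roughly equivalent manual check for just the CoreDNS piece would be:

	# Hedged, roughly equivalent manual check for the coredns wait seen above;
	# minikube loops over etcd, kube-apiserver, kube-controller-manager,
	# kube-proxy and kube-scheduler in the same way.
	kubectl --context embed-certs-245911 -n kube-system \
	  wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m0s
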
	I0927 01:41:08.101834   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:08.102324   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:41:08.102358   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:41:08.102283   70279 retry.go:31] will retry after 4.242138543s: waiting for machine to come up
	I0927 01:41:13.708458   69534 start.go:364] duration metric: took 3m25.271525685s to acquireMachinesLock for "default-k8s-diff-port-368295"
	I0927 01:41:13.708525   69534 start.go:96] Skipping create...Using existing machine configuration
	I0927 01:41:13.708533   69534 fix.go:54] fixHost starting: 
	I0927 01:41:13.708923   69534 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:41:13.708979   69534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:41:13.726306   69534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46399
	I0927 01:41:13.726732   69534 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:41:13.727228   69534 main.go:141] libmachine: Using API Version  1
	I0927 01:41:13.727252   69534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:41:13.727579   69534 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:41:13.727781   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:41:13.727975   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetState
	I0927 01:41:13.729621   69534 fix.go:112] recreateIfNeeded on default-k8s-diff-port-368295: state=Stopped err=<nil>
	I0927 01:41:13.729657   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	W0927 01:41:13.729826   69534 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 01:41:13.731730   69534 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-368295" ...
	I0927 01:41:12.347378   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.347831   69333 main.go:141] libmachine: (old-k8s-version-612261) Found IP for machine: 192.168.72.129
	I0927 01:41:12.347855   69333 main.go:141] libmachine: (old-k8s-version-612261) Reserving static IP address...
	I0927 01:41:12.347872   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has current primary IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.348468   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "old-k8s-version-612261", mac: "52:54:00:f1:a6:2e", ip: "192.168.72.129"} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:12.348494   69333 main.go:141] libmachine: (old-k8s-version-612261) Reserved static IP address: 192.168.72.129
	I0927 01:41:12.348507   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | skip adding static IP to network mk-old-k8s-version-612261 - found existing host DHCP lease matching {name: "old-k8s-version-612261", mac: "52:54:00:f1:a6:2e", ip: "192.168.72.129"}
	I0927 01:41:12.348518   69333 main.go:141] libmachine: (old-k8s-version-612261) Waiting for SSH to be available...
	I0927 01:41:12.348537   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | Getting to WaitForSSH function...
	I0927 01:41:12.350917   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.351287   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:12.351335   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.351464   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | Using SSH client type: external
	I0927 01:41:12.351485   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | Using SSH private key: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/old-k8s-version-612261/id_rsa (-rw-------)
	I0927 01:41:12.351516   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.129 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19711-14935/.minikube/machines/old-k8s-version-612261/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 01:41:12.351525   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | About to run SSH command:
	I0927 01:41:12.351533   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | exit 0
	I0927 01:41:12.471347   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | SSH cmd err, output: <nil>: 
	I0927 01:41:12.471724   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetConfigRaw
	I0927 01:41:12.472352   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetIP
	I0927 01:41:12.474886   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.475299   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:12.475340   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.475628   69333 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/config.json ...
	I0927 01:41:12.475857   69333 machine.go:93] provisionDockerMachine start ...
	I0927 01:41:12.475879   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	I0927 01:41:12.476115   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:12.478594   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.478918   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:12.478945   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.479126   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:41:12.479340   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:12.479536   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:12.479695   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:41:12.479859   69333 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:12.480093   69333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I0927 01:41:12.480116   69333 main.go:141] libmachine: About to run SSH command:
	hostname
	I0927 01:41:12.579536   69333 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0927 01:41:12.579562   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetMachineName
	I0927 01:41:12.579785   69333 buildroot.go:166] provisioning hostname "old-k8s-version-612261"
	I0927 01:41:12.579798   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetMachineName
	I0927 01:41:12.579965   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:12.582679   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.583001   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:12.583027   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.583166   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:41:12.583372   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:12.583562   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:12.583727   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:41:12.583924   69333 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:12.584169   69333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I0927 01:41:12.584187   69333 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-612261 && echo "old-k8s-version-612261" | sudo tee /etc/hostname
	I0927 01:41:12.702223   69333 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-612261
	
	I0927 01:41:12.702252   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:12.705201   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.705564   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:12.705601   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.705817   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:41:12.706012   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:12.706154   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:12.706344   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:41:12.706538   69333 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:12.706720   69333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I0927 01:41:12.706738   69333 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-612261' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-612261/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-612261' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 01:41:12.816316   69333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 01:41:12.816343   69333 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19711-14935/.minikube CaCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19711-14935/.minikube}
	I0927 01:41:12.816376   69333 buildroot.go:174] setting up certificates
	I0927 01:41:12.816386   69333 provision.go:84] configureAuth start
	I0927 01:41:12.816394   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetMachineName
	I0927 01:41:12.816678   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetIP
	I0927 01:41:12.819190   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.819487   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:12.819511   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.819696   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:12.821843   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.822166   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:12.822203   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.822382   69333 provision.go:143] copyHostCerts
	I0927 01:41:12.822453   69333 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem, removing ...
	I0927 01:41:12.822466   69333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem
	I0927 01:41:12.822533   69333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem (1675 bytes)
	I0927 01:41:12.822641   69333 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem, removing ...
	I0927 01:41:12.822650   69333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem
	I0927 01:41:12.822682   69333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem (1078 bytes)
	I0927 01:41:12.822756   69333 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem, removing ...
	I0927 01:41:12.822766   69333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem
	I0927 01:41:12.822792   69333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem (1123 bytes)
	I0927 01:41:12.822859   69333 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-612261 san=[127.0.0.1 192.168.72.129 localhost minikube old-k8s-version-612261]
	I0927 01:41:13.054632   69333 provision.go:177] copyRemoteCerts
	I0927 01:41:13.054706   69333 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 01:41:13.054740   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:13.057895   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.058296   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:13.058329   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.058478   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:41:13.058696   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:13.058907   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:41:13.059062   69333 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/old-k8s-version-612261/id_rsa Username:docker}
	I0927 01:41:13.146378   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0927 01:41:13.176435   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0927 01:41:13.208974   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0927 01:41:13.240179   69333 provision.go:87] duration metric: took 423.77487ms to configureAuth
	I0927 01:41:13.240211   69333 buildroot.go:189] setting minikube options for container-runtime
	I0927 01:41:13.240412   69333 config.go:182] Loaded profile config "old-k8s-version-612261": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0927 01:41:13.240498   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:13.243514   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.243963   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:13.243991   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.244174   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:41:13.244419   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:13.244641   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:13.244838   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:41:13.245039   69333 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:13.245263   69333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I0927 01:41:13.245284   69333 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 01:41:13.476519   69333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 01:41:13.476545   69333 machine.go:96] duration metric: took 1.000674334s to provisionDockerMachine
	I0927 01:41:13.476558   69333 start.go:293] postStartSetup for "old-k8s-version-612261" (driver="kvm2")
	I0927 01:41:13.476574   69333 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 01:41:13.476593   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	I0927 01:41:13.476914   69333 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 01:41:13.476942   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:13.479326   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.479662   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:13.479686   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.479835   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:41:13.480027   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:13.480182   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:41:13.480337   69333 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/old-k8s-version-612261/id_rsa Username:docker}
	I0927 01:41:13.563321   69333 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 01:41:13.567844   69333 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 01:41:13.567867   69333 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/addons for local assets ...
	I0927 01:41:13.567929   69333 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/files for local assets ...
	I0927 01:41:13.568012   69333 filesync.go:149] local asset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> 221382.pem in /etc/ssl/certs
	I0927 01:41:13.568109   69333 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 01:41:13.578453   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /etc/ssl/certs/221382.pem (1708 bytes)
	I0927 01:41:13.603888   69333 start.go:296] duration metric: took 127.316429ms for postStartSetup
	I0927 01:41:13.603924   69333 fix.go:56] duration metric: took 20.803606957s for fixHost
	I0927 01:41:13.603948   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:13.606500   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.606921   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:13.606949   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.607189   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:41:13.607419   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:13.607600   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:13.607726   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:41:13.608048   69333 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:13.608234   69333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I0927 01:41:13.608245   69333 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 01:41:13.708261   69333 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727401273.683707076
	
	I0927 01:41:13.708284   69333 fix.go:216] guest clock: 1727401273.683707076
	I0927 01:41:13.708293   69333 fix.go:229] Guest: 2024-09-27 01:41:13.683707076 +0000 UTC Remote: 2024-09-27 01:41:13.603929237 +0000 UTC m=+226.291347697 (delta=79.777839ms)
	I0927 01:41:13.708348   69333 fix.go:200] guest clock delta is within tolerance: 79.777839ms
	I0927 01:41:13.708357   69333 start.go:83] releasing machines lock for "old-k8s-version-612261", held for 20.90807118s
	I0927 01:41:13.708392   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	I0927 01:41:13.708665   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetIP
	I0927 01:41:13.711474   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.711873   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:13.711905   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.712035   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	I0927 01:41:13.712569   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	I0927 01:41:13.712748   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	I0927 01:41:13.712832   69333 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 01:41:13.712878   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:13.712949   69333 ssh_runner.go:195] Run: cat /version.json
	I0927 01:41:13.712971   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:13.715681   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.715820   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.716024   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:13.716043   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.716200   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:13.716225   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.716235   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:41:13.716370   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:41:13.716487   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:13.716548   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:13.716622   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:41:13.716728   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:41:13.716779   69333 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/old-k8s-version-612261/id_rsa Username:docker}
	I0927 01:41:13.716859   69333 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/old-k8s-version-612261/id_rsa Username:docker}
	I0927 01:41:13.826638   69333 ssh_runner.go:195] Run: systemctl --version
	I0927 01:41:13.832901   69333 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 01:41:13.986132   69333 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 01:41:13.992644   69333 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 01:41:13.992728   69333 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 01:41:14.008962   69333 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0927 01:41:14.008991   69333 start.go:495] detecting cgroup driver to use...
	I0927 01:41:14.009051   69333 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 01:41:14.025047   69333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 01:41:14.040807   69333 docker.go:217] disabling cri-docker service (if available) ...
	I0927 01:41:14.040857   69333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 01:41:14.055972   69333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 01:41:14.072654   69333 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 01:41:14.210869   69333 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 01:41:14.403536   69333 docker.go:233] disabling docker service ...
	I0927 01:41:14.403596   69333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 01:41:14.421549   69333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 01:41:14.436288   69333 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 01:41:14.569634   69333 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 01:41:14.701517   69333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 01:41:14.716794   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 01:41:14.740622   69333 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0927 01:41:14.740685   69333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:14.756563   69333 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 01:41:14.756626   69333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:14.768952   69333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:14.781314   69333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
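	After the three sed edits above, the drop-in /etc/crio/crio.conf.d/02-crio.conf should carry the pause image, cgroup manager and conmon cgroup that cri-o will pick up when it is restarted below. The section headers in this sketch are assumptions (the log shows only the key rewrites), so treat it as an illustration of the expected result, not a verbatim dump.

	# Expected shape of the edited drop-in (values from the sed commands above):
	cat /etc/crio/crio.conf.d/02-crio.conf
	# [crio.image]
	# pause_image = "registry.k8s.io/pause:3.2"
	#
	# [crio.runtime]
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"
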
	I0927 01:41:14.793578   69333 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 01:41:14.806302   69333 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 01:41:14.822967   69333 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0927 01:41:14.823036   69333 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0927 01:41:14.837673   69333 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 01:41:14.848486   69333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:41:14.988181   69333 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0927 01:41:15.100581   69333 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 01:41:15.100664   69333 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 01:41:15.105816   69333 start.go:563] Will wait 60s for crictl version
	I0927 01:41:15.105883   69333 ssh_runner.go:195] Run: which crictl
	I0927 01:41:15.110375   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 01:41:15.154944   69333 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0927 01:41:15.155039   69333 ssh_runner.go:195] Run: crio --version
	I0927 01:41:15.188172   69333 ssh_runner.go:195] Run: crio --version
	I0927 01:41:15.220410   69333 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0927 01:41:11.033747   69234 pod_ready.go:103] pod "kube-scheduler-embed-certs-245911" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:13.038930   69234 pod_ready.go:103] pod "kube-scheduler-embed-certs-245911" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:15.035610   69234 pod_ready.go:93] pod "kube-scheduler-embed-certs-245911" in "kube-system" namespace has status "Ready":"True"
	I0927 01:41:15.035636   69234 pod_ready.go:82] duration metric: took 6.008237321s for pod "kube-scheduler-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:15.035645   69234 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:15.221508   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetIP
	I0927 01:41:15.224474   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:15.224855   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:15.224884   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:15.225126   69333 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0927 01:41:15.229555   69333 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 01:41:15.244862   69333 kubeadm.go:883] updating cluster {Name:old-k8s-version-612261 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-612261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.129 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 01:41:15.245007   69333 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0927 01:41:15.245070   69333 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 01:41:15.298422   69333 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0927 01:41:15.298501   69333 ssh_runner.go:195] Run: which lz4
	I0927 01:41:15.302771   69333 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0927 01:41:15.307360   69333 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0927 01:41:15.307398   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0927 01:41:17.053272   69333 crio.go:462] duration metric: took 1.750548806s to copy over tarball
	I0927 01:41:17.053354   69333 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0927 01:41:13.732810   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .Start
	I0927 01:41:13.732979   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Ensuring networks are active...
	I0927 01:41:13.733749   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Ensuring network default is active
	I0927 01:41:13.734076   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Ensuring network mk-default-k8s-diff-port-368295 is active
	I0927 01:41:13.734425   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Getting domain xml...
	I0927 01:41:13.734997   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Creating domain...
	I0927 01:41:15.073415   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting to get IP...
	I0927 01:41:15.074278   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:15.074774   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:15.074850   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:15.074757   70444 retry.go:31] will retry after 231.356774ms: waiting for machine to come up
	I0927 01:41:15.308474   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:15.309030   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:15.309058   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:15.308989   70444 retry.go:31] will retry after 252.762152ms: waiting for machine to come up
	I0927 01:41:15.563638   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:15.564173   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:15.564212   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:15.564130   70444 retry.go:31] will retry after 341.067908ms: waiting for machine to come up
	I0927 01:41:15.906735   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:15.907138   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:15.907168   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:15.907091   70444 retry.go:31] will retry after 385.816363ms: waiting for machine to come up
	I0927 01:41:16.294523   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:16.295246   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:16.295268   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:16.295192   70444 retry.go:31] will retry after 575.812339ms: waiting for machine to come up
	I0927 01:41:16.873050   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:16.873574   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:16.873601   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:16.873520   70444 retry.go:31] will retry after 661.914855ms: waiting for machine to come up
	I0927 01:41:17.537039   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:17.537516   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:17.537544   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:17.537467   70444 retry.go:31] will retry after 959.195147ms: waiting for machine to come up
	I0927 01:41:17.043983   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:19.543159   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:20.066231   69333 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.012846531s)
	I0927 01:41:20.066257   69333 crio.go:469] duration metric: took 3.012954388s to extract the tarball
	I0927 01:41:20.066265   69333 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0927 01:41:20.112486   69333 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 01:41:20.152620   69333 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0927 01:41:20.152647   69333 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0927 01:41:20.152723   69333 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:41:20.152754   69333 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0927 01:41:20.152789   69333 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 01:41:20.152813   69333 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0927 01:41:20.152816   69333 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0927 01:41:20.152763   69333 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0927 01:41:20.152938   69333 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0927 01:41:20.152940   69333 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0927 01:41:20.154747   69333 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0927 01:41:20.154752   69333 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 01:41:20.154886   69333 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:41:20.154914   69333 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0927 01:41:20.154914   69333 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0927 01:41:20.154925   69333 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0927 01:41:20.154930   69333 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0927 01:41:20.154934   69333 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0927 01:41:20.316172   69333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0927 01:41:20.316352   69333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0927 01:41:20.319986   69333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0927 01:41:20.331224   69333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 01:41:20.342010   69333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0927 01:41:20.355732   69333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0927 01:41:20.355739   69333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0927 01:41:20.446420   69333 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0927 01:41:20.446477   69333 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0927 01:41:20.446529   69333 ssh_runner.go:195] Run: which crictl
	I0927 01:41:20.469134   69333 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0927 01:41:20.469183   69333 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0927 01:41:20.469231   69333 ssh_runner.go:195] Run: which crictl
	I0927 01:41:20.470229   69333 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0927 01:41:20.470264   69333 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0927 01:41:20.470310   69333 ssh_runner.go:195] Run: which crictl
	I0927 01:41:20.477952   69333 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0927 01:41:20.477991   69333 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 01:41:20.478034   69333 ssh_runner.go:195] Run: which crictl
	I0927 01:41:20.519340   69333 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0927 01:41:20.519391   69333 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0927 01:41:20.519454   69333 ssh_runner.go:195] Run: which crictl
	I0927 01:41:20.538237   69333 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0927 01:41:20.538256   69333 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0927 01:41:20.538293   69333 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0927 01:41:20.538298   69333 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0927 01:41:20.538338   69333 ssh_runner.go:195] Run: which crictl
	I0927 01:41:20.538343   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0927 01:41:20.538338   69333 ssh_runner.go:195] Run: which crictl
	I0927 01:41:20.538343   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0927 01:41:20.538389   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0927 01:41:20.538438   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 01:41:20.538489   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0927 01:41:20.656448   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0927 01:41:20.656508   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0927 01:41:20.656542   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0927 01:41:20.656573   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0927 01:41:20.656635   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0927 01:41:20.656704   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0927 01:41:20.656740   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 01:41:20.818479   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0927 01:41:20.818494   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0927 01:41:20.818581   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 01:41:20.878325   69333 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0927 01:41:20.878480   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0927 01:41:20.878494   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0927 01:41:20.878585   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0927 01:41:20.885061   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0927 01:41:20.885168   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0927 01:41:20.898628   69333 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0927 01:41:20.994147   69333 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0927 01:41:20.994175   69333 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0927 01:41:20.994211   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0927 01:41:21.016210   69333 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0927 01:41:21.016289   69333 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0927 01:41:21.035051   69333 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0927 01:41:21.374949   69333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:41:21.520726   69333 cache_images.go:92] duration metric: took 1.368058485s to LoadCachedImages
	W0927 01:41:21.520817   69333 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0927 01:41:21.520833   69333 kubeadm.go:934] updating node { 192.168.72.129 8443 v1.20.0 crio true true} ...
	I0927 01:41:21.520951   69333 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-612261 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.129
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-612261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 01:41:21.521035   69333 ssh_runner.go:195] Run: crio config
	I0927 01:41:21.571651   69333 cni.go:84] Creating CNI manager for ""
	I0927 01:41:21.571677   69333 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:41:21.571688   69333 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 01:41:21.571712   69333 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.129 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-612261 NodeName:old-k8s-version-612261 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.129"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.129 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0927 01:41:21.571882   69333 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.129
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-612261"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.129
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.129"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0927 01:41:21.571958   69333 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0927 01:41:21.582735   69333 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 01:41:21.582802   69333 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0927 01:41:21.593329   69333 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0927 01:41:21.615040   69333 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 01:41:21.636564   69333 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0927 01:41:21.657275   69333 ssh_runner.go:195] Run: grep 192.168.72.129	control-plane.minikube.internal$ /etc/hosts
	I0927 01:41:21.661675   69333 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.129	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 01:41:21.674587   69333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:41:21.814300   69333 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 01:41:21.834133   69333 certs.go:68] Setting up /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261 for IP: 192.168.72.129
	I0927 01:41:21.834163   69333 certs.go:194] generating shared ca certs ...
	I0927 01:41:21.834182   69333 certs.go:226] acquiring lock for ca certs: {Name:mkdfc5b7e93f77f5ae72cc653545624244421aa5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:41:21.834380   69333 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key
	I0927 01:41:21.834437   69333 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key
	I0927 01:41:21.834450   69333 certs.go:256] generating profile certs ...
	I0927 01:41:21.834558   69333 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/client.key
	I0927 01:41:21.834630   69333 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/apiserver.key.a362196e
	I0927 01:41:21.834676   69333 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/proxy-client.key
	I0927 01:41:21.834819   69333 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem (1338 bytes)
	W0927 01:41:21.834859   69333 certs.go:480] ignoring /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138_empty.pem, impossibly tiny 0 bytes
	I0927 01:41:21.834873   69333 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem (1671 bytes)
	I0927 01:41:21.834904   69333 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem (1078 bytes)
	I0927 01:41:21.834937   69333 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem (1123 bytes)
	I0927 01:41:21.834973   69333 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem (1675 bytes)
	I0927 01:41:21.835023   69333 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem (1708 bytes)
	I0927 01:41:21.835864   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 01:41:21.866955   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0927 01:41:21.902991   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 01:41:21.928957   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0927 01:41:21.957505   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0927 01:41:21.984055   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0927 01:41:22.013191   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 01:41:22.041745   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0927 01:41:22.069680   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 01:41:22.104139   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem --> /usr/share/ca-certificates/22138.pem (1338 bytes)
	I0927 01:41:22.130348   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /usr/share/ca-certificates/221382.pem (1708 bytes)
	I0927 01:41:22.157976   69333 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 01:41:22.177818   69333 ssh_runner.go:195] Run: openssl version
	I0927 01:41:22.184389   69333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 01:41:22.196133   69333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:41:22.201047   69333 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:16 /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:41:22.201120   69333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:41:22.207245   69333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 01:41:22.219033   69333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22138.pem && ln -fs /usr/share/ca-certificates/22138.pem /etc/ssl/certs/22138.pem"
	I0927 01:41:22.230331   69333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22138.pem
	I0927 01:41:22.235000   69333 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 00:32 /usr/share/ca-certificates/22138.pem
	I0927 01:41:22.235054   69333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22138.pem
	I0927 01:41:22.240963   69333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22138.pem /etc/ssl/certs/51391683.0"
	I0927 01:41:22.252022   69333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221382.pem && ln -fs /usr/share/ca-certificates/221382.pem /etc/ssl/certs/221382.pem"
	I0927 01:41:22.263197   69333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221382.pem
	I0927 01:41:22.268023   69333 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 00:32 /usr/share/ca-certificates/221382.pem
	I0927 01:41:22.268100   69333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221382.pem
	I0927 01:41:22.274086   69333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221382.pem /etc/ssl/certs/3ec20f2e.0"
	I0927 01:41:22.285387   69333 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 01:41:22.290487   69333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0927 01:41:22.296953   69333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0927 01:41:22.303095   69333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0927 01:41:22.310001   69333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0927 01:41:22.316346   69333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0927 01:41:22.322559   69333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0927 01:41:22.328931   69333 kubeadm.go:392] StartCluster: {Name:old-k8s-version-612261 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-612261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.129 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 01:41:22.329015   69333 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0927 01:41:22.329081   69333 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 01:41:18.498695   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:18.499234   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:18.499261   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:18.499187   70444 retry.go:31] will retry after 932.004828ms: waiting for machine to come up
	I0927 01:41:19.432487   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:19.432885   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:19.432912   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:19.432844   70444 retry.go:31] will retry after 1.595543978s: waiting for machine to come up
	I0927 01:41:21.030048   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:21.030572   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:21.030598   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:21.030526   70444 retry.go:31] will retry after 1.93010855s: waiting for machine to come up
	I0927 01:41:22.963833   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:22.964303   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:22.964334   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:22.964254   70444 retry.go:31] will retry after 2.81720725s: waiting for machine to come up
	I0927 01:41:21.757497   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:24.043965   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:22.368989   69333 cri.go:89] found id: ""
	I0927 01:41:22.369059   69333 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0927 01:41:22.379818   69333 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0927 01:41:22.379841   69333 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0927 01:41:22.379897   69333 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0927 01:41:22.392278   69333 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0927 01:41:22.393236   69333 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-612261" does not appear in /home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 01:41:22.393856   69333 kubeconfig.go:62] /home/jenkins/minikube-integration/19711-14935/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-612261" cluster setting kubeconfig missing "old-k8s-version-612261" context setting]
	I0927 01:41:22.394733   69333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/kubeconfig: {Name:mke01ed683bdb96463571316956510763878395f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:41:22.404625   69333 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0927 01:41:22.415376   69333 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.129
	I0927 01:41:22.415414   69333 kubeadm.go:1160] stopping kube-system containers ...
	I0927 01:41:22.415427   69333 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0927 01:41:22.415487   69333 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 01:41:22.452749   69333 cri.go:89] found id: ""
	I0927 01:41:22.452829   69333 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0927 01:41:22.469164   69333 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 01:41:22.480018   69333 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 01:41:22.480038   69333 kubeadm.go:157] found existing configuration files:
	
	I0927 01:41:22.480092   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 01:41:22.490501   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 01:41:22.490562   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 01:41:22.500330   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 01:41:22.509612   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 01:41:22.509681   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 01:41:22.520064   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 01:41:22.529864   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 01:41:22.529921   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 01:41:22.540563   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 01:41:22.556739   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 01:41:22.556797   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 01:41:22.572858   69333 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 01:41:22.583366   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:22.709007   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:23.468461   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:23.714890   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:23.865174   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:23.959048   69333 api_server.go:52] waiting for apiserver process to appear ...
	I0927 01:41:23.959140   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:24.460104   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:24.959462   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:25.460143   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:25.959473   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:26.460051   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:26.960121   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:25.784030   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:25.784429   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:25.784456   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:25.784393   70444 retry.go:31] will retry after 2.844872797s: waiting for machine to come up
	I0927 01:41:26.544176   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:29.042297   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:27.459491   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:27.959944   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:28.459636   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:28.959766   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:29.459410   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:29.959439   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:30.460176   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:30.959810   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:31.459492   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:31.959966   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:28.632445   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:28.632905   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:28.632930   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:28.632866   70444 retry.go:31] will retry after 3.566248996s: waiting for machine to come up
	I0927 01:41:32.200424   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.200804   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Found IP for machine: 192.168.61.83
	I0927 01:41:32.200832   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has current primary IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.200841   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Reserving static IP address...
	I0927 01:41:32.201137   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-368295", mac: "52:54:00:a3:b6:7a", ip: "192.168.61.83"} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:32.201151   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Reserved static IP address: 192.168.61.83
	I0927 01:41:32.201164   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | skip adding static IP to network mk-default-k8s-diff-port-368295 - found existing host DHCP lease matching {name: "default-k8s-diff-port-368295", mac: "52:54:00:a3:b6:7a", ip: "192.168.61.83"}
	I0927 01:41:32.201177   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | Getting to WaitForSSH function...
	I0927 01:41:32.201185   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for SSH to be available...
	I0927 01:41:32.203258   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.203542   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:32.203571   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.203674   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | Using SSH client type: external
	I0927 01:41:32.203704   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | Using SSH private key: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/default-k8s-diff-port-368295/id_rsa (-rw-------)
	I0927 01:41:32.203743   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.83 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19711-14935/.minikube/machines/default-k8s-diff-port-368295/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 01:41:32.203763   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | About to run SSH command:
	I0927 01:41:32.203783   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | exit 0
	I0927 01:41:32.327131   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | SSH cmd err, output: <nil>: 
	I0927 01:41:32.327499   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetConfigRaw
	I0927 01:41:32.328140   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetIP
	I0927 01:41:32.330387   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.330769   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:32.330801   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.331054   69534 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/config.json ...
	I0927 01:41:32.331257   69534 machine.go:93] provisionDockerMachine start ...
	I0927 01:41:32.331279   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:41:32.331505   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:41:32.333514   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.333799   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:32.333825   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.333940   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:41:32.334101   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:32.334267   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:32.334359   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:41:32.334509   69534 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:32.334700   69534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.83 22 <nil> <nil>}
	I0927 01:41:32.334709   69534 main.go:141] libmachine: About to run SSH command:
	hostname
	I0927 01:41:32.439884   69534 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0927 01:41:32.439921   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetMachineName
	I0927 01:41:32.440126   69534 buildroot.go:166] provisioning hostname "default-k8s-diff-port-368295"
	I0927 01:41:32.440149   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetMachineName
	I0927 01:41:32.440346   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:41:32.443385   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.443707   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:32.443742   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.443917   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:41:32.444093   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:32.444266   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:32.444427   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:41:32.444606   69534 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:32.444793   69534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.83 22 <nil> <nil>}
	I0927 01:41:32.444809   69534 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-368295 && echo "default-k8s-diff-port-368295" | sudo tee /etc/hostname
	I0927 01:41:32.570447   69534 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-368295
	
	I0927 01:41:32.570479   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:41:32.573194   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.573472   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:32.573512   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.573699   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:41:32.573942   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:32.574097   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:32.574261   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:41:32.574430   69534 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:32.574623   69534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.83 22 <nil> <nil>}
	I0927 01:41:32.574647   69534 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-368295' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-368295/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-368295' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 01:41:32.693082   69534 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 01:41:32.693107   69534 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19711-14935/.minikube CaCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19711-14935/.minikube}
	I0927 01:41:32.693140   69534 buildroot.go:174] setting up certificates
	I0927 01:41:32.693149   69534 provision.go:84] configureAuth start
	I0927 01:41:32.693160   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetMachineName
	I0927 01:41:32.693407   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetIP
	I0927 01:41:32.696156   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.696498   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:32.696522   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.696693   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:41:32.698894   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.699229   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:32.699257   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.699399   69534 provision.go:143] copyHostCerts
	I0927 01:41:32.699451   69534 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem, removing ...
	I0927 01:41:32.699464   69534 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem
	I0927 01:41:32.699530   69534 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem (1078 bytes)
	I0927 01:41:32.699639   69534 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem, removing ...
	I0927 01:41:32.699653   69534 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem
	I0927 01:41:32.699681   69534 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem (1123 bytes)
	I0927 01:41:32.699751   69534 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem, removing ...
	I0927 01:41:32.699761   69534 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem
	I0927 01:41:32.699785   69534 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem (1675 bytes)
	I0927 01:41:32.699848   69534 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-368295 san=[127.0.0.1 192.168.61.83 default-k8s-diff-port-368295 localhost minikube]
	I0927 01:41:32.887727   69534 provision.go:177] copyRemoteCerts
	I0927 01:41:32.887792   69534 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 01:41:32.887825   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:41:32.890435   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.890768   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:32.890797   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.890956   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:41:32.891128   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:32.891252   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:41:32.891373   69534 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/default-k8s-diff-port-368295/id_rsa Username:docker}
	I0927 01:41:32.973705   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0927 01:41:32.998434   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0927 01:41:33.023552   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0927 01:41:33.048884   69534 provision.go:87] duration metric: took 355.724209ms to configureAuth
	I0927 01:41:33.048910   69534 buildroot.go:189] setting minikube options for container-runtime
	I0927 01:41:33.049080   69534 config.go:182] Loaded profile config "default-k8s-diff-port-368295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 01:41:33.049149   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:41:33.051738   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.052080   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:33.052133   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.052364   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:41:33.052578   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:33.052726   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:33.052844   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:41:33.053031   69534 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:33.053265   69534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.83 22 <nil> <nil>}
	I0927 01:41:33.053283   69534 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 01:41:33.292126   69534 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 01:41:33.292148   69534 machine.go:96] duration metric: took 960.878234ms to provisionDockerMachine
	I0927 01:41:33.292159   69534 start.go:293] postStartSetup for "default-k8s-diff-port-368295" (driver="kvm2")
	I0927 01:41:33.292171   69534 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 01:41:33.292188   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:41:33.292511   69534 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 01:41:33.292539   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:41:33.295356   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.295724   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:33.295759   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.295936   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:41:33.296100   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:33.296314   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:41:33.296498   69534 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/default-k8s-diff-port-368295/id_rsa Username:docker}
	I0927 01:41:33.528391   68676 start.go:364] duration metric: took 56.042651871s to acquireMachinesLock for "no-preload-521072"
	I0927 01:41:33.528435   68676 start.go:96] Skipping create...Using existing machine configuration
	I0927 01:41:33.528445   68676 fix.go:54] fixHost starting: 
	I0927 01:41:33.528858   68676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:41:33.528890   68676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:41:33.547391   68676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38947
	I0927 01:41:33.547852   68676 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:41:33.548343   68676 main.go:141] libmachine: Using API Version  1
	I0927 01:41:33.548371   68676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:41:33.548713   68676 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:41:33.548907   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	I0927 01:41:33.549064   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetState
	I0927 01:41:33.550898   68676 fix.go:112] recreateIfNeeded on no-preload-521072: state=Stopped err=<nil>
	I0927 01:41:33.550923   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	W0927 01:41:33.551084   68676 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 01:41:33.553090   68676 out.go:177] * Restarting existing kvm2 VM for "no-preload-521072" ...
	I0927 01:41:33.554429   68676 main.go:141] libmachine: (no-preload-521072) Calling .Start
	I0927 01:41:33.554613   68676 main.go:141] libmachine: (no-preload-521072) Ensuring networks are active...
	I0927 01:41:33.555401   68676 main.go:141] libmachine: (no-preload-521072) Ensuring network default is active
	I0927 01:41:33.555858   68676 main.go:141] libmachine: (no-preload-521072) Ensuring network mk-no-preload-521072 is active
	I0927 01:41:33.556350   68676 main.go:141] libmachine: (no-preload-521072) Getting domain xml...
	I0927 01:41:33.557057   68676 main.go:141] libmachine: (no-preload-521072) Creating domain...
	I0927 01:41:34.830052   68676 main.go:141] libmachine: (no-preload-521072) Waiting to get IP...
	I0927 01:41:34.830807   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:34.831255   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:34.831340   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:34.831244   70637 retry.go:31] will retry after 267.615794ms: waiting for machine to come up
	I0927 01:41:33.378613   69534 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 01:41:33.383491   69534 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 01:41:33.383517   69534 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/addons for local assets ...
	I0927 01:41:33.383590   69534 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/files for local assets ...
	I0927 01:41:33.383695   69534 filesync.go:149] local asset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> 221382.pem in /etc/ssl/certs
	I0927 01:41:33.383810   69534 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 01:41:33.395134   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /etc/ssl/certs/221382.pem (1708 bytes)
	I0927 01:41:33.420441   69534 start.go:296] duration metric: took 128.270045ms for postStartSetup
	I0927 01:41:33.420481   69534 fix.go:56] duration metric: took 19.711948387s for fixHost
	I0927 01:41:33.420505   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:41:33.422860   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.423170   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:33.423198   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.423333   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:41:33.423517   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:33.423676   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:33.423820   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:41:33.423987   69534 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:33.424139   69534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.83 22 <nil> <nil>}
	I0927 01:41:33.424153   69534 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 01:41:33.528250   69534 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727401293.484458762
	
	I0927 01:41:33.528271   69534 fix.go:216] guest clock: 1727401293.484458762
	I0927 01:41:33.528278   69534 fix.go:229] Guest: 2024-09-27 01:41:33.484458762 +0000 UTC Remote: 2024-09-27 01:41:33.420486926 +0000 UTC m=+225.118319167 (delta=63.971836ms)
	I0927 01:41:33.528297   69534 fix.go:200] guest clock delta is within tolerance: 63.971836ms
	I0927 01:41:33.528303   69534 start.go:83] releasing machines lock for "default-k8s-diff-port-368295", held for 19.819799777s
	I0927 01:41:33.528328   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:41:33.528623   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetIP
	I0927 01:41:33.531282   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.531692   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:33.531724   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.531914   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:41:33.532476   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:41:33.532651   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:41:33.532742   69534 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 01:41:33.532784   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:41:33.532868   69534 ssh_runner.go:195] Run: cat /version.json
	I0927 01:41:33.532890   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:41:33.535432   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.535710   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.535820   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:33.535843   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.536030   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:41:33.536128   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:33.536153   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.536195   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:33.536351   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:41:33.536367   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:41:33.536513   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:33.536508   69534 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/default-k8s-diff-port-368295/id_rsa Username:docker}
	I0927 01:41:33.536634   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:41:33.536815   69534 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/default-k8s-diff-port-368295/id_rsa Username:docker}
	I0927 01:41:33.644679   69534 ssh_runner.go:195] Run: systemctl --version
	I0927 01:41:33.652386   69534 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 01:41:33.803821   69534 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 01:41:33.810620   69534 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 01:41:33.810678   69534 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 01:41:33.826938   69534 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0927 01:41:33.826963   69534 start.go:495] detecting cgroup driver to use...
	I0927 01:41:33.827028   69534 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 01:41:33.844572   69534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 01:41:33.859851   69534 docker.go:217] disabling cri-docker service (if available) ...
	I0927 01:41:33.859916   69534 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 01:41:33.874262   69534 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 01:41:33.888460   69534 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 01:41:34.011008   69534 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 01:41:34.161761   69534 docker.go:233] disabling docker service ...
	I0927 01:41:34.161855   69534 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 01:41:34.180621   69534 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 01:41:34.198472   69534 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 01:41:34.340892   69534 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 01:41:34.483708   69534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 01:41:34.498745   69534 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 01:41:34.518957   69534 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0927 01:41:34.519026   69534 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:34.530123   69534 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 01:41:34.530172   69534 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:34.545035   69534 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:34.555944   69534 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:34.566852   69534 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 01:41:34.577676   69534 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:34.589078   69534 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:34.608131   69534 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:34.619482   69534 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 01:41:34.629119   69534 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0927 01:41:34.629180   69534 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0927 01:41:34.643997   69534 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 01:41:34.656396   69534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:41:34.791856   69534 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0927 01:41:34.884774   69534 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 01:41:34.884831   69534 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 01:41:34.889590   69534 start.go:563] Will wait 60s for crictl version
	I0927 01:41:34.889633   69534 ssh_runner.go:195] Run: which crictl
	I0927 01:41:34.893330   69534 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 01:41:34.930031   69534 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0927 01:41:34.930141   69534 ssh_runner.go:195] Run: crio --version
	I0927 01:41:34.960912   69534 ssh_runner.go:195] Run: crio --version
	I0927 01:41:34.996060   69534 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
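	The CRI-O reconfiguration logged above boils down to a few shell steps on the guest; a condensed sketch of that sequence, with every command and path copied from the Run: lines above rather than offered as an authoritative recipe:
	    # point cri-o at the expected pause image and the cgroupfs driver
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	    # bridge netfilter was missing (see the sysctl failure above), so load the module and enable IP forwarding
	    sudo modprobe br_netfilter
	    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	    # apply the new configuration
	    sudo systemctl daemon-reload
	    sudo systemctl restart crio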
	I0927 01:41:31.542525   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:33.546389   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:32.459727   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:32.959527   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:33.459351   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:33.959903   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:34.459444   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:34.959423   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:35.459435   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:35.959447   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:36.460148   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:36.959874   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:34.997457   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetIP
	I0927 01:41:35.000691   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:35.001081   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:35.001127   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:35.001322   69534 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0927 01:41:35.006115   69534 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 01:41:35.019817   69534 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-368295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-368295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.83 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 01:41:35.019983   69534 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 01:41:35.020045   69534 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 01:41:35.062533   69534 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0927 01:41:35.062595   69534 ssh_runner.go:195] Run: which lz4
	I0927 01:41:35.066897   69534 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0927 01:41:35.071178   69534 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0927 01:41:35.071216   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0927 01:41:36.563774   69534 crio.go:462] duration metric: took 1.496913722s to copy over tarball
	I0927 01:41:36.563866   69534 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0927 01:41:35.100818   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:35.101327   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:35.101354   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:35.101290   70637 retry.go:31] will retry after 244.193758ms: waiting for machine to come up
	I0927 01:41:35.347021   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:35.347674   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:35.347714   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:35.347650   70637 retry.go:31] will retry after 361.672884ms: waiting for machine to come up
	I0927 01:41:35.711206   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:35.711755   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:35.711788   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:35.711730   70637 retry.go:31] will retry after 406.084841ms: waiting for machine to come up
	I0927 01:41:36.119494   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:36.120026   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:36.120067   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:36.119978   70637 retry.go:31] will retry after 497.966133ms: waiting for machine to come up
	I0927 01:41:36.619859   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:36.620400   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:36.620428   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:36.620362   70637 retry.go:31] will retry after 765.975603ms: waiting for machine to come up
	I0927 01:41:37.387821   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:37.388502   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:37.388537   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:37.388453   70637 retry.go:31] will retry after 828.567445ms: waiting for machine to come up
	I0927 01:41:38.218462   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:38.218940   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:38.218974   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:38.218803   70637 retry.go:31] will retry after 1.269155563s: waiting for machine to come up
	I0927 01:41:39.489076   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:39.489557   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:39.489583   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:39.489514   70637 retry.go:31] will retry after 1.666481574s: waiting for machine to come up
	I0927 01:41:35.554859   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:38.043285   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:40.542499   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:37.459766   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:37.959594   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:38.459971   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:38.960093   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:39.459983   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:39.959812   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:40.460220   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:40.959253   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:41.459829   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:41.959864   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:38.667451   69534 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.10354947s)
	I0927 01:41:38.667477   69534 crio.go:469] duration metric: took 2.103669113s to extract the tarball
	I0927 01:41:38.667487   69534 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0927 01:41:38.704217   69534 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 01:41:38.747162   69534 crio.go:514] all images are preloaded for cri-o runtime.
	I0927 01:41:38.747187   69534 cache_images.go:84] Images are preloaded, skipping loading
	I0927 01:41:38.747197   69534 kubeadm.go:934] updating node { 192.168.61.83 8444 v1.31.1 crio true true} ...
	I0927 01:41:38.747323   69534 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-368295 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.83
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-368295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 01:41:38.747406   69534 ssh_runner.go:195] Run: crio config
	I0927 01:41:38.796481   69534 cni.go:84] Creating CNI manager for ""
	I0927 01:41:38.796510   69534 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:41:38.796522   69534 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 01:41:38.796549   69534 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.83 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-368295 NodeName:default-k8s-diff-port-368295 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.83"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.83 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0927 01:41:38.796726   69534 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.83
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-368295"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.83
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.83"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0927 01:41:38.796806   69534 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 01:41:38.807445   69534 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 01:41:38.807513   69534 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0927 01:41:38.817368   69534 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0927 01:41:38.834181   69534 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 01:41:38.851650   69534 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
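	If the rendered kubeadm config or the kubelet drop-in needs to be checked against what actually landed on the node, both files written above can be read back over SSH (paths taken from the scp lines above; a sketch, assuming the profile is still running):
	    # kubeadm config staged by minikube
	    minikube -p default-k8s-diff-port-368295 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
	    # kubelet systemd drop-in
	    minikube -p default-k8s-diff-port-368295 ssh -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf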
	I0927 01:41:38.869822   69534 ssh_runner.go:195] Run: grep 192.168.61.83	control-plane.minikube.internal$ /etc/hosts
	I0927 01:41:38.873868   69534 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.83	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 01:41:38.886422   69534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:41:39.022075   69534 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 01:41:39.038948   69534 certs.go:68] Setting up /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295 for IP: 192.168.61.83
	I0927 01:41:39.038982   69534 certs.go:194] generating shared ca certs ...
	I0927 01:41:39.039004   69534 certs.go:226] acquiring lock for ca certs: {Name:mkdfc5b7e93f77f5ae72cc653545624244421aa5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:41:39.039174   69534 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key
	I0927 01:41:39.039241   69534 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key
	I0927 01:41:39.039253   69534 certs.go:256] generating profile certs ...
	I0927 01:41:39.039402   69534 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/client.key
	I0927 01:41:39.039490   69534 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/apiserver.key.2edc0267
	I0927 01:41:39.039549   69534 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/proxy-client.key
	I0927 01:41:39.039701   69534 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem (1338 bytes)
	W0927 01:41:39.039773   69534 certs.go:480] ignoring /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138_empty.pem, impossibly tiny 0 bytes
	I0927 01:41:39.039789   69534 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem (1671 bytes)
	I0927 01:41:39.039825   69534 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem (1078 bytes)
	I0927 01:41:39.039860   69534 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem (1123 bytes)
	I0927 01:41:39.039889   69534 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem (1675 bytes)
	I0927 01:41:39.039950   69534 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem (1708 bytes)
	I0927 01:41:39.040814   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 01:41:39.080130   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0927 01:41:39.133365   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 01:41:39.169238   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0927 01:41:39.196619   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0927 01:41:39.227667   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0927 01:41:39.255240   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 01:41:39.280602   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0927 01:41:39.305695   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 01:41:39.329559   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem --> /usr/share/ca-certificates/22138.pem (1338 bytes)
	I0927 01:41:39.358555   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /usr/share/ca-certificates/221382.pem (1708 bytes)
	I0927 01:41:39.387030   69534 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 01:41:39.404111   69534 ssh_runner.go:195] Run: openssl version
	I0927 01:41:39.409879   69534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 01:41:39.420542   69534 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:41:39.425094   69534 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:16 /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:41:39.425151   69534 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:41:39.431225   69534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 01:41:39.442237   69534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22138.pem && ln -fs /usr/share/ca-certificates/22138.pem /etc/ssl/certs/22138.pem"
	I0927 01:41:39.453229   69534 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22138.pem
	I0927 01:41:39.458040   69534 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 00:32 /usr/share/ca-certificates/22138.pem
	I0927 01:41:39.458110   69534 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22138.pem
	I0927 01:41:39.464177   69534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22138.pem /etc/ssl/certs/51391683.0"
	I0927 01:41:39.475582   69534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221382.pem && ln -fs /usr/share/ca-certificates/221382.pem /etc/ssl/certs/221382.pem"
	I0927 01:41:39.486911   69534 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221382.pem
	I0927 01:41:39.491843   69534 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 00:32 /usr/share/ca-certificates/221382.pem
	I0927 01:41:39.491898   69534 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221382.pem
	I0927 01:41:39.497653   69534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221382.pem /etc/ssl/certs/3ec20f2e.0"
	I0927 01:41:39.508039   69534 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 01:41:39.512597   69534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0927 01:41:39.518557   69534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0927 01:41:39.524475   69534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0927 01:41:39.530616   69534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0927 01:41:39.536820   69534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0927 01:41:39.543487   69534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
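	The -checkend 86400 probes above only answer yes/no: exit status 0 means the certificate is still valid 86400 seconds (24 hours) from now. A hedged example of inspecting one of the same files more closely on the node, using standard openssl flags:
	    # same validity probe as the log
	    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 && echo "valid for 24h"
	    # print the actual expiry timestamp instead
	    sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver-kubelet-client.crt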
	I0927 01:41:39.549791   69534 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-368295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-368295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.83 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 01:41:39.549880   69534 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0927 01:41:39.549945   69534 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 01:41:39.594178   69534 cri.go:89] found id: ""
	I0927 01:41:39.594256   69534 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0927 01:41:39.605173   69534 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0927 01:41:39.605195   69534 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0927 01:41:39.605261   69534 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0927 01:41:39.615543   69534 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0927 01:41:39.616639   69534 kubeconfig.go:125] found "default-k8s-diff-port-368295" server: "https://192.168.61.83:8444"
	I0927 01:41:39.618793   69534 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0927 01:41:39.628422   69534 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.83
	I0927 01:41:39.628454   69534 kubeadm.go:1160] stopping kube-system containers ...
	I0927 01:41:39.628465   69534 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0927 01:41:39.628566   69534 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 01:41:39.673513   69534 cri.go:89] found id: ""
	I0927 01:41:39.673592   69534 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0927 01:41:39.690296   69534 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 01:41:39.699800   69534 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 01:41:39.699821   69534 kubeadm.go:157] found existing configuration files:
	
	I0927 01:41:39.699876   69534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0927 01:41:39.709235   69534 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 01:41:39.709294   69534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 01:41:39.719012   69534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0927 01:41:39.728197   69534 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 01:41:39.728262   69534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 01:41:39.737520   69534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0927 01:41:39.746592   69534 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 01:41:39.746653   69534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 01:41:39.756251   69534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0927 01:41:39.765026   69534 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 01:41:39.765090   69534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 01:41:39.774937   69534 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 01:41:39.784588   69534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:39.893259   69534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:40.625162   69534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:40.954926   69534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:41.025693   69534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:41.101915   69534 api_server.go:52] waiting for apiserver process to appear ...
	I0927 01:41:41.102006   69534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:41.602856   69534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:42.102942   69534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:42.602371   69534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:42.620056   69534 api_server.go:72] duration metric: took 1.518136259s to wait for apiserver process to appear ...
	I0927 01:41:42.620085   69534 api_server.go:88] waiting for apiserver healthz status ...
	I0927 01:41:42.620107   69534 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8444/healthz ...
	I0927 01:41:41.157254   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:41.157789   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:41.157817   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:41.157738   70637 retry.go:31] will retry after 1.495421187s: waiting for machine to come up
	I0927 01:41:42.655326   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:42.655826   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:42.655853   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:42.655771   70637 retry.go:31] will retry after 2.80191937s: waiting for machine to come up
	I0927 01:41:42.543732   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:45.043009   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:45.040496   69534 api_server.go:279] https://192.168.61.83:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0927 01:41:45.040525   69534 api_server.go:103] status: https://192.168.61.83:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0927 01:41:45.040542   69534 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8444/healthz ...
	I0927 01:41:45.079569   69534 api_server.go:279] https://192.168.61.83:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0927 01:41:45.079602   69534 api_server.go:103] status: https://192.168.61.83:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0927 01:41:45.120702   69534 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8444/healthz ...
	I0927 01:41:45.126461   69534 api_server.go:279] https://192.168.61.83:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0927 01:41:45.126488   69534 api_server.go:103] status: https://192.168.61.83:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0927 01:41:45.621130   69534 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8444/healthz ...
	I0927 01:41:45.629533   69534 api_server.go:279] https://192.168.61.83:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:41:45.629569   69534 api_server.go:103] status: https://192.168.61.83:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:41:46.121189   69534 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8444/healthz ...
	I0927 01:41:46.130806   69534 api_server.go:279] https://192.168.61.83:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:41:46.130842   69534 api_server.go:103] status: https://192.168.61.83:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:41:46.620334   69534 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8444/healthz ...
	I0927 01:41:46.625456   69534 api_server.go:279] https://192.168.61.83:8444/healthz returned 200:
	ok
	I0927 01:41:46.636549   69534 api_server.go:141] control plane version: v1.31.1
	I0927 01:41:46.636581   69534 api_server.go:131] duration metric: took 4.016488114s to wait for apiserver health ...
	I0927 01:41:46.636591   69534 cni.go:84] Creating CNI manager for ""
	I0927 01:41:46.636599   69534 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:41:46.638016   69534 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
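(Editor's note: the api_server.go lines above show the restart logic polling https://192.168.61.83:8444/healthz until the 403/500 responses give way to a 200. The following is a minimal, hypothetical Go sketch of that kind of polling loop, not minikube's actual implementation; the URL and timeout are taken from the log for illustration only.)

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the timeout elapses, printing non-200 bodies the way the log above does.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// The restarting apiserver serves a self-signed certificate, so the
		// anonymous probe skips TLS verification (assumption for this sketch).
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // corresponds to "healthz returned 200: ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver healthz not ready within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.83:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}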
	I0927 01:41:42.459806   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:42.960200   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:43.459511   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:43.959467   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:44.459352   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:44.960147   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:45.459637   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:45.959535   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:46.459585   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:46.959579   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
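(Editor's note: the repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" runs above are a wait loop for the apiserver process to appear. A rough sketch of that pattern is below; it is assumed, not minikube's code, and runs pgrep locally via os/exec instead of over the SSH runner used in the log.)

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess retries the process lookup every 500ms until pgrep
// exits 0 (a matching kube-apiserver process exists) or the deadline passes.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServerProcess(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}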
	I0927 01:41:46.639222   69534 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0927 01:41:46.651680   69534 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0927 01:41:46.671366   69534 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 01:41:46.684702   69534 system_pods.go:59] 8 kube-system pods found
	I0927 01:41:46.684740   69534 system_pods.go:61] "coredns-7c65d6cfc9-xtgdx" [6a5f97bd-0fbb-4220-a763-bb8ca6fab439] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0927 01:41:46.684752   69534 system_pods.go:61] "etcd-default-k8s-diff-port-368295" [2dbd4866-89f2-4a0c-ab8a-671ff0237bf3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0927 01:41:46.684761   69534 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-368295" [62865280-e996-45a9-a872-766e09d5b91c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0927 01:41:46.684774   69534 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-368295" [b0d06bec-2f5a-46e4-9d2d-b2ea7cdc7968] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0927 01:41:46.684781   69534 system_pods.go:61] "kube-proxy-xm2p8" [449495d5-a476-4abf-b6be-301b9ead92e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0927 01:41:46.684793   69534 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-368295" [71dadb93-c535-4ce3-8dd7-ffd4496bf0e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0927 01:41:46.684801   69534 system_pods.go:61] "metrics-server-6867b74b74-n9nsg" [fefb6977-44af-41f8-8a82-1dcd76374ac0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 01:41:46.684811   69534 system_pods.go:61] "storage-provisioner" [78bd924c-1d70-4eb6-9e2c-0e21ebc523dc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0927 01:41:46.684818   69534 system_pods.go:74] duration metric: took 13.431978ms to wait for pod list to return data ...
	I0927 01:41:46.684830   69534 node_conditions.go:102] verifying NodePressure condition ...
	I0927 01:41:46.690309   69534 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 01:41:46.690343   69534 node_conditions.go:123] node cpu capacity is 2
	I0927 01:41:46.690358   69534 node_conditions.go:105] duration metric: took 5.522911ms to run NodePressure ...
	I0927 01:41:46.690379   69534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:46.964511   69534 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0927 01:41:46.971731   69534 kubeadm.go:739] kubelet initialised
	I0927 01:41:46.971751   69534 kubeadm.go:740] duration metric: took 7.215476ms waiting for restarted kubelet to initialise ...
	I0927 01:41:46.971760   69534 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:41:46.978192   69534 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-xtgdx" in "kube-system" namespace to be "Ready" ...
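(Editor's note: the pod_ready.go lines above wait for system pods such as coredns-7c65d6cfc9-xtgdx to report Ready. The sketch below is a hypothetical client-go illustration of that check, not minikube's implementation; the kubeconfig path is a placeholder and the pod name is copied from the log purely as an example.)

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady fetches the pod and reports whether its Ready condition is True.
func podReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// "/path/to/kubeconfig" is a placeholder for this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		ok, err := podReady(context.Background(), cs, "kube-system", "coredns-7c65d6cfc9-xtgdx")
		if err == nil && ok {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}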
	I0927 01:41:45.459706   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:45.460242   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:45.460265   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:45.460161   70637 retry.go:31] will retry after 3.051133432s: waiting for machine to come up
	I0927 01:41:48.512758   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:48.513180   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:48.513208   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:48.513118   70637 retry.go:31] will retry after 3.478053984s: waiting for machine to come up
	I0927 01:41:47.544064   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:50.042360   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:47.459645   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:47.959756   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:48.460088   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:48.959526   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:49.459321   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:49.960102   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:50.460203   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:50.960225   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:51.460182   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:51.959343   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:48.985840   69534 pod_ready.go:103] pod "coredns-7c65d6cfc9-xtgdx" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:51.506449   69534 pod_ready.go:103] pod "coredns-7c65d6cfc9-xtgdx" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:52.484646   69534 pod_ready.go:93] pod "coredns-7c65d6cfc9-xtgdx" in "kube-system" namespace has status "Ready":"True"
	I0927 01:41:52.484672   69534 pod_ready.go:82] duration metric: took 5.506454681s for pod "coredns-7c65d6cfc9-xtgdx" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:52.484685   69534 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:51.994746   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:51.995201   68676 main.go:141] libmachine: (no-preload-521072) Found IP for machine: 192.168.50.246
	I0927 01:41:51.995219   68676 main.go:141] libmachine: (no-preload-521072) Reserving static IP address...
	I0927 01:41:51.995230   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has current primary IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:51.995651   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "no-preload-521072", mac: "52:54:00:85:27:74", ip: "192.168.50.246"} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:51.995677   68676 main.go:141] libmachine: (no-preload-521072) Reserved static IP address: 192.168.50.246
	I0927 01:41:51.995695   68676 main.go:141] libmachine: (no-preload-521072) DBG | skip adding static IP to network mk-no-preload-521072 - found existing host DHCP lease matching {name: "no-preload-521072", mac: "52:54:00:85:27:74", ip: "192.168.50.246"}
	I0927 01:41:51.995713   68676 main.go:141] libmachine: (no-preload-521072) DBG | Getting to WaitForSSH function...
	I0927 01:41:51.995727   68676 main.go:141] libmachine: (no-preload-521072) Waiting for SSH to be available...
	I0927 01:41:51.998245   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:51.998590   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:51.998616   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:51.998748   68676 main.go:141] libmachine: (no-preload-521072) DBG | Using SSH client type: external
	I0927 01:41:51.998810   68676 main.go:141] libmachine: (no-preload-521072) DBG | Using SSH private key: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/no-preload-521072/id_rsa (-rw-------)
	I0927 01:41:51.998850   68676 main.go:141] libmachine: (no-preload-521072) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.246 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19711-14935/.minikube/machines/no-preload-521072/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 01:41:51.998866   68676 main.go:141] libmachine: (no-preload-521072) DBG | About to run SSH command:
	I0927 01:41:51.998877   68676 main.go:141] libmachine: (no-preload-521072) DBG | exit 0
	I0927 01:41:52.131754   68676 main.go:141] libmachine: (no-preload-521072) DBG | SSH cmd err, output: <nil>: 
	I0927 01:41:52.132117   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetConfigRaw
	I0927 01:41:52.132724   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetIP
	I0927 01:41:52.135236   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.135588   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:52.135615   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.135866   68676 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/no-preload-521072/config.json ...
	I0927 01:41:52.136059   68676 machine.go:93] provisionDockerMachine start ...
	I0927 01:41:52.136078   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	I0927 01:41:52.136300   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:41:52.138644   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.139009   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:52.139035   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.139215   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:41:52.139406   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:52.139602   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:52.139760   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:41:52.139931   68676 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:52.140139   68676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.246 22 <nil> <nil>}
	I0927 01:41:52.140151   68676 main.go:141] libmachine: About to run SSH command:
	hostname
	I0927 01:41:52.255655   68676 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0927 01:41:52.255690   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetMachineName
	I0927 01:41:52.255952   68676 buildroot.go:166] provisioning hostname "no-preload-521072"
	I0927 01:41:52.255968   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetMachineName
	I0927 01:41:52.256122   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:41:52.258599   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.258963   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:52.258994   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.259108   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:41:52.259322   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:52.259494   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:52.259676   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:41:52.259835   68676 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:52.260008   68676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.246 22 <nil> <nil>}
	I0927 01:41:52.260023   68676 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-521072 && echo "no-preload-521072" | sudo tee /etc/hostname
	I0927 01:41:52.405255   68676 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-521072
	
	I0927 01:41:52.405314   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:41:52.408593   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.408927   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:52.408973   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.409346   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:41:52.409591   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:52.409786   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:52.409940   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:41:52.410094   68676 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:52.410331   68676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.246 22 <nil> <nil>}
	I0927 01:41:52.410356   68676 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-521072' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-521072/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-521072' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 01:41:52.538244   68676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 01:41:52.538276   68676 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19711-14935/.minikube CaCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19711-14935/.minikube}
	I0927 01:41:52.538321   68676 buildroot.go:174] setting up certificates
	I0927 01:41:52.538335   68676 provision.go:84] configureAuth start
	I0927 01:41:52.538350   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetMachineName
	I0927 01:41:52.538644   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetIP
	I0927 01:41:52.541913   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.542334   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:52.542372   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.542540   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:41:52.544773   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.545127   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:52.545163   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.545357   68676 provision.go:143] copyHostCerts
	I0927 01:41:52.545415   68676 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem, removing ...
	I0927 01:41:52.545427   68676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem
	I0927 01:41:52.545496   68676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem (1078 bytes)
	I0927 01:41:52.545614   68676 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem, removing ...
	I0927 01:41:52.545624   68676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem
	I0927 01:41:52.545655   68676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem (1123 bytes)
	I0927 01:41:52.545732   68676 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem, removing ...
	I0927 01:41:52.545742   68676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem
	I0927 01:41:52.545768   68676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem (1675 bytes)
	I0927 01:41:52.545834   68676 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem org=jenkins.no-preload-521072 san=[127.0.0.1 192.168.50.246 localhost minikube no-preload-521072]
	I0927 01:41:52.738375   68676 provision.go:177] copyRemoteCerts
	I0927 01:41:52.738434   68676 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 01:41:52.738459   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:41:52.741146   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.741439   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:52.741456   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.741630   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:41:52.741828   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:52.741961   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:41:52.742086   68676 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/no-preload-521072/id_rsa Username:docker}
	I0927 01:41:52.830330   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0927 01:41:52.854664   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0927 01:41:52.879246   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0927 01:41:52.902734   68676 provision.go:87] duration metric: took 364.385528ms to configureAuth
	I0927 01:41:52.902782   68676 buildroot.go:189] setting minikube options for container-runtime
	I0927 01:41:52.903017   68676 config.go:182] Loaded profile config "no-preload-521072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 01:41:52.903109   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:41:52.906143   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.906495   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:52.906526   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.906699   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:41:52.906917   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:52.907086   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:52.907211   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:41:52.907426   68676 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:52.907625   68676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.246 22 <nil> <nil>}
	I0927 01:41:52.907640   68676 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 01:41:53.162936   68676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 01:41:53.162960   68676 machine.go:96] duration metric: took 1.026891152s to provisionDockerMachine
	I0927 01:41:53.162971   68676 start.go:293] postStartSetup for "no-preload-521072" (driver="kvm2")
	I0927 01:41:53.162980   68676 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 01:41:53.162994   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	I0927 01:41:53.163325   68676 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 01:41:53.163360   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:41:53.166007   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:53.166478   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:53.166516   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:53.166726   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:41:53.166919   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:53.167103   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:41:53.167253   68676 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/no-preload-521072/id_rsa Username:docker}
	I0927 01:41:53.254620   68676 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 01:41:53.259139   68676 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 01:41:53.259160   68676 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/addons for local assets ...
	I0927 01:41:53.259236   68676 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/files for local assets ...
	I0927 01:41:53.259341   68676 filesync.go:149] local asset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> 221382.pem in /etc/ssl/certs
	I0927 01:41:53.259465   68676 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 01:41:53.269711   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /etc/ssl/certs/221382.pem (1708 bytes)
	I0927 01:41:53.294563   68676 start.go:296] duration metric: took 131.58032ms for postStartSetup
	I0927 01:41:53.294602   68676 fix.go:56] duration metric: took 19.766156729s for fixHost
	I0927 01:41:53.294626   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:41:53.297597   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:53.297897   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:53.297928   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:53.298092   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:41:53.298275   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:53.298460   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:53.298632   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:41:53.298821   68676 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:53.298997   68676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.246 22 <nil> <nil>}
	I0927 01:41:53.299010   68676 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 01:41:53.416459   68676 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727401313.370238189
	
	I0927 01:41:53.416488   68676 fix.go:216] guest clock: 1727401313.370238189
	I0927 01:41:53.416497   68676 fix.go:229] Guest: 2024-09-27 01:41:53.370238189 +0000 UTC Remote: 2024-09-27 01:41:53.294607439 +0000 UTC m=+358.400757430 (delta=75.63075ms)
	I0927 01:41:53.416521   68676 fix.go:200] guest clock delta is within tolerance: 75.63075ms
	I0927 01:41:53.416542   68676 start.go:83] releasing machines lock for "no-preload-521072", held for 19.888127741s
	I0927 01:41:53.416581   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	I0927 01:41:53.416835   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetIP
	I0927 01:41:53.419800   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:53.420124   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:53.420153   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:53.420309   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	I0927 01:41:53.420730   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	I0927 01:41:53.420905   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	I0927 01:41:53.420988   68676 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 01:41:53.421036   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:41:53.421126   68676 ssh_runner.go:195] Run: cat /version.json
	I0927 01:41:53.421148   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:41:53.423529   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:53.423882   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:53.423916   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:53.423937   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:53.424023   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:41:53.424180   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:53.424308   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:41:53.424365   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:53.424412   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:53.424464   68676 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/no-preload-521072/id_rsa Username:docker}
	I0927 01:41:53.424567   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:41:53.424701   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:53.424838   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:41:53.424990   68676 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/no-preload-521072/id_rsa Username:docker}
	I0927 01:41:53.527586   68676 ssh_runner.go:195] Run: systemctl --version
	I0927 01:41:53.533685   68676 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 01:41:53.680850   68676 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 01:41:53.686769   68676 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 01:41:53.686831   68676 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 01:41:53.702686   68676 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0927 01:41:53.702709   68676 start.go:495] detecting cgroup driver to use...
	I0927 01:41:53.702787   68676 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 01:41:53.720756   68676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 01:41:53.736843   68676 docker.go:217] disabling cri-docker service (if available) ...
	I0927 01:41:53.736920   68676 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 01:41:53.752063   68676 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 01:41:53.768140   68676 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 01:41:53.890040   68676 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 01:41:54.044033   68676 docker.go:233] disabling docker service ...
	I0927 01:41:54.044100   68676 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 01:41:54.060061   68676 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 01:41:54.073201   68676 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 01:41:54.225559   68676 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 01:41:54.367269   68676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 01:41:54.381517   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 01:41:54.401099   68676 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0927 01:41:54.401164   68676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:54.412620   68676 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 01:41:54.412687   68676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:54.425942   68676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:54.437451   68676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:54.449115   68676 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 01:41:54.460383   68676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:54.471393   68676 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:54.489649   68676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
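The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place; the resulting keys should look roughly like the commented output below (inferred from the logged sed expressions, not a dump of the actual file):

    sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    # default_sysctls = [
    #   "net.ipv4.ip_unprivileged_port_start=0",
    # ]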
	I0927 01:41:54.500699   68676 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 01:41:54.511012   68676 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0927 01:41:54.511061   68676 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0927 01:41:54.524738   68676 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 01:41:54.535353   68676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:41:54.672416   68676 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0927 01:41:54.763423   68676 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 01:41:54.763506   68676 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 01:41:54.768758   68676 start.go:563] Will wait 60s for crictl version
	I0927 01:41:54.768823   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:41:54.772980   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 01:41:54.814375   68676 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0927 01:41:54.814460   68676 ssh_runner.go:195] Run: crio --version
	I0927 01:41:54.844002   68676 ssh_runner.go:195] Run: crio --version
	I0927 01:41:54.876692   68676 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0927 01:41:54.877765   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetIP
	I0927 01:41:54.880320   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:54.880817   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:54.880852   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:54.881008   68676 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0927 01:41:54.885225   68676 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 01:41:54.897661   68676 kubeadm.go:883] updating cluster {Name:no-preload-521072 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-521072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 01:41:54.897768   68676 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 01:41:54.897810   68676 ssh_runner.go:195] Run: sudo crictl images --output json
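The crictl listing above is how minikube decides whether the runtime already holds the images it needs; a manual spot-check could look like this (a sketch, assuming jq is installed on the node):

    sudo crictl images --output json \
      | jq -r '.images[].repoTags[]' \
      | grep -E 'kube-(apiserver|controller-manager|scheduler|proxy)|etcd|coredns|pause|storage-provisioner'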
	I0927 01:41:52.542326   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:54.543472   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:52.459589   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:52.960231   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:53.459448   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:53.960120   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:54.460016   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:54.959681   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:55.459321   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:55.959819   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:56.459221   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:56.959296   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:54.491390   69534 pod_ready.go:103] pod "etcd-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:56.997932   69534 pod_ready.go:103] pod "etcd-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"False"
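The pod_ready lines above (and throughout this log) poll the pod's Ready condition. An equivalent manual check, assuming the kubectl context carries the profile name as minikube normally sets it up, would be:

    kubectl --context default-k8s-diff-port-368295 -n kube-system get pod etcd-default-k8s-diff-port-368295 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # prints "True" once the pod is Ready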
	I0927 01:41:54.937979   68676 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0927 01:41:54.938000   68676 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0927 01:41:54.938055   68676 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0927 01:41:54.938088   68676 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:41:54.938103   68676 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0927 01:41:54.938124   68676 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0927 01:41:54.938101   68676 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0927 01:41:54.938180   68676 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0927 01:41:54.938069   68676 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0927 01:41:54.938088   68676 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0927 01:41:54.939611   68676 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0927 01:41:54.939853   68676 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:41:54.939867   68676 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0927 01:41:54.939872   68676 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0927 01:41:54.939875   68676 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0927 01:41:54.939868   68676 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0927 01:41:54.939932   68676 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0927 01:41:54.939954   68676 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0927 01:41:55.100149   68676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0927 01:41:55.104432   68676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0927 01:41:55.122220   68676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0927 01:41:55.146745   68676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0927 01:41:55.148808   68676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0927 01:41:55.159749   68676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0927 01:41:55.194662   68676 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0927 01:41:55.194710   68676 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0927 01:41:55.194764   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:41:55.218262   68676 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0927 01:41:55.218302   68676 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0927 01:41:55.218348   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:41:55.275530   68676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0927 01:41:55.339428   68676 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0927 01:41:55.339476   68676 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0927 01:41:55.339488   68676 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0927 01:41:55.339526   68676 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0927 01:41:55.339554   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:41:55.339558   68676 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0927 01:41:55.339569   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:41:55.339573   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0927 01:41:55.339584   68676 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0927 01:41:55.339619   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:41:55.339625   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0927 01:41:55.339689   68676 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0927 01:41:55.339733   68676 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0927 01:41:55.339772   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:41:55.392986   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0927 01:41:55.393033   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0927 01:41:55.403596   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0927 01:41:55.403658   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0927 01:41:55.403601   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0927 01:41:55.404180   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0927 01:41:55.528983   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0927 01:41:55.529008   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0927 01:41:55.529013   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0927 01:41:55.556122   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0927 01:41:55.556146   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0927 01:41:55.559222   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0927 01:41:55.668914   68676 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0927 01:41:55.669041   68676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0927 01:41:55.671951   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0927 01:41:55.672026   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0927 01:41:55.675810   68676 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0927 01:41:55.675854   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0927 01:41:55.675883   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0927 01:41:55.675910   68676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0927 01:41:55.687199   68676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0927 01:41:55.687234   68676 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0927 01:41:55.687294   68676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0927 01:41:55.766777   68676 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0927 01:41:55.766775   68676 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0927 01:41:55.766894   68676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0927 01:41:55.766901   68676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0927 01:41:55.776811   68676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0927 01:41:55.776824   68676 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0927 01:41:55.776933   68676 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0927 01:41:55.777033   68676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0927 01:41:55.776938   68676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0927 01:41:56.125882   68676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:41:57.825382   68676 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1: (2.048325373s)
	I0927 01:41:57.825460   68676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0927 01:41:57.825396   68676 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: (2.048309349s)
	I0927 01:41:57.825483   68676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0927 01:41:57.825401   68676 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.699485021s)
	I0927 01:41:57.825517   68676 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0927 01:41:57.825520   68676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.138185505s)
	I0927 01:41:57.825540   68676 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0927 01:41:57.825548   68676 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:41:57.825411   68676 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.058505151s)
	I0927 01:41:57.825566   68676 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0927 01:41:57.825573   68676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0927 01:41:57.825414   68676 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.058497946s)
	I0927 01:41:57.825584   68676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0927 01:41:57.825596   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:41:57.825613   68676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0927 01:41:59.788391   68676 ssh_runner.go:235] Completed: which crictl: (1.962775321s)
	I0927 01:41:59.788412   68676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.962779963s)
	I0927 01:41:59.788429   68676 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0927 01:41:59.788457   68676 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0927 01:41:59.788462   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:41:59.788499   68676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0927 01:41:57.043267   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:59.542589   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:57.459172   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:57.960231   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:58.459323   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:58.960219   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:59.459916   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:59.959858   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:00.460249   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:00.959246   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:01.459839   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:01.959224   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
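The repeated pgrep calls above are the apiserver-process wait loop; as a standalone sketch (the ~500ms interval is inferred from the timestamps):

    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 0.5
    done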
	I0927 01:41:59.490443   69534 pod_ready.go:103] pod "etcd-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:59.992727   69534 pod_ready.go:93] pod "etcd-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"True"
	I0927 01:41:59.992753   69534 pod_ready.go:82] duration metric: took 7.508057707s for pod "etcd-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:59.992766   69534 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:59.998326   69534 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"True"
	I0927 01:41:59.998357   69534 pod_ready.go:82] duration metric: took 5.584215ms for pod "kube-apiserver-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:59.998372   69534 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:00.003176   69534 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"True"
	I0927 01:42:00.003197   69534 pod_ready.go:82] duration metric: took 4.816939ms for pod "kube-controller-manager-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:00.003209   69534 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-xm2p8" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:00.009089   69534 pod_ready.go:93] pod "kube-proxy-xm2p8" in "kube-system" namespace has status "Ready":"True"
	I0927 01:42:00.009110   69534 pod_ready.go:82] duration metric: took 5.893939ms for pod "kube-proxy-xm2p8" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:00.009119   69534 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:00.014172   69534 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"True"
	I0927 01:42:00.014197   69534 pod_ready.go:82] duration metric: took 5.072107ms for pod "kube-scheduler-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:00.014209   69534 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:02.021372   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:01.758278   68676 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.969794291s)
	I0927 01:42:01.758369   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:42:01.758392   68676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.969869427s)
	I0927 01:42:01.758415   68676 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0927 01:42:01.758445   68676 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0927 01:42:01.758494   68676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0927 01:42:01.796910   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:42:03.934871   68676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (2.176354046s)
	I0927 01:42:03.934903   68676 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0927 01:42:03.934921   68676 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0927 01:42:03.934927   68676 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.137986898s)
	I0927 01:42:03.934972   68676 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0927 01:42:03.934994   68676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0927 01:42:03.935050   68676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0927 01:42:03.939942   68676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0927 01:42:02.042617   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:04.042848   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:02.460232   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:02.959635   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:03.459610   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:03.959412   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:04.459857   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:04.959495   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:05.459972   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:05.959931   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:06.459460   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:06.959627   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:04.021759   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:06.521921   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:07.308972   68676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.373952677s)
	I0927 01:42:07.308999   68676 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0927 01:42:07.309024   68676 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0927 01:42:07.309070   68676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0927 01:42:09.378517   68676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.06942074s)
	I0927 01:42:09.378550   68676 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0927 01:42:09.378579   68676 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0927 01:42:09.378629   68676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0927 01:42:06.546731   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:09.044481   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:07.459395   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:07.959574   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:08.460234   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:08.959281   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:09.459240   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:09.959429   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:10.459865   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:10.959431   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:11.459459   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:11.959447   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:09.020456   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:11.021689   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:10.030049   68676 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0927 01:42:10.030100   68676 cache_images.go:123] Successfully loaded all cached images
	I0927 01:42:10.030106   68676 cache_images.go:92] duration metric: took 15.09209404s to LoadCachedImages
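Each cached image was copied to /var/lib/minikube/images and loaded with podman; one step of that sequence, done by hand, would be roughly (paths taken from the log above):

    sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
    sudo crictl images | grep etcd   # verify the runtime now sees the image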
	I0927 01:42:10.030118   68676 kubeadm.go:934] updating node { 192.168.50.246 8443 v1.31.1 crio true true} ...
	I0927 01:42:10.030211   68676 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-521072 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.246
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-521072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 01:42:10.030273   68676 ssh_runner.go:195] Run: crio config
	I0927 01:42:10.078318   68676 cni.go:84] Creating CNI manager for ""
	I0927 01:42:10.078342   68676 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:42:10.078351   68676 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 01:42:10.078370   68676 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.246 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-521072 NodeName:no-preload-521072 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.246"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.246 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0927 01:42:10.078506   68676 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.246
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-521072"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.246
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.246"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0927 01:42:10.078580   68676 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 01:42:10.089137   68676 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 01:42:10.089212   68676 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0927 01:42:10.098310   68676 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0927 01:42:10.116172   68676 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 01:42:10.134642   68676 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
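The rendered kubeadm config shown above is copied to /var/tmp/minikube/kubeadm.yaml.new. If needed, it could be sanity-checked on the node with kubeadm's own validator (a sketch, assuming the validate subcommand is available in this kubeadm build):

    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new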
	I0927 01:42:10.152442   68676 ssh_runner.go:195] Run: grep 192.168.50.246	control-plane.minikube.internal$ /etc/hosts
	I0927 01:42:10.156477   68676 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.246	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 01:42:10.169007   68676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:42:10.288382   68676 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 01:42:10.306047   68676 certs.go:68] Setting up /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/no-preload-521072 for IP: 192.168.50.246
	I0927 01:42:10.306077   68676 certs.go:194] generating shared ca certs ...
	I0927 01:42:10.306096   68676 certs.go:226] acquiring lock for ca certs: {Name:mkdfc5b7e93f77f5ae72cc653545624244421aa5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:42:10.306276   68676 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key
	I0927 01:42:10.306331   68676 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key
	I0927 01:42:10.306350   68676 certs.go:256] generating profile certs ...
	I0927 01:42:10.306453   68676 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/no-preload-521072/client.key
	I0927 01:42:10.306553   68676 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/no-preload-521072/apiserver.key.735097eb
	I0927 01:42:10.306613   68676 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/no-preload-521072/proxy-client.key
	I0927 01:42:10.306761   68676 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem (1338 bytes)
	W0927 01:42:10.306797   68676 certs.go:480] ignoring /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138_empty.pem, impossibly tiny 0 bytes
	I0927 01:42:10.306808   68676 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem (1671 bytes)
	I0927 01:42:10.306833   68676 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem (1078 bytes)
	I0927 01:42:10.306854   68676 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem (1123 bytes)
	I0927 01:42:10.306878   68676 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem (1675 bytes)
	I0927 01:42:10.306916   68676 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem (1708 bytes)
	I0927 01:42:10.307598   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 01:42:10.344570   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0927 01:42:10.386834   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 01:42:10.432022   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0927 01:42:10.462348   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/no-preload-521072/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0927 01:42:10.490015   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/no-preload-521072/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0927 01:42:10.518144   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/no-preload-521072/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 01:42:10.545290   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/no-preload-521072/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0927 01:42:10.572460   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 01:42:10.597526   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem --> /usr/share/ca-certificates/22138.pem (1338 bytes)
	I0927 01:42:10.622287   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /usr/share/ca-certificates/221382.pem (1708 bytes)
	I0927 01:42:10.646020   68676 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 01:42:10.662972   68676 ssh_runner.go:195] Run: openssl version
	I0927 01:42:10.668844   68676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22138.pem && ln -fs /usr/share/ca-certificates/22138.pem /etc/ssl/certs/22138.pem"
	I0927 01:42:10.680020   68676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22138.pem
	I0927 01:42:10.684620   68676 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 00:32 /usr/share/ca-certificates/22138.pem
	I0927 01:42:10.684678   68676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22138.pem
	I0927 01:42:10.690694   68676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22138.pem /etc/ssl/certs/51391683.0"
	I0927 01:42:10.702115   68676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221382.pem && ln -fs /usr/share/ca-certificates/221382.pem /etc/ssl/certs/221382.pem"
	I0927 01:42:10.713424   68676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221382.pem
	I0927 01:42:10.717918   68676 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 00:32 /usr/share/ca-certificates/221382.pem
	I0927 01:42:10.717971   68676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221382.pem
	I0927 01:42:10.723601   68676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221382.pem /etc/ssl/certs/3ec20f2e.0"
	I0927 01:42:10.734870   68676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 01:42:10.747370   68676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:42:10.752016   68676 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:16 /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:42:10.752072   68676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:42:10.757964   68676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
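The hash-named symlinks created above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-name hashes, which is exactly what the logged `openssl x509 -hash` calls print; for example:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints the hash, e.g. b5213941
    ls -l /etc/ssl/certs/b5213941.0                                           # symlink pointing back at minikubeCA.pem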
	I0927 01:42:10.769560   68676 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 01:42:10.774457   68676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0927 01:42:10.780719   68676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0927 01:42:10.786653   68676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0927 01:42:10.792671   68676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0927 01:42:10.798674   68676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0927 01:42:10.804910   68676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
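Each `-checkend 86400` call above asks whether the certificate stays valid for at least another 24 hours; run by hand, the exit status carries the answer:

    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "valid for >= 24h" \
      || echo "expires within 24h"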
	I0927 01:42:10.811007   68676 kubeadm.go:392] StartCluster: {Name:no-preload-521072 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-521072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 01:42:10.811114   68676 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0927 01:42:10.811178   68676 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 01:42:10.851017   68676 cri.go:89] found id: ""
	I0927 01:42:10.851084   68676 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0927 01:42:10.864997   68676 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0927 01:42:10.865016   68676 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0927 01:42:10.865062   68676 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0927 01:42:10.877088   68676 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0927 01:42:10.878133   68676 kubeconfig.go:125] found "no-preload-521072" server: "https://192.168.50.246:8443"
	I0927 01:42:10.880637   68676 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0927 01:42:10.893554   68676 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.246
	I0927 01:42:10.893578   68676 kubeadm.go:1160] stopping kube-system containers ...
	I0927 01:42:10.893592   68676 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0927 01:42:10.893629   68676 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 01:42:10.935734   68676 cri.go:89] found id: ""
	I0927 01:42:10.935794   68676 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0927 01:42:10.954141   68676 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 01:42:10.965345   68676 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 01:42:10.965363   68676 kubeadm.go:157] found existing configuration files:
	
	I0927 01:42:10.965413   68676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 01:42:10.975561   68676 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 01:42:10.975628   68676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 01:42:10.985747   68676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 01:42:10.995026   68676 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 01:42:10.995089   68676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 01:42:11.006650   68676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 01:42:11.016964   68676 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 01:42:11.017034   68676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 01:42:11.028756   68676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 01:42:11.039002   68676 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 01:42:11.039072   68676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 01:42:11.050382   68676 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 01:42:11.060839   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:42:11.177447   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:42:12.481118   68676 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.303633907s)
	I0927 01:42:12.481149   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:42:12.706344   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:42:12.774938   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
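The restart path runs individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than a full init. After the control-plane and etcd phases, the static pod manifests should be in place (the path matches staticPodPath in the config above):

    sudo ls /etc/kubernetes/manifests
    # expected: etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml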
	I0927 01:42:12.866467   68676 api_server.go:52] waiting for apiserver process to appear ...
	I0927 01:42:12.866552   68676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:13.366860   68676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:13.866951   68676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:13.882411   68676 api_server.go:72] duration metric: took 1.015943274s to wait for apiserver process to appear ...
	I0927 01:42:13.882435   68676 api_server.go:88] waiting for apiserver healthz status ...
	I0927 01:42:13.882457   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:42:13.882963   68676 api_server.go:269] stopped: https://192.168.50.246:8443/healthz: Get "https://192.168.50.246:8443/healthz": dial tcp 192.168.50.246:8443: connect: connection refused
	I0927 01:42:14.382489   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:42:11.543818   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:14.042536   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:12.459771   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:12.959727   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:13.459428   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:13.959255   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:14.460003   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:14.959853   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:15.460237   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:15.959974   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:16.459420   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:16.959321   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:13.527793   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:16.023080   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
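The healthz polling that follows can be reproduced against the same endpoint; a rough manual probe (using -k, since the probe here presents no client certificate):

    curl -k https://192.168.50.246:8443/healthz
    # an anonymous request typically gets 403 until RBAC bootstrap completes, then the per-check list shown below on failure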
	I0927 01:42:17.124839   68676 api_server.go:279] https://192.168.50.246:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0927 01:42:17.124867   68676 api_server.go:103] status: https://192.168.50.246:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0927 01:42:17.124885   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:42:17.174869   68676 api_server.go:279] https://192.168.50.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:42:17.174905   68676 api_server.go:103] status: https://192.168.50.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:42:17.383128   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:42:17.389594   68676 api_server.go:279] https://192.168.50.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:42:17.389629   68676 api_server.go:103] status: https://192.168.50.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:42:17.883197   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:42:17.888706   68676 api_server.go:279] https://192.168.50.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:42:17.888734   68676 api_server.go:103] status: https://192.168.50.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:42:18.382982   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:42:18.387847   68676 api_server.go:279] https://192.168.50.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:42:18.387877   68676 api_server.go:103] status: https://192.168.50.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:42:18.882844   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:42:18.887144   68676 api_server.go:279] https://192.168.50.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:42:18.887178   68676 api_server.go:103] status: https://192.168.50.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:42:19.382711   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:42:19.388007   68676 api_server.go:279] https://192.168.50.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:42:19.388037   68676 api_server.go:103] status: https://192.168.50.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:42:19.882613   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:42:19.886781   68676 api_server.go:279] https://192.168.50.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:42:19.886801   68676 api_server.go:103] status: https://192.168.50.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:42:20.382907   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:42:20.387083   68676 api_server.go:279] https://192.168.50.246:8443/healthz returned 200:
	ok
	I0927 01:42:20.393697   68676 api_server.go:141] control plane version: v1.31.1
	I0927 01:42:20.393725   68676 api_server.go:131] duration metric: took 6.511280572s to wait for apiserver health ...
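	With the process up, the runner switches to the /healthz endpoint, which walks through the usual startup progression visible above: connection refused, then 403 for the anonymous user, then 500 while poststarthooks finish, and finally 200 "ok". A rough Go sketch of such a poll loop is below; the URL and 500ms cadence are taken from the log, and the insecure TLS client is an assumption made only to keep the sketch self-contained, not a statement about how minikube authenticates its probe.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls /healthz until it returns 200, treating 403 and 500
	// responses as "not ready yet", as in the log above.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Assumption: skip certificate verification for brevity.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if resp, err := client.Get(url); err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("%s did not become healthy within %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.50.246:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}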
	I0927 01:42:20.393735   68676 cni.go:84] Creating CNI manager for ""
	I0927 01:42:20.393743   68676 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:42:20.395270   68676 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0927 01:42:16.543525   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:19.041726   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:20.396770   68676 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0927 01:42:20.407891   68676 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0927 01:42:20.427815   68676 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 01:42:20.436940   68676 system_pods.go:59] 8 kube-system pods found
	I0927 01:42:20.436980   68676 system_pods.go:61] "coredns-7c65d6cfc9-7q54t" [f320e945-a1d6-4109-a0cc-5bd4e3c1bfba] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0927 01:42:20.436989   68676 system_pods.go:61] "etcd-no-preload-521072" [6c63ce89-47bf-4d67-b5db-273a046c4b51] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0927 01:42:20.436997   68676 system_pods.go:61] "kube-apiserver-no-preload-521072" [e4804d4b-0532-46c7-8579-a829a6c5254c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0927 01:42:20.437005   68676 system_pods.go:61] "kube-controller-manager-no-preload-521072" [5029e53b-ae24-41fb-aa58-14faf0440adb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0927 01:42:20.437012   68676 system_pods.go:61] "kube-proxy-wkcb8" [ea79339c-b2f0-4cb8-ab57-4f13f689f504] Running
	I0927 01:42:20.437020   68676 system_pods.go:61] "kube-scheduler-no-preload-521072" [b70fd9f0-c131-4c13-b53f-46c650a5dcf8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0927 01:42:20.437032   68676 system_pods.go:61] "metrics-server-6867b74b74-cc9pp" [a840ca52-d2b8-47a5-b379-30504658e0d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 01:42:20.437038   68676 system_pods.go:61] "storage-provisioner" [b4595dc3-c439-4615-95b7-2009476c779c] Running
	I0927 01:42:20.437049   68676 system_pods.go:74] duration metric: took 9.213874ms to wait for pod list to return data ...
	I0927 01:42:20.437057   68676 node_conditions.go:102] verifying NodePressure condition ...
	I0927 01:42:20.440323   68676 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 01:42:20.440345   68676 node_conditions.go:123] node cpu capacity is 2
	I0927 01:42:20.440356   68676 node_conditions.go:105] duration metric: took 3.294768ms to run NodePressure ...
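	The pod listing and the NodePressure capacity lines above correspond to ordinary client-go queries. The sketch below lists kube-system pods and prints the node's CPU and ephemeral-storage capacity; the kubeconfig path and node name are taken from the log, while the program itself is an assumption for illustration only.

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: run on the node, using the kubeconfig path shown in the log.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		// Equivalent of "waiting for kube-system pods to appear".
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("%d kube-system pods found\n", len(pods.Items))

		// Equivalent of the "verifying NodePressure condition" capacity figures.
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "no-preload-521072", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		cpu := node.Status.Capacity[corev1.ResourceCPU]
		storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Println("node cpu capacity:", cpu.String(), "ephemeral storage:", storage.String())
	}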
	I0927 01:42:20.440372   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:42:20.710186   68676 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0927 01:42:20.713940   68676 kubeadm.go:739] kubelet initialised
	I0927 01:42:20.713958   68676 kubeadm.go:740] duration metric: took 3.749241ms waiting for restarted kubelet to initialise ...
	I0927 01:42:20.713965   68676 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:42:20.718807   68676 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-7q54t" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:20.722955   68676 pod_ready.go:98] node "no-preload-521072" hosting pod "coredns-7c65d6cfc9-7q54t" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:20.722976   68676 pod_ready.go:82] duration metric: took 4.147896ms for pod "coredns-7c65d6cfc9-7q54t" in "kube-system" namespace to be "Ready" ...
	E0927 01:42:20.722984   68676 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-521072" hosting pod "coredns-7c65d6cfc9-7q54t" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:20.722991   68676 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:20.727569   68676 pod_ready.go:98] node "no-preload-521072" hosting pod "etcd-no-preload-521072" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:20.727596   68676 pod_ready.go:82] duration metric: took 4.598426ms for pod "etcd-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	E0927 01:42:20.727604   68676 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-521072" hosting pod "etcd-no-preload-521072" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:20.727611   68676 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:20.731845   68676 pod_ready.go:98] node "no-preload-521072" hosting pod "kube-apiserver-no-preload-521072" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:20.731871   68676 pod_ready.go:82] duration metric: took 4.25326ms for pod "kube-apiserver-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	E0927 01:42:20.731881   68676 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-521072" hosting pod "kube-apiserver-no-preload-521072" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:20.731889   68676 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:20.830881   68676 pod_ready.go:98] node "no-preload-521072" hosting pod "kube-controller-manager-no-preload-521072" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:20.830909   68676 pod_ready.go:82] duration metric: took 99.009569ms for pod "kube-controller-manager-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	E0927 01:42:20.830918   68676 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-521072" hosting pod "kube-controller-manager-no-preload-521072" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:20.830923   68676 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-wkcb8" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:21.232434   68676 pod_ready.go:98] node "no-preload-521072" hosting pod "kube-proxy-wkcb8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:21.232463   68676 pod_ready.go:82] duration metric: took 401.530413ms for pod "kube-proxy-wkcb8" in "kube-system" namespace to be "Ready" ...
	E0927 01:42:21.232473   68676 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-521072" hosting pod "kube-proxy-wkcb8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:21.232485   68676 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:21.630791   68676 pod_ready.go:98] node "no-preload-521072" hosting pod "kube-scheduler-no-preload-521072" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:21.630818   68676 pod_ready.go:82] duration metric: took 398.325039ms for pod "kube-scheduler-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	E0927 01:42:21.630829   68676 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-521072" hosting pod "kube-scheduler-no-preload-521072" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:21.630836   68676 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:22.032173   68676 pod_ready.go:98] node "no-preload-521072" hosting pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:22.032200   68676 pod_ready.go:82] duration metric: took 401.353533ms for pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace to be "Ready" ...
	E0927 01:42:22.032208   68676 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-521072" hosting pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:22.032215   68676 pod_ready.go:39] duration metric: took 1.318241972s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
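	Every per-pod wait above is skipped with the same message because the hosting node is not yet Ready, so the check inspects the node condition before the pod condition. Below is a hedged client-go sketch of that logic; the node, namespace, pod name, and kubeconfig path are from the log, and the helper name podReady is invented for illustration.

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether a pod is Ready, but returns an error when the
	// hosting node itself is not Ready, mirroring the "skipping!" lines above.
	func podReady(cs kubernetes.Interface, ns, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), pod.Spec.NodeName, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status != corev1.ConditionTrue {
				return false, fmt.Errorf("node %q not Ready, skipping pod %q", node.Name, name)
			}
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		fmt.Println(podReady(kubernetes.NewForConfigOrDie(cfg), "kube-system", "coredns-7c65d6cfc9-7q54t"))
	}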
	I0927 01:42:22.032233   68676 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0927 01:42:22.046872   68676 ops.go:34] apiserver oom_adj: -16
	I0927 01:42:22.046898   68676 kubeadm.go:597] duration metric: took 11.181875532s to restartPrimaryControlPlane
	I0927 01:42:22.046908   68676 kubeadm.go:394] duration metric: took 11.235909243s to StartCluster
	I0927 01:42:22.046923   68676 settings.go:142] acquiring lock: {Name:mk5dca3ab86dd3a71947d9d84c3d32131258c6f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:42:22.046984   68676 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 01:42:22.048611   68676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/kubeconfig: {Name:mke01ed683bdb96463571316956510763878395f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:42:22.048864   68676 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 01:42:22.048932   68676 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0927 01:42:22.049029   68676 addons.go:69] Setting storage-provisioner=true in profile "no-preload-521072"
	I0927 01:42:22.049050   68676 addons.go:234] Setting addon storage-provisioner=true in "no-preload-521072"
	W0927 01:42:22.049060   68676 addons.go:243] addon storage-provisioner should already be in state true
	I0927 01:42:22.049066   68676 addons.go:69] Setting default-storageclass=true in profile "no-preload-521072"
	I0927 01:42:22.049088   68676 host.go:66] Checking if "no-preload-521072" exists ...
	I0927 01:42:22.049092   68676 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-521072"
	I0927 01:42:22.049096   68676 addons.go:69] Setting metrics-server=true in profile "no-preload-521072"
	I0927 01:42:22.049117   68676 addons.go:234] Setting addon metrics-server=true in "no-preload-521072"
	I0927 01:42:22.049123   68676 config.go:182] Loaded profile config "no-preload-521072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	W0927 01:42:22.049134   68676 addons.go:243] addon metrics-server should already be in state true
	I0927 01:42:22.049167   68676 host.go:66] Checking if "no-preload-521072" exists ...
	I0927 01:42:22.049423   68676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:42:22.049455   68676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:42:22.049478   68676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:42:22.049507   68676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:42:22.049535   68676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:42:22.049555   68676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:42:22.050564   68676 out.go:177] * Verifying Kubernetes components...
	I0927 01:42:22.051717   68676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:42:22.088020   68676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34035
	I0927 01:42:22.088454   68676 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:42:22.088964   68676 main.go:141] libmachine: Using API Version  1
	I0927 01:42:22.088985   68676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:42:22.089333   68676 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:42:22.089793   68676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:42:22.089825   68676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:42:22.091735   68676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40053
	I0927 01:42:22.091853   68676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45581
	I0927 01:42:22.092236   68676 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:42:22.092295   68676 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:42:22.092659   68676 main.go:141] libmachine: Using API Version  1
	I0927 01:42:22.092677   68676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:42:22.092817   68676 main.go:141] libmachine: Using API Version  1
	I0927 01:42:22.092840   68676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:42:22.093170   68676 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:42:22.093344   68676 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:42:22.093387   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetState
	I0927 01:42:22.093922   68676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:42:22.093949   68676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:42:22.097310   68676 addons.go:234] Setting addon default-storageclass=true in "no-preload-521072"
	W0927 01:42:22.097333   68676 addons.go:243] addon default-storageclass should already be in state true
	I0927 01:42:22.097368   68676 host.go:66] Checking if "no-preload-521072" exists ...
	I0927 01:42:22.097705   68676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:42:22.097747   68676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:42:22.110628   68676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34585
	I0927 01:42:22.111053   68676 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:42:22.111604   68676 main.go:141] libmachine: Using API Version  1
	I0927 01:42:22.111629   68676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:42:22.112113   68676 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:42:22.112329   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetState
	I0927 01:42:22.113354   68676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43947
	I0927 01:42:22.114009   68676 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:42:22.114749   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	I0927 01:42:22.115666   68676 main.go:141] libmachine: Using API Version  1
	I0927 01:42:22.115690   68676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:42:22.116105   68676 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:42:22.116374   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetState
	I0927 01:42:22.116862   68676 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0927 01:42:22.118124   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	I0927 01:42:22.118135   68676 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0927 01:42:22.118162   68676 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0927 01:42:22.118180   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:42:22.119866   68676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38775
	I0927 01:42:22.120317   68676 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:42:22.120908   68676 main.go:141] libmachine: Using API Version  1
	I0927 01:42:22.120931   68676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:42:22.121113   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:42:22.121319   68676 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:42:22.121556   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:42:22.121576   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:42:22.122025   68676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:42:22.122051   68676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:42:22.122280   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:42:22.122487   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:42:22.122652   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:42:22.122781   68676 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/no-preload-521072/id_rsa Username:docker}
	I0927 01:42:22.126076   68676 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:42:17.459443   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:17.959426   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:18.460250   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:18.959989   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:19.459981   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:19.959969   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:20.459758   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:20.959440   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:21.460115   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:21.959238   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:18.521751   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:21.020226   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:23.021393   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:22.127430   68676 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 01:42:22.127446   68676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0927 01:42:22.127460   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:42:22.130498   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:42:22.131040   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:42:22.131061   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:42:22.131357   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:42:22.131544   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:42:22.131670   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:42:22.131997   68676 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/no-preload-521072/id_rsa Username:docker}
	I0927 01:42:22.138657   68676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44875
	I0927 01:42:22.138987   68676 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:42:22.139420   68676 main.go:141] libmachine: Using API Version  1
	I0927 01:42:22.139438   68676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:42:22.139824   68676 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:42:22.139998   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetState
	I0927 01:42:22.141454   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	I0927 01:42:22.141664   68676 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0927 01:42:22.141673   68676 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0927 01:42:22.141683   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:42:22.144221   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:42:22.144651   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:42:22.144670   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:42:22.144765   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:42:22.144931   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:42:22.145071   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:42:22.145208   68676 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/no-preload-521072/id_rsa Username:docker}
	I0927 01:42:22.244289   68676 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 01:42:22.261345   68676 node_ready.go:35] waiting up to 6m0s for node "no-preload-521072" to be "Ready" ...
	I0927 01:42:22.365923   68676 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0927 01:42:22.365953   68676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0927 01:42:22.387392   68676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 01:42:22.389353   68676 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0927 01:42:22.389379   68676 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0927 01:42:22.406994   68676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0927 01:42:22.491559   68676 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 01:42:22.491581   68676 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0927 01:42:22.586476   68676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 01:42:23.660676   68676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.273241029s)
	I0927 01:42:23.660733   68676 main.go:141] libmachine: Making call to close driver server
	I0927 01:42:23.660750   68676 main.go:141] libmachine: (no-preload-521072) Calling .Close
	I0927 01:42:23.660732   68676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.253706672s)
	I0927 01:42:23.660831   68676 main.go:141] libmachine: Making call to close driver server
	I0927 01:42:23.660841   68676 main.go:141] libmachine: (no-preload-521072) Calling .Close
	I0927 01:42:23.660851   68676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.074315804s)
	I0927 01:42:23.661081   68676 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:42:23.661098   68676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:42:23.661109   68676 main.go:141] libmachine: Making call to close driver server
	I0927 01:42:23.661108   68676 main.go:141] libmachine: Making call to close driver server
	I0927 01:42:23.661118   68676 main.go:141] libmachine: (no-preload-521072) Calling .Close
	I0927 01:42:23.661153   68676 main.go:141] libmachine: (no-preload-521072) DBG | Closing plugin on server side
	I0927 01:42:23.661205   68676 main.go:141] libmachine: (no-preload-521072) DBG | Closing plugin on server side
	I0927 01:42:23.661161   68676 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:42:23.661223   68676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:42:23.661230   68676 main.go:141] libmachine: Making call to close driver server
	I0927 01:42:23.661238   68676 main.go:141] libmachine: (no-preload-521072) Calling .Close
	I0927 01:42:23.661125   68676 main.go:141] libmachine: (no-preload-521072) Calling .Close
	I0927 01:42:23.661607   68676 main.go:141] libmachine: (no-preload-521072) DBG | Closing plugin on server side
	I0927 01:42:23.661608   68676 main.go:141] libmachine: (no-preload-521072) DBG | Closing plugin on server side
	I0927 01:42:23.661621   68676 main.go:141] libmachine: (no-preload-521072) DBG | Closing plugin on server side
	I0927 01:42:23.661631   68676 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:42:23.661632   68676 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:42:23.661637   68676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:42:23.661641   68676 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:42:23.661645   68676 main.go:141] libmachine: Making call to close driver server
	I0927 01:42:23.661649   68676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:42:23.661650   68676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:42:23.661653   68676 main.go:141] libmachine: (no-preload-521072) Calling .Close
	I0927 01:42:23.661852   68676 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:42:23.661866   68676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:42:23.661874   68676 addons.go:475] Verifying addon metrics-server=true in "no-preload-521072"
	I0927 01:42:23.661917   68676 main.go:141] libmachine: (no-preload-521072) DBG | Closing plugin on server side
	I0927 01:42:23.668484   68676 main.go:141] libmachine: Making call to close driver server
	I0927 01:42:23.668499   68676 main.go:141] libmachine: (no-preload-521072) Calling .Close
	I0927 01:42:23.668711   68676 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:42:23.668726   68676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:42:23.668743   68676 main.go:141] libmachine: (no-preload-521072) DBG | Closing plugin on server side
	I0927 01:42:23.670758   68676 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0927 01:42:23.672072   68676 addons.go:510] duration metric: took 1.62313879s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
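	The lines above show the harness applying the storage-provisioner, storageclass and metrics-server manifests with the bundled kubectl and then reporting the enabled addons. A minimal way to confirm the same objects landed, assuming the usual minikube context name for this profile and the addon's standard object names (assumptions, not taken from the log):

	    kubectl --context no-preload-521072 -n kube-system get deploy metrics-server
	    kubectl --context no-preload-521072 get apiservice v1beta1.metrics.k8s.io
	    kubectl --context no-preload-521072 get storageclass

	If the metrics APIService never reports Available, the metrics-server readiness waits seen later in this log are the expected downstream symptom.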
	I0927 01:42:24.265426   68676 node_ready.go:53] node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:21.042193   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:23.043831   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:25.546335   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:22.460161   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:22.959177   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:23.459481   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:23.959221   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:23.959322   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:24.004970   69333 cri.go:89] found id: ""
	I0927 01:42:24.004999   69333 logs.go:276] 0 containers: []
	W0927 01:42:24.005010   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:24.005017   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:24.005076   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:24.041880   69333 cri.go:89] found id: ""
	I0927 01:42:24.041908   69333 logs.go:276] 0 containers: []
	W0927 01:42:24.041919   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:24.041926   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:24.041991   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:24.082295   69333 cri.go:89] found id: ""
	I0927 01:42:24.082318   69333 logs.go:276] 0 containers: []
	W0927 01:42:24.082325   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:24.082331   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:24.082385   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:24.119663   69333 cri.go:89] found id: ""
	I0927 01:42:24.119692   69333 logs.go:276] 0 containers: []
	W0927 01:42:24.119707   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:24.119714   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:24.119771   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:24.163893   69333 cri.go:89] found id: ""
	I0927 01:42:24.163920   69333 logs.go:276] 0 containers: []
	W0927 01:42:24.163932   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:24.163940   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:24.163999   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:24.200277   69333 cri.go:89] found id: ""
	I0927 01:42:24.200299   69333 logs.go:276] 0 containers: []
	W0927 01:42:24.200307   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:24.200312   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:24.200365   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:24.235039   69333 cri.go:89] found id: ""
	I0927 01:42:24.235059   69333 logs.go:276] 0 containers: []
	W0927 01:42:24.235066   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:24.235072   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:24.235132   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:24.275160   69333 cri.go:89] found id: ""
	I0927 01:42:24.275181   69333 logs.go:276] 0 containers: []
	W0927 01:42:24.275188   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:24.275196   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:24.275206   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:24.327432   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:24.327465   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:24.341113   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:24.341139   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:24.473741   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:24.473764   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:24.473779   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:24.545888   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:24.545923   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
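	The repeating cycle above is minikube's diagnostic pass when no control-plane containers exist yet: it pgreps for a kube-apiserver process, asks the CRI runtime (via crictl) for each expected component by name, and then falls back to gathering kubelet, dmesg, CRI-O and container-status output. The same checks can be run by hand from a shell on the node; a rough sketch, assuming SSH access through minikube and using <profile> as a placeholder for the affected profile name:

	    minikube ssh -p <profile>                        # open a shell on the node
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'     # is any apiserver process running at all?
	    sudo crictl ps -a --name kube-apiserver          # any apiserver container, in any state?
	    sudo journalctl -u kubelet -n 400                # kubelet's view of why static pods are missing
	    sudo journalctl -u crio -n 400                   # CRI-O side of the same story

	Empty crictl output for every component, as seen here, usually means the containers were never created rather than created and crashing, since -a lists exited containers too.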
	I0927 01:42:27.086673   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:27.100552   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:27.100623   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:27.136182   69333 cri.go:89] found id: ""
	I0927 01:42:27.136207   69333 logs.go:276] 0 containers: []
	W0927 01:42:27.136215   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:27.136221   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:27.136289   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:27.173258   69333 cri.go:89] found id: ""
	I0927 01:42:27.173285   69333 logs.go:276] 0 containers: []
	W0927 01:42:27.173296   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:27.173303   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:27.173373   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:27.210481   69333 cri.go:89] found id: ""
	I0927 01:42:27.210514   69333 logs.go:276] 0 containers: []
	W0927 01:42:27.210526   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:27.210533   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:27.210586   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:27.245168   69333 cri.go:89] found id: ""
	I0927 01:42:27.245192   69333 logs.go:276] 0 containers: []
	W0927 01:42:27.245200   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:27.245206   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:27.245252   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:27.280494   69333 cri.go:89] found id: ""
	I0927 01:42:27.280522   69333 logs.go:276] 0 containers: []
	W0927 01:42:27.280531   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:27.280538   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:27.280596   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:27.314281   69333 cri.go:89] found id: ""
	I0927 01:42:27.314307   69333 logs.go:276] 0 containers: []
	W0927 01:42:27.314316   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:27.314322   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:27.314392   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:25.521413   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:28.019989   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:26.764721   68676 node_ready.go:53] node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:27.765574   68676 node_ready.go:49] node "no-preload-521072" has status "Ready":"True"
	I0927 01:42:27.765597   68676 node_ready.go:38] duration metric: took 5.504217374s for node "no-preload-521072" to be "Ready" ...
	I0927 01:42:27.765609   68676 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:42:27.772263   68676 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7q54t" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:27.777521   68676 pod_ready.go:93] pod "coredns-7c65d6cfc9-7q54t" in "kube-system" namespace has status "Ready":"True"
	I0927 01:42:27.777544   68676 pod_ready.go:82] duration metric: took 5.252259ms for pod "coredns-7c65d6cfc9-7q54t" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:27.777552   68676 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:27.781511   68676 pod_ready.go:93] pod "etcd-no-preload-521072" in "kube-system" namespace has status "Ready":"True"
	I0927 01:42:27.781528   68676 pod_ready.go:82] duration metric: took 3.970559ms for pod "etcd-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:27.781535   68676 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:27.785556   68676 pod_ready.go:93] pod "kube-apiserver-no-preload-521072" in "kube-system" namespace has status "Ready":"True"
	I0927 01:42:27.785572   68676 pod_ready.go:82] duration metric: took 4.032023ms for pod "kube-apiserver-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:27.785579   68676 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:29.792899   68676 pod_ready.go:103] pod "kube-controller-manager-no-preload-521072" in "kube-system" namespace has status "Ready":"False"
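	Once the node reports Ready, the harness waits for each system-critical pod in turn (coredns, etcd, kube-apiserver, kube-controller-manager, and later kube-proxy, kube-scheduler and metrics-server, per the label list logged above). The same wait can be expressed directly with kubectl; a small sketch, assuming the profile's context name as used elsewhere in this report:

	    kubectl --context no-preload-521072 -n kube-system get pods
	    kubectl --context no-preload-521072 -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=6m

	The 6m timeout mirrors the "waiting up to 6m0s" lines in the log.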
	I0927 01:42:28.041166   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:30.041766   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:27.350838   69333 cri.go:89] found id: ""
	I0927 01:42:27.350861   69333 logs.go:276] 0 containers: []
	W0927 01:42:27.350869   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:27.350874   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:27.350921   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:27.390146   69333 cri.go:89] found id: ""
	I0927 01:42:27.390175   69333 logs.go:276] 0 containers: []
	W0927 01:42:27.390186   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:27.390196   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:27.390206   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:27.446727   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:27.446756   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:27.461337   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:27.461365   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:27.533818   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
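	Every "describe nodes" attempt here fails with a refused connection to localhost:8443, the apiserver endpoint named in the node's kubeconfig, which is consistent with the empty kube-apiserver container listings above. Using the bundled kubectl path that appears in the log, the same refusal can be confirmed directly on the node:

	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get nodes
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig cluster-info

	Both should keep returning the same "connection to the server localhost:8443 was refused" error until an apiserver container actually comes up.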
	I0927 01:42:27.533839   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:27.533874   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:27.614325   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:27.614357   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:30.161303   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:30.179521   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:30.179590   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:30.221738   69333 cri.go:89] found id: ""
	I0927 01:42:30.221764   69333 logs.go:276] 0 containers: []
	W0927 01:42:30.221772   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:30.221778   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:30.221841   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:30.258316   69333 cri.go:89] found id: ""
	I0927 01:42:30.258349   69333 logs.go:276] 0 containers: []
	W0927 01:42:30.258359   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:30.258369   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:30.258427   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:30.297079   69333 cri.go:89] found id: ""
	I0927 01:42:30.297102   69333 logs.go:276] 0 containers: []
	W0927 01:42:30.297109   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:30.297114   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:30.297159   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:30.337969   69333 cri.go:89] found id: ""
	I0927 01:42:30.337995   69333 logs.go:276] 0 containers: []
	W0927 01:42:30.338007   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:30.338014   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:30.338075   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:30.375946   69333 cri.go:89] found id: ""
	I0927 01:42:30.375975   69333 logs.go:276] 0 containers: []
	W0927 01:42:30.375986   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:30.375993   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:30.376054   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:30.411673   69333 cri.go:89] found id: ""
	I0927 01:42:30.411700   69333 logs.go:276] 0 containers: []
	W0927 01:42:30.411710   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:30.411718   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:30.411765   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:30.447784   69333 cri.go:89] found id: ""
	I0927 01:42:30.447812   69333 logs.go:276] 0 containers: []
	W0927 01:42:30.447822   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:30.447830   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:30.447890   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:30.483164   69333 cri.go:89] found id: ""
	I0927 01:42:30.483191   69333 logs.go:276] 0 containers: []
	W0927 01:42:30.483202   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:30.483213   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:30.483229   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:30.533490   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:30.533522   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:30.547688   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:30.547722   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:30.626696   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:30.626720   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:30.626733   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:30.708767   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:30.708809   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:30.020786   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:32.021243   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:32.292370   68676 pod_ready.go:103] pod "kube-controller-manager-no-preload-521072" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:32.791420   68676 pod_ready.go:93] pod "kube-controller-manager-no-preload-521072" in "kube-system" namespace has status "Ready":"True"
	I0927 01:42:32.791444   68676 pod_ready.go:82] duration metric: took 5.00585892s for pod "kube-controller-manager-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:32.791454   68676 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wkcb8" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:32.796509   68676 pod_ready.go:93] pod "kube-proxy-wkcb8" in "kube-system" namespace has status "Ready":"True"
	I0927 01:42:32.796528   68676 pod_ready.go:82] duration metric: took 5.067798ms for pod "kube-proxy-wkcb8" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:32.796536   68676 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:32.801041   68676 pod_ready.go:93] pod "kube-scheduler-no-preload-521072" in "kube-system" namespace has status "Ready":"True"
	I0927 01:42:32.801066   68676 pod_ready.go:82] duration metric: took 4.523416ms for pod "kube-scheduler-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:32.801087   68676 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:34.807359   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:32.042216   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:34.541390   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
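	Several profiles in this run spend the whole window with a metrics-server pod stuck at Ready "False"; the log above only ever records that state for these pods. A short diagnosis sketch, assuming the addon's usual k8s-app=metrics-server label and deployment name (assumptions, not taken from the log), with <profile> standing in for the affected profile's context:

	    kubectl --context <profile> -n kube-system get pods -l k8s-app=metrics-server
	    kubectl --context <profile> -n kube-system describe pod -l k8s-app=metrics-server   # readiness probe events
	    kubectl --context <profile> -n kube-system logs deploy/metrics-server
	    kubectl --context <profile> top nodes    # only succeeds once the metrics APIService is serving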
	I0927 01:42:33.250034   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:33.263733   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:33.263805   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:33.298038   69333 cri.go:89] found id: ""
	I0927 01:42:33.298063   69333 logs.go:276] 0 containers: []
	W0927 01:42:33.298071   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:33.298077   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:33.298139   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:33.338027   69333 cri.go:89] found id: ""
	I0927 01:42:33.338050   69333 logs.go:276] 0 containers: []
	W0927 01:42:33.338058   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:33.338064   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:33.338118   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:33.376470   69333 cri.go:89] found id: ""
	I0927 01:42:33.376496   69333 logs.go:276] 0 containers: []
	W0927 01:42:33.376504   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:33.376509   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:33.376568   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:33.419831   69333 cri.go:89] found id: ""
	I0927 01:42:33.419859   69333 logs.go:276] 0 containers: []
	W0927 01:42:33.419868   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:33.419874   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:33.419929   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:33.461029   69333 cri.go:89] found id: ""
	I0927 01:42:33.461057   69333 logs.go:276] 0 containers: []
	W0927 01:42:33.461076   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:33.461085   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:33.461158   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:33.499968   69333 cri.go:89] found id: ""
	I0927 01:42:33.499996   69333 logs.go:276] 0 containers: []
	W0927 01:42:33.500007   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:33.500015   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:33.500073   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:33.552601   69333 cri.go:89] found id: ""
	I0927 01:42:33.552625   69333 logs.go:276] 0 containers: []
	W0927 01:42:33.552633   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:33.552640   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:33.552702   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:33.589491   69333 cri.go:89] found id: ""
	I0927 01:42:33.589520   69333 logs.go:276] 0 containers: []
	W0927 01:42:33.589529   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:33.589540   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:33.589554   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:33.643437   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:33.643470   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:33.657819   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:33.657846   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:33.728369   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:33.728393   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:33.728407   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:33.803661   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:33.803691   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:36.343598   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:36.357879   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:36.357937   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:36.398936   69333 cri.go:89] found id: ""
	I0927 01:42:36.398958   69333 logs.go:276] 0 containers: []
	W0927 01:42:36.398966   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:36.398971   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:36.399016   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:36.438897   69333 cri.go:89] found id: ""
	I0927 01:42:36.438921   69333 logs.go:276] 0 containers: []
	W0927 01:42:36.438928   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:36.438935   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:36.438979   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:36.476779   69333 cri.go:89] found id: ""
	I0927 01:42:36.476807   69333 logs.go:276] 0 containers: []
	W0927 01:42:36.476817   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:36.476824   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:36.476882   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:36.514216   69333 cri.go:89] found id: ""
	I0927 01:42:36.514238   69333 logs.go:276] 0 containers: []
	W0927 01:42:36.514245   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:36.514251   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:36.514306   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:36.551800   69333 cri.go:89] found id: ""
	I0927 01:42:36.551827   69333 logs.go:276] 0 containers: []
	W0927 01:42:36.551835   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:36.551841   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:36.551900   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:36.592060   69333 cri.go:89] found id: ""
	I0927 01:42:36.592086   69333 logs.go:276] 0 containers: []
	W0927 01:42:36.592096   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:36.592101   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:36.592172   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:36.633485   69333 cri.go:89] found id: ""
	I0927 01:42:36.633507   69333 logs.go:276] 0 containers: []
	W0927 01:42:36.633514   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:36.633519   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:36.633571   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:36.667288   69333 cri.go:89] found id: ""
	I0927 01:42:36.667355   69333 logs.go:276] 0 containers: []
	W0927 01:42:36.667366   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:36.667377   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:36.667391   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:36.722230   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:36.722263   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:36.735927   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:36.735952   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:36.808852   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:36.808872   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:36.808887   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:36.889259   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:36.889299   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:34.520143   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:36.521254   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:36.808388   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:39.308743   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:36.542085   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:39.042119   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:39.438818   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:39.459082   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:39.459150   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:39.499966   69333 cri.go:89] found id: ""
	I0927 01:42:39.499991   69333 logs.go:276] 0 containers: []
	W0927 01:42:39.499999   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:39.500004   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:39.500050   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:39.540828   69333 cri.go:89] found id: ""
	I0927 01:42:39.540850   69333 logs.go:276] 0 containers: []
	W0927 01:42:39.540857   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:39.540864   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:39.540972   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:39.575841   69333 cri.go:89] found id: ""
	I0927 01:42:39.575868   69333 logs.go:276] 0 containers: []
	W0927 01:42:39.575879   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:39.575886   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:39.575958   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:39.611105   69333 cri.go:89] found id: ""
	I0927 01:42:39.611184   69333 logs.go:276] 0 containers: []
	W0927 01:42:39.611202   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:39.611212   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:39.611268   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:39.644772   69333 cri.go:89] found id: ""
	I0927 01:42:39.644800   69333 logs.go:276] 0 containers: []
	W0927 01:42:39.644808   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:39.644813   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:39.644868   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:39.679875   69333 cri.go:89] found id: ""
	I0927 01:42:39.679901   69333 logs.go:276] 0 containers: []
	W0927 01:42:39.679912   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:39.679919   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:39.679987   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:39.716410   69333 cri.go:89] found id: ""
	I0927 01:42:39.716440   69333 logs.go:276] 0 containers: []
	W0927 01:42:39.716450   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:39.716457   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:39.716525   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:39.750391   69333 cri.go:89] found id: ""
	I0927 01:42:39.750418   69333 logs.go:276] 0 containers: []
	W0927 01:42:39.750428   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:39.750439   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:39.750455   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:39.822365   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:39.822401   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:39.822416   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:39.905982   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:39.906017   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:39.952310   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:39.952339   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:40.000523   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:40.000554   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:39.021945   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:41.519787   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:41.807532   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:44.307548   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:41.042260   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:43.042762   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:45.542112   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:42.514379   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:42.528312   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:42.528377   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:42.562427   69333 cri.go:89] found id: ""
	I0927 01:42:42.562455   69333 logs.go:276] 0 containers: []
	W0927 01:42:42.562463   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:42.562469   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:42.562526   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:42.599969   69333 cri.go:89] found id: ""
	I0927 01:42:42.599993   69333 logs.go:276] 0 containers: []
	W0927 01:42:42.600002   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:42.600007   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:42.600053   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:42.636338   69333 cri.go:89] found id: ""
	I0927 01:42:42.636364   69333 logs.go:276] 0 containers: []
	W0927 01:42:42.636371   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:42.636376   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:42.636431   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:42.670781   69333 cri.go:89] found id: ""
	I0927 01:42:42.670809   69333 logs.go:276] 0 containers: []
	W0927 01:42:42.670818   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:42.670823   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:42.670880   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:42.707334   69333 cri.go:89] found id: ""
	I0927 01:42:42.707364   69333 logs.go:276] 0 containers: []
	W0927 01:42:42.707375   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:42.707431   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:42.707503   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:42.743063   69333 cri.go:89] found id: ""
	I0927 01:42:42.743092   69333 logs.go:276] 0 containers: []
	W0927 01:42:42.743103   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:42.743139   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:42.743192   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:42.778593   69333 cri.go:89] found id: ""
	I0927 01:42:42.778617   69333 logs.go:276] 0 containers: []
	W0927 01:42:42.778628   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:42.778634   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:42.778700   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:42.814261   69333 cri.go:89] found id: ""
	I0927 01:42:42.814286   69333 logs.go:276] 0 containers: []
	W0927 01:42:42.814293   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:42.814300   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:42.814310   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:42.863982   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:42.864011   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:42.877151   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:42.877175   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:42.959233   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:42.959251   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:42.959262   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:43.038773   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:43.038805   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:45.581272   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:45.596103   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:45.596167   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:45.639507   69333 cri.go:89] found id: ""
	I0927 01:42:45.639531   69333 logs.go:276] 0 containers: []
	W0927 01:42:45.639539   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:45.639545   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:45.639611   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:45.678455   69333 cri.go:89] found id: ""
	I0927 01:42:45.678482   69333 logs.go:276] 0 containers: []
	W0927 01:42:45.678489   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:45.678495   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:45.678539   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:45.722094   69333 cri.go:89] found id: ""
	I0927 01:42:45.722123   69333 logs.go:276] 0 containers: []
	W0927 01:42:45.722135   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:45.722142   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:45.722211   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:45.758091   69333 cri.go:89] found id: ""
	I0927 01:42:45.758118   69333 logs.go:276] 0 containers: []
	W0927 01:42:45.758127   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:45.758133   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:45.758183   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:45.792976   69333 cri.go:89] found id: ""
	I0927 01:42:45.793010   69333 logs.go:276] 0 containers: []
	W0927 01:42:45.793021   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:45.793028   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:45.793089   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:45.830235   69333 cri.go:89] found id: ""
	I0927 01:42:45.830262   69333 logs.go:276] 0 containers: []
	W0927 01:42:45.830273   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:45.830280   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:45.830324   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:45.865896   69333 cri.go:89] found id: ""
	I0927 01:42:45.865928   69333 logs.go:276] 0 containers: []
	W0927 01:42:45.865938   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:45.865946   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:45.866000   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:45.900058   69333 cri.go:89] found id: ""
	I0927 01:42:45.900088   69333 logs.go:276] 0 containers: []
	W0927 01:42:45.900099   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:45.900108   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:45.900119   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:45.972986   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:45.973015   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:45.973030   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:46.048703   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:46.048732   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:46.087483   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:46.087515   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:46.136833   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:46.136866   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:43.520998   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:45.522532   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:48.020912   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:46.307637   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:48.808963   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:48.041757   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:50.042259   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:48.650738   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:48.665847   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:48.665930   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:48.704304   69333 cri.go:89] found id: ""
	I0927 01:42:48.704328   69333 logs.go:276] 0 containers: []
	W0927 01:42:48.704337   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:48.704342   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:48.704402   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:48.742469   69333 cri.go:89] found id: ""
	I0927 01:42:48.742499   69333 logs.go:276] 0 containers: []
	W0927 01:42:48.742510   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:48.742517   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:48.742579   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:48.782154   69333 cri.go:89] found id: ""
	I0927 01:42:48.782183   69333 logs.go:276] 0 containers: []
	W0927 01:42:48.782194   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:48.782201   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:48.782261   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:48.821686   69333 cri.go:89] found id: ""
	I0927 01:42:48.821709   69333 logs.go:276] 0 containers: []
	W0927 01:42:48.821717   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:48.821723   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:48.821781   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:48.867072   69333 cri.go:89] found id: ""
	I0927 01:42:48.867099   69333 logs.go:276] 0 containers: []
	W0927 01:42:48.867109   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:48.867123   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:48.867191   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:48.908215   69333 cri.go:89] found id: ""
	I0927 01:42:48.908241   69333 logs.go:276] 0 containers: []
	W0927 01:42:48.908249   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:48.908255   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:48.908312   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:48.945260   69333 cri.go:89] found id: ""
	I0927 01:42:48.945291   69333 logs.go:276] 0 containers: []
	W0927 01:42:48.945303   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:48.945310   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:48.945375   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:48.983285   69333 cri.go:89] found id: ""
	I0927 01:42:48.983325   69333 logs.go:276] 0 containers: []
	W0927 01:42:48.983333   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:48.983343   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:48.983354   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:49.039437   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:49.039472   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:49.053546   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:49.053571   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:49.129264   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:49.129286   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:49.129299   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:49.216967   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:49.216999   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:51.758143   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:51.771417   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:51.771485   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:51.806120   69333 cri.go:89] found id: ""
	I0927 01:42:51.806144   69333 logs.go:276] 0 containers: []
	W0927 01:42:51.806154   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:51.806161   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:51.806219   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:51.840301   69333 cri.go:89] found id: ""
	I0927 01:42:51.840330   69333 logs.go:276] 0 containers: []
	W0927 01:42:51.840340   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:51.840348   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:51.840410   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:51.874908   69333 cri.go:89] found id: ""
	I0927 01:42:51.874934   69333 logs.go:276] 0 containers: []
	W0927 01:42:51.874944   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:51.874952   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:51.875018   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:51.910960   69333 cri.go:89] found id: ""
	I0927 01:42:51.910988   69333 logs.go:276] 0 containers: []
	W0927 01:42:51.910999   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:51.911006   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:51.911064   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:51.945206   69333 cri.go:89] found id: ""
	I0927 01:42:51.945228   69333 logs.go:276] 0 containers: []
	W0927 01:42:51.945236   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:51.945241   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:51.945289   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:51.979262   69333 cri.go:89] found id: ""
	I0927 01:42:51.979296   69333 logs.go:276] 0 containers: []
	W0927 01:42:51.979322   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:51.979328   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:51.979384   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:52.013407   69333 cri.go:89] found id: ""
	I0927 01:42:52.013438   69333 logs.go:276] 0 containers: []
	W0927 01:42:52.013449   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:52.013456   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:52.013510   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:52.048928   69333 cri.go:89] found id: ""
	I0927 01:42:52.048951   69333 logs.go:276] 0 containers: []
	W0927 01:42:52.048961   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:52.048970   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:52.048984   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:52.101043   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:52.101083   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:52.115903   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:52.115938   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:52.197147   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:52.197168   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:52.197184   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:52.276352   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:52.276393   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:50.021730   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:52.520362   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:51.306847   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:53.307714   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:52.042729   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:54.544118   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:54.819649   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:54.832262   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:54.832344   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:54.867495   69333 cri.go:89] found id: ""
	I0927 01:42:54.867523   69333 logs.go:276] 0 containers: []
	W0927 01:42:54.867533   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:54.867539   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:54.867585   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:54.899705   69333 cri.go:89] found id: ""
	I0927 01:42:54.899732   69333 logs.go:276] 0 containers: []
	W0927 01:42:54.899742   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:54.899749   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:54.899817   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:54.939216   69333 cri.go:89] found id: ""
	I0927 01:42:54.939235   69333 logs.go:276] 0 containers: []
	W0927 01:42:54.939244   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:54.939249   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:54.939293   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:54.976603   69333 cri.go:89] found id: ""
	I0927 01:42:54.976632   69333 logs.go:276] 0 containers: []
	W0927 01:42:54.976643   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:54.976651   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:54.976718   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:55.011617   69333 cri.go:89] found id: ""
	I0927 01:42:55.011649   69333 logs.go:276] 0 containers: []
	W0927 01:42:55.011660   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:55.011667   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:55.011729   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:55.048836   69333 cri.go:89] found id: ""
	I0927 01:42:55.048861   69333 logs.go:276] 0 containers: []
	W0927 01:42:55.048869   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:55.048885   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:55.048955   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:55.085105   69333 cri.go:89] found id: ""
	I0927 01:42:55.085133   69333 logs.go:276] 0 containers: []
	W0927 01:42:55.085144   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:55.085151   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:55.085205   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:55.122536   69333 cri.go:89] found id: ""
	I0927 01:42:55.122564   69333 logs.go:276] 0 containers: []
	W0927 01:42:55.122575   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:55.122585   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:55.122600   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:55.197191   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:55.197216   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:55.197230   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:55.275914   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:55.275950   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:55.315043   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:55.315071   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:55.365808   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:55.365846   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:55.025083   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:57.520041   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:55.807377   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:57.807419   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:59.808202   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:57.042511   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:59.541628   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:57.880934   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:57.894276   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:57.894337   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:57.933299   69333 cri.go:89] found id: ""
	I0927 01:42:57.933326   69333 logs.go:276] 0 containers: []
	W0927 01:42:57.933336   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:57.933343   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:57.933403   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:57.969070   69333 cri.go:89] found id: ""
	I0927 01:42:57.969094   69333 logs.go:276] 0 containers: []
	W0927 01:42:57.969102   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:57.969107   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:57.969151   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:58.009432   69333 cri.go:89] found id: ""
	I0927 01:42:58.009453   69333 logs.go:276] 0 containers: []
	W0927 01:42:58.009462   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:58.009468   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:58.009524   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:58.046507   69333 cri.go:89] found id: ""
	I0927 01:42:58.046526   69333 logs.go:276] 0 containers: []
	W0927 01:42:58.046533   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:58.046539   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:58.046603   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:58.079910   69333 cri.go:89] found id: ""
	I0927 01:42:58.079936   69333 logs.go:276] 0 containers: []
	W0927 01:42:58.079947   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:58.079954   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:58.080015   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:58.115971   69333 cri.go:89] found id: ""
	I0927 01:42:58.115994   69333 logs.go:276] 0 containers: []
	W0927 01:42:58.116001   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:58.116007   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:58.116065   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:58.150512   69333 cri.go:89] found id: ""
	I0927 01:42:58.150536   69333 logs.go:276] 0 containers: []
	W0927 01:42:58.150544   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:58.150549   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:58.150608   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:58.183458   69333 cri.go:89] found id: ""
	I0927 01:42:58.183487   69333 logs.go:276] 0 containers: []
	W0927 01:42:58.183498   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:58.183506   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:58.183520   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:58.234404   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:58.234434   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:58.248387   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:58.248411   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:58.320751   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:58.320772   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:58.320783   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:58.401163   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:58.401212   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:00.943677   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:00.956739   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:00.956815   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:00.991020   69333 cri.go:89] found id: ""
	I0927 01:43:00.991042   69333 logs.go:276] 0 containers: []
	W0927 01:43:00.991051   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:00.991056   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:00.991113   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:01.031686   69333 cri.go:89] found id: ""
	I0927 01:43:01.031711   69333 logs.go:276] 0 containers: []
	W0927 01:43:01.031720   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:01.031726   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:01.031786   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:01.068783   69333 cri.go:89] found id: ""
	I0927 01:43:01.068813   69333 logs.go:276] 0 containers: []
	W0927 01:43:01.068824   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:01.068831   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:01.068890   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:01.108434   69333 cri.go:89] found id: ""
	I0927 01:43:01.108456   69333 logs.go:276] 0 containers: []
	W0927 01:43:01.108464   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:01.108469   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:01.108513   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:01.147574   69333 cri.go:89] found id: ""
	I0927 01:43:01.147596   69333 logs.go:276] 0 containers: []
	W0927 01:43:01.147604   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:01.147610   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:01.147660   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:01.188251   69333 cri.go:89] found id: ""
	I0927 01:43:01.188279   69333 logs.go:276] 0 containers: []
	W0927 01:43:01.188290   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:01.188297   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:01.188359   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:01.224901   69333 cri.go:89] found id: ""
	I0927 01:43:01.224944   69333 logs.go:276] 0 containers: []
	W0927 01:43:01.224964   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:01.224974   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:01.225052   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:01.262701   69333 cri.go:89] found id: ""
	I0927 01:43:01.262728   69333 logs.go:276] 0 containers: []
	W0927 01:43:01.262738   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:01.262749   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:01.262762   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:01.313872   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:01.313900   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:01.327809   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:01.327835   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:01.400864   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:01.400895   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:01.400909   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:01.478012   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:01.478045   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:59.520973   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:01.522457   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:02.308215   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:04.309111   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:01.543151   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:04.043201   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:04.018634   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:04.032732   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:04.032803   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:04.075258   69333 cri.go:89] found id: ""
	I0927 01:43:04.075285   69333 logs.go:276] 0 containers: []
	W0927 01:43:04.075293   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:04.075299   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:04.075381   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:04.108738   69333 cri.go:89] found id: ""
	I0927 01:43:04.108764   69333 logs.go:276] 0 containers: []
	W0927 01:43:04.108773   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:04.108779   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:04.108835   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:04.142115   69333 cri.go:89] found id: ""
	I0927 01:43:04.142145   69333 logs.go:276] 0 containers: []
	W0927 01:43:04.142155   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:04.142174   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:04.142249   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:04.184606   69333 cri.go:89] found id: ""
	I0927 01:43:04.184626   69333 logs.go:276] 0 containers: []
	W0927 01:43:04.184634   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:04.184639   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:04.184684   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:04.218391   69333 cri.go:89] found id: ""
	I0927 01:43:04.218420   69333 logs.go:276] 0 containers: []
	W0927 01:43:04.218428   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:04.218434   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:04.218482   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:04.253796   69333 cri.go:89] found id: ""
	I0927 01:43:04.253816   69333 logs.go:276] 0 containers: []
	W0927 01:43:04.253824   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:04.253829   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:04.253884   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:04.289147   69333 cri.go:89] found id: ""
	I0927 01:43:04.289170   69333 logs.go:276] 0 containers: []
	W0927 01:43:04.289179   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:04.289184   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:04.289245   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:04.329000   69333 cri.go:89] found id: ""
	I0927 01:43:04.329026   69333 logs.go:276] 0 containers: []
	W0927 01:43:04.329034   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:04.329042   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:04.329053   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:04.424255   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:04.424290   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:04.470746   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:04.470775   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:04.524208   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:04.524237   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:04.538338   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:04.538365   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:04.608713   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:07.109492   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:07.124253   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:07.124332   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:07.160443   69333 cri.go:89] found id: ""
	I0927 01:43:07.160470   69333 logs.go:276] 0 containers: []
	W0927 01:43:07.160481   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:07.160488   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:07.160554   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:07.195492   69333 cri.go:89] found id: ""
	I0927 01:43:07.195515   69333 logs.go:276] 0 containers: []
	W0927 01:43:07.195522   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:07.195527   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:07.195572   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:07.237678   69333 cri.go:89] found id: ""
	I0927 01:43:07.237708   69333 logs.go:276] 0 containers: []
	W0927 01:43:07.237718   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:07.237725   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:07.237792   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:07.274239   69333 cri.go:89] found id: ""
	I0927 01:43:07.274268   69333 logs.go:276] 0 containers: []
	W0927 01:43:07.274279   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:07.274286   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:07.274352   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:07.315099   69333 cri.go:89] found id: ""
	I0927 01:43:07.315124   69333 logs.go:276] 0 containers: []
	W0927 01:43:07.315131   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:07.315137   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:07.315190   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:04.020911   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:06.520371   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:06.807124   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:09.306568   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:06.543210   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:09.042166   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:07.356301   69333 cri.go:89] found id: ""
	I0927 01:43:07.356328   69333 logs.go:276] 0 containers: []
	W0927 01:43:07.356339   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:07.356347   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:07.356416   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:07.392204   69333 cri.go:89] found id: ""
	I0927 01:43:07.392232   69333 logs.go:276] 0 containers: []
	W0927 01:43:07.392242   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:07.392255   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:07.392312   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:07.428924   69333 cri.go:89] found id: ""
	I0927 01:43:07.428952   69333 logs.go:276] 0 containers: []
	W0927 01:43:07.428961   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:07.428969   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:07.428981   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:07.502507   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:07.502531   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:07.502545   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:07.584169   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:07.584201   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:07.623413   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:07.623446   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:07.675444   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:07.675480   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:10.190164   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:10.205315   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:10.205395   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:10.244030   69333 cri.go:89] found id: ""
	I0927 01:43:10.244053   69333 logs.go:276] 0 containers: []
	W0927 01:43:10.244063   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:10.244071   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:10.244134   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:10.280081   69333 cri.go:89] found id: ""
	I0927 01:43:10.280108   69333 logs.go:276] 0 containers: []
	W0927 01:43:10.280118   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:10.280125   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:10.280184   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:10.315428   69333 cri.go:89] found id: ""
	I0927 01:43:10.315454   69333 logs.go:276] 0 containers: []
	W0927 01:43:10.315464   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:10.315471   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:10.315531   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:10.352536   69333 cri.go:89] found id: ""
	I0927 01:43:10.352560   69333 logs.go:276] 0 containers: []
	W0927 01:43:10.352567   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:10.352574   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:10.352634   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:10.388846   69333 cri.go:89] found id: ""
	I0927 01:43:10.388870   69333 logs.go:276] 0 containers: []
	W0927 01:43:10.388880   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:10.388887   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:10.388951   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:10.427746   69333 cri.go:89] found id: ""
	I0927 01:43:10.427771   69333 logs.go:276] 0 containers: []
	W0927 01:43:10.427779   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:10.427784   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:10.427839   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:10.473126   69333 cri.go:89] found id: ""
	I0927 01:43:10.473155   69333 logs.go:276] 0 containers: []
	W0927 01:43:10.473166   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:10.473172   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:10.473234   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:10.511925   69333 cri.go:89] found id: ""
	I0927 01:43:10.511954   69333 logs.go:276] 0 containers: []
	W0927 01:43:10.511962   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:10.511971   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:10.511984   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:10.551428   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:10.551459   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:10.603655   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:10.603691   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:10.617232   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:10.617266   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:10.696559   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:10.696585   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:10.696599   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:09.020784   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:11.521429   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:11.307081   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:13.307876   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:11.043819   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:13.543289   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:13.273888   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:13.288271   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:13.288349   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:13.325796   69333 cri.go:89] found id: ""
	I0927 01:43:13.325823   69333 logs.go:276] 0 containers: []
	W0927 01:43:13.325831   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:13.325837   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:13.325893   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:13.360721   69333 cri.go:89] found id: ""
	I0927 01:43:13.360748   69333 logs.go:276] 0 containers: []
	W0927 01:43:13.360756   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:13.360762   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:13.360821   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:13.399722   69333 cri.go:89] found id: ""
	I0927 01:43:13.399749   69333 logs.go:276] 0 containers: []
	W0927 01:43:13.399756   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:13.399762   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:13.399826   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:13.437161   69333 cri.go:89] found id: ""
	I0927 01:43:13.437187   69333 logs.go:276] 0 containers: []
	W0927 01:43:13.437194   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:13.437200   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:13.437260   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:13.474735   69333 cri.go:89] found id: ""
	I0927 01:43:13.474758   69333 logs.go:276] 0 containers: []
	W0927 01:43:13.474766   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:13.474771   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:13.474822   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:13.528726   69333 cri.go:89] found id: ""
	I0927 01:43:13.528754   69333 logs.go:276] 0 containers: []
	W0927 01:43:13.528764   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:13.528771   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:13.528837   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:13.568617   69333 cri.go:89] found id: ""
	I0927 01:43:13.568642   69333 logs.go:276] 0 containers: []
	W0927 01:43:13.568651   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:13.568658   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:13.568726   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:13.605820   69333 cri.go:89] found id: ""
	I0927 01:43:13.605846   69333 logs.go:276] 0 containers: []
	W0927 01:43:13.605857   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:13.605868   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:13.605883   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:13.682586   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:13.682609   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:13.682624   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:13.764487   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:13.764522   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:13.809248   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:13.809280   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:13.861331   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:13.861371   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:16.376981   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:16.391787   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:16.391842   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:16.432731   69333 cri.go:89] found id: ""
	I0927 01:43:16.432758   69333 logs.go:276] 0 containers: []
	W0927 01:43:16.432767   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:16.432775   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:16.432836   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:16.466769   69333 cri.go:89] found id: ""
	I0927 01:43:16.466798   69333 logs.go:276] 0 containers: []
	W0927 01:43:16.466806   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:16.466812   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:16.466860   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:16.501899   69333 cri.go:89] found id: ""
	I0927 01:43:16.501927   69333 logs.go:276] 0 containers: []
	W0927 01:43:16.501940   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:16.501947   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:16.502000   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:16.537356   69333 cri.go:89] found id: ""
	I0927 01:43:16.537383   69333 logs.go:276] 0 containers: []
	W0927 01:43:16.537393   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:16.537401   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:16.537460   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:16.573910   69333 cri.go:89] found id: ""
	I0927 01:43:16.573937   69333 logs.go:276] 0 containers: []
	W0927 01:43:16.573946   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:16.573951   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:16.574003   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:16.617780   69333 cri.go:89] found id: ""
	I0927 01:43:16.617808   69333 logs.go:276] 0 containers: []
	W0927 01:43:16.617818   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:16.617826   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:16.617884   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:16.653262   69333 cri.go:89] found id: ""
	I0927 01:43:16.653311   69333 logs.go:276] 0 containers: []
	W0927 01:43:16.653323   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:16.653331   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:16.653394   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:16.689861   69333 cri.go:89] found id: ""
	I0927 01:43:16.689889   69333 logs.go:276] 0 containers: []
	W0927 01:43:16.689898   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:16.689909   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:16.689922   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:16.765961   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:16.765986   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:16.766001   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:16.845195   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:16.845227   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:16.889159   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:16.889188   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:16.945523   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:16.945558   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:13.522444   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:16.021202   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:15.808665   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:18.307884   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:16.043071   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:18.541709   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:19.461132   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:19.475148   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:19.475234   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:19.511487   69333 cri.go:89] found id: ""
	I0927 01:43:19.511509   69333 logs.go:276] 0 containers: []
	W0927 01:43:19.511517   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:19.511522   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:19.511580   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:19.545726   69333 cri.go:89] found id: ""
	I0927 01:43:19.545750   69333 logs.go:276] 0 containers: []
	W0927 01:43:19.545756   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:19.545763   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:19.545830   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:19.581287   69333 cri.go:89] found id: ""
	I0927 01:43:19.581310   69333 logs.go:276] 0 containers: []
	W0927 01:43:19.581318   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:19.581323   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:19.581376   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:19.614179   69333 cri.go:89] found id: ""
	I0927 01:43:19.614205   69333 logs.go:276] 0 containers: []
	W0927 01:43:19.614215   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:19.614223   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:19.614286   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:19.648276   69333 cri.go:89] found id: ""
	I0927 01:43:19.648307   69333 logs.go:276] 0 containers: []
	W0927 01:43:19.648318   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:19.648330   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:19.648390   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:19.683051   69333 cri.go:89] found id: ""
	I0927 01:43:19.683083   69333 logs.go:276] 0 containers: []
	W0927 01:43:19.683094   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:19.683114   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:19.683166   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:19.716664   69333 cri.go:89] found id: ""
	I0927 01:43:19.716686   69333 logs.go:276] 0 containers: []
	W0927 01:43:19.716694   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:19.716700   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:19.716745   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:19.758948   69333 cri.go:89] found id: ""
	I0927 01:43:19.758969   69333 logs.go:276] 0 containers: []
	W0927 01:43:19.758976   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:19.758984   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:19.758996   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:19.797751   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:19.797777   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:19.853605   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:19.853635   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:19.867785   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:19.867815   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:19.950323   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:19.950350   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:19.950363   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
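	The pass above is one full diagnostic cycle: with no kube-apiserver process found by pgrep, minikube queries crictl for each expected control-plane container, finds none, and falls back to gathering host logs. A minimal sketch of that per-component lookup, built only from the crictl invocation shown in the log (illustrative, run on the node over SSH; not part of the test itself):
	
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")   # same flags as in the log
	      [ -z "$ids" ] && echo "no container found matching \"$name\""
	    done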
	I0927 01:43:18.520291   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:20.520845   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:22.520886   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:20.808171   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:22.812047   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:21.043160   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:23.546462   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:22.555421   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:22.570013   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:22.570077   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:22.605007   69333 cri.go:89] found id: ""
	I0927 01:43:22.605034   69333 logs.go:276] 0 containers: []
	W0927 01:43:22.605055   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:22.605062   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:22.605122   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:22.640350   69333 cri.go:89] found id: ""
	I0927 01:43:22.640381   69333 logs.go:276] 0 containers: []
	W0927 01:43:22.640391   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:22.640406   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:22.640482   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:22.677464   69333 cri.go:89] found id: ""
	I0927 01:43:22.677489   69333 logs.go:276] 0 containers: []
	W0927 01:43:22.677499   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:22.677506   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:22.677567   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:22.721978   69333 cri.go:89] found id: ""
	I0927 01:43:22.722017   69333 logs.go:276] 0 containers: []
	W0927 01:43:22.722025   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:22.722032   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:22.722093   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:22.757694   69333 cri.go:89] found id: ""
	I0927 01:43:22.757720   69333 logs.go:276] 0 containers: []
	W0927 01:43:22.757729   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:22.757733   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:22.757781   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:22.793872   69333 cri.go:89] found id: ""
	I0927 01:43:22.793903   69333 logs.go:276] 0 containers: []
	W0927 01:43:22.793912   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:22.793920   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:22.793971   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:22.830620   69333 cri.go:89] found id: ""
	I0927 01:43:22.830652   69333 logs.go:276] 0 containers: []
	W0927 01:43:22.830662   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:22.830669   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:22.830732   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:22.867341   69333 cri.go:89] found id: ""
	I0927 01:43:22.867370   69333 logs.go:276] 0 containers: []
	W0927 01:43:22.867381   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:22.867392   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:22.867405   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:22.939592   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:22.939630   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:22.939654   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:23.016407   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:23.016447   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:23.058490   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:23.058522   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:23.109527   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:23.109560   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
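	When every container lookup comes back empty, the gather step falls back to host-level sources, as the cycle above shows: the kubelet and CRI-O journals, filtered dmesg output, kubectl describe nodes, and raw container status. For reference, these are the fallback commands run over SSH, copied from the log:
	
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a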
	I0927 01:43:25.626109   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:25.645254   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:25.645343   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:25.707951   69333 cri.go:89] found id: ""
	I0927 01:43:25.707979   69333 logs.go:276] 0 containers: []
	W0927 01:43:25.707989   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:25.707997   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:25.708057   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:25.771210   69333 cri.go:89] found id: ""
	I0927 01:43:25.771234   69333 logs.go:276] 0 containers: []
	W0927 01:43:25.771242   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:25.771248   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:25.771295   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:25.808206   69333 cri.go:89] found id: ""
	I0927 01:43:25.808235   69333 logs.go:276] 0 containers: []
	W0927 01:43:25.808245   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:25.808252   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:25.808311   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:25.842236   69333 cri.go:89] found id: ""
	I0927 01:43:25.842265   69333 logs.go:276] 0 containers: []
	W0927 01:43:25.842275   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:25.842283   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:25.842328   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:25.879220   69333 cri.go:89] found id: ""
	I0927 01:43:25.879248   69333 logs.go:276] 0 containers: []
	W0927 01:43:25.879256   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:25.879262   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:25.879333   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:25.913491   69333 cri.go:89] found id: ""
	I0927 01:43:25.913522   69333 logs.go:276] 0 containers: []
	W0927 01:43:25.913532   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:25.913537   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:25.913595   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:25.946867   69333 cri.go:89] found id: ""
	I0927 01:43:25.946887   69333 logs.go:276] 0 containers: []
	W0927 01:43:25.946894   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:25.946899   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:25.946943   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:25.983792   69333 cri.go:89] found id: ""
	I0927 01:43:25.983813   69333 logs.go:276] 0 containers: []
	W0927 01:43:25.983820   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:25.983828   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:25.983838   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:26.030169   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:26.030195   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:26.083242   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:26.083276   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:26.097109   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:26.097136   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:26.168675   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:26.168703   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:26.168715   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:24.521923   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:27.020053   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:25.308150   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:27.308307   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:29.308818   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:26.042436   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:28.541895   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:30.542444   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:28.750349   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:28.765211   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:28.765269   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:28.804760   69333 cri.go:89] found id: ""
	I0927 01:43:28.804784   69333 logs.go:276] 0 containers: []
	W0927 01:43:28.804792   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:28.804798   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:28.804865   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:28.842576   69333 cri.go:89] found id: ""
	I0927 01:43:28.842597   69333 logs.go:276] 0 containers: []
	W0927 01:43:28.842604   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:28.842612   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:28.842674   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:28.877498   69333 cri.go:89] found id: ""
	I0927 01:43:28.877529   69333 logs.go:276] 0 containers: []
	W0927 01:43:28.877541   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:28.877553   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:28.877615   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:28.912583   69333 cri.go:89] found id: ""
	I0927 01:43:28.912609   69333 logs.go:276] 0 containers: []
	W0927 01:43:28.912620   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:28.912627   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:28.912689   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:28.947995   69333 cri.go:89] found id: ""
	I0927 01:43:28.948019   69333 logs.go:276] 0 containers: []
	W0927 01:43:28.948030   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:28.948037   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:28.948135   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:28.984445   69333 cri.go:89] found id: ""
	I0927 01:43:28.984470   69333 logs.go:276] 0 containers: []
	W0927 01:43:28.984480   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:28.984488   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:28.984551   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:29.020345   69333 cri.go:89] found id: ""
	I0927 01:43:29.020374   69333 logs.go:276] 0 containers: []
	W0927 01:43:29.020385   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:29.020392   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:29.020451   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:29.056204   69333 cri.go:89] found id: ""
	I0927 01:43:29.056234   69333 logs.go:276] 0 containers: []
	W0927 01:43:29.056245   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:29.056257   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:29.056270   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:29.127936   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:29.127963   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:29.127980   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:29.205933   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:29.205981   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:29.248745   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:29.248777   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:29.302316   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:29.302348   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:31.817566   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:31.831179   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:31.831253   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:31.868480   69333 cri.go:89] found id: ""
	I0927 01:43:31.868507   69333 logs.go:276] 0 containers: []
	W0927 01:43:31.868517   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:31.868528   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:31.868588   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:31.901656   69333 cri.go:89] found id: ""
	I0927 01:43:31.901684   69333 logs.go:276] 0 containers: []
	W0927 01:43:31.901694   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:31.901701   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:31.901761   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:31.937101   69333 cri.go:89] found id: ""
	I0927 01:43:31.937133   69333 logs.go:276] 0 containers: []
	W0927 01:43:31.937145   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:31.937153   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:31.937210   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:31.970724   69333 cri.go:89] found id: ""
	I0927 01:43:31.970750   69333 logs.go:276] 0 containers: []
	W0927 01:43:31.970761   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:31.970768   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:31.970835   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:32.003704   69333 cri.go:89] found id: ""
	I0927 01:43:32.003736   69333 logs.go:276] 0 containers: []
	W0927 01:43:32.003747   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:32.003754   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:32.003813   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:32.038840   69333 cri.go:89] found id: ""
	I0927 01:43:32.038869   69333 logs.go:276] 0 containers: []
	W0927 01:43:32.038879   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:32.038886   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:32.038946   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:32.075506   69333 cri.go:89] found id: ""
	I0927 01:43:32.075534   69333 logs.go:276] 0 containers: []
	W0927 01:43:32.075545   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:32.075552   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:32.075603   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:32.112983   69333 cri.go:89] found id: ""
	I0927 01:43:32.113009   69333 logs.go:276] 0 containers: []
	W0927 01:43:32.113020   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:32.113031   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:32.113046   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:32.168192   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:32.168227   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:32.182702   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:32.182727   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:32.255797   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:32.255824   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:32.255835   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:32.336083   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:32.336115   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:29.022764   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:31.520495   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:31.308851   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:33.807870   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:33.041600   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:35.042193   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:34.880981   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:34.894904   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:34.894976   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:34.933459   69333 cri.go:89] found id: ""
	I0927 01:43:34.933482   69333 logs.go:276] 0 containers: []
	W0927 01:43:34.933490   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:34.933498   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:34.933555   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:34.966893   69333 cri.go:89] found id: ""
	I0927 01:43:34.966917   69333 logs.go:276] 0 containers: []
	W0927 01:43:34.966926   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:34.966933   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:34.966992   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:35.002878   69333 cri.go:89] found id: ""
	I0927 01:43:35.002899   69333 logs.go:276] 0 containers: []
	W0927 01:43:35.002907   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:35.002912   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:35.002970   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:35.039871   69333 cri.go:89] found id: ""
	I0927 01:43:35.039898   69333 logs.go:276] 0 containers: []
	W0927 01:43:35.039908   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:35.039915   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:35.039977   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:35.078229   69333 cri.go:89] found id: ""
	I0927 01:43:35.078255   69333 logs.go:276] 0 containers: []
	W0927 01:43:35.078267   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:35.078274   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:35.078342   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:35.114369   69333 cri.go:89] found id: ""
	I0927 01:43:35.114397   69333 logs.go:276] 0 containers: []
	W0927 01:43:35.114408   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:35.114415   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:35.114475   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:35.148072   69333 cri.go:89] found id: ""
	I0927 01:43:35.148100   69333 logs.go:276] 0 containers: []
	W0927 01:43:35.148110   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:35.148117   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:35.148188   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:35.184020   69333 cri.go:89] found id: ""
	I0927 01:43:35.184051   69333 logs.go:276] 0 containers: []
	W0927 01:43:35.184062   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:35.184073   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:35.184086   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:35.197332   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:35.197355   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:35.273860   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:35.273889   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:35.273904   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:35.354647   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:35.354680   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:35.392622   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:35.392651   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:33.521889   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:36.020067   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:38.021354   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:35.808365   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:38.307251   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:37.541793   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:40.043418   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
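	The interleaved pod_ready lines come from three concurrent runs, each polling a metrics-server pod that never reports Ready. A hedged way to inspect the condition the poller is waiting on (the k8s-app=metrics-server label selector is an assumption here; the pod names printed in the log could be substituted directly):
	
	    kubectl -n kube-system get pods -l k8s-app=metrics-server \
	      -o jsonpath='{range .items[*]}{.metadata.name}{"\tReady="}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'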
	I0927 01:43:37.943024   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:37.957265   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:37.957329   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:37.991294   69333 cri.go:89] found id: ""
	I0927 01:43:37.991348   69333 logs.go:276] 0 containers: []
	W0927 01:43:37.991362   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:37.991368   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:37.991421   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:38.026960   69333 cri.go:89] found id: ""
	I0927 01:43:38.026981   69333 logs.go:276] 0 containers: []
	W0927 01:43:38.026990   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:38.026998   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:38.027057   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:38.063540   69333 cri.go:89] found id: ""
	I0927 01:43:38.063563   69333 logs.go:276] 0 containers: []
	W0927 01:43:38.063571   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:38.063576   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:38.063627   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:38.099554   69333 cri.go:89] found id: ""
	I0927 01:43:38.099602   69333 logs.go:276] 0 containers: []
	W0927 01:43:38.099613   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:38.099621   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:38.099689   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:38.136576   69333 cri.go:89] found id: ""
	I0927 01:43:38.136604   69333 logs.go:276] 0 containers: []
	W0927 01:43:38.136615   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:38.136623   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:38.136676   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:38.170411   69333 cri.go:89] found id: ""
	I0927 01:43:38.170441   69333 logs.go:276] 0 containers: []
	W0927 01:43:38.170452   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:38.170458   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:38.170512   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:38.211902   69333 cri.go:89] found id: ""
	I0927 01:43:38.211934   69333 logs.go:276] 0 containers: []
	W0927 01:43:38.211945   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:38.211951   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:38.212007   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:38.247850   69333 cri.go:89] found id: ""
	I0927 01:43:38.247875   69333 logs.go:276] 0 containers: []
	W0927 01:43:38.247885   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:38.247895   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:38.247913   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:38.329353   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:38.329384   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:38.369114   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:38.369148   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:38.420578   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:38.420613   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:38.434019   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:38.434050   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:38.517921   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
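	Every describe-nodes attempt in these cycles fails the same way: localhost:8443 refuses the connection, consistent with the apiserver container never having been created. An illustrative check, assuming shell access to the node and not part of the test, to confirm that nothing is listening on the apiserver port:
	
	    sudo ss -ltn 'sport = :8443'                     # empty output: no listener on 8443
	    curl -k --connect-timeout 2 https://localhost:8443/healthz \
	      || echo "apiserver unreachable"                # mirrors the 'connection refused' in the log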
	I0927 01:43:41.018609   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:41.032308   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:41.032370   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:41.068491   69333 cri.go:89] found id: ""
	I0927 01:43:41.068518   69333 logs.go:276] 0 containers: []
	W0927 01:43:41.068529   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:41.068536   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:41.068597   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:41.106527   69333 cri.go:89] found id: ""
	I0927 01:43:41.106555   69333 logs.go:276] 0 containers: []
	W0927 01:43:41.106565   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:41.106571   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:41.106634   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:41.142846   69333 cri.go:89] found id: ""
	I0927 01:43:41.142870   69333 logs.go:276] 0 containers: []
	W0927 01:43:41.142880   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:41.142887   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:41.142949   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:41.187499   69333 cri.go:89] found id: ""
	I0927 01:43:41.187525   69333 logs.go:276] 0 containers: []
	W0927 01:43:41.187536   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:41.187544   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:41.187606   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:41.226040   69333 cri.go:89] found id: ""
	I0927 01:43:41.226063   69333 logs.go:276] 0 containers: []
	W0927 01:43:41.226070   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:41.226076   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:41.226153   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:41.261399   69333 cri.go:89] found id: ""
	I0927 01:43:41.261429   69333 logs.go:276] 0 containers: []
	W0927 01:43:41.261440   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:41.261446   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:41.261493   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:41.300709   69333 cri.go:89] found id: ""
	I0927 01:43:41.300730   69333 logs.go:276] 0 containers: []
	W0927 01:43:41.300737   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:41.300743   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:41.300799   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:41.335725   69333 cri.go:89] found id: ""
	I0927 01:43:41.335751   69333 logs.go:276] 0 containers: []
	W0927 01:43:41.335759   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:41.335767   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:41.335776   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:41.387756   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:41.387788   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:41.401717   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:41.401743   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:41.479524   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:41.479548   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:41.479562   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:41.559926   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:41.559959   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:40.520642   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:42.521344   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:40.307769   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:42.807328   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:42.541384   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:44.548925   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:44.107615   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:44.122628   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:44.122690   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:44.163496   69333 cri.go:89] found id: ""
	I0927 01:43:44.163521   69333 logs.go:276] 0 containers: []
	W0927 01:43:44.163529   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:44.163541   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:44.163588   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:44.203488   69333 cri.go:89] found id: ""
	I0927 01:43:44.203519   69333 logs.go:276] 0 containers: []
	W0927 01:43:44.203529   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:44.203535   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:44.203600   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:44.238111   69333 cri.go:89] found id: ""
	I0927 01:43:44.238141   69333 logs.go:276] 0 containers: []
	W0927 01:43:44.238148   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:44.238154   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:44.238221   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:44.272954   69333 cri.go:89] found id: ""
	I0927 01:43:44.272981   69333 logs.go:276] 0 containers: []
	W0927 01:43:44.272991   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:44.272998   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:44.273057   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:44.309700   69333 cri.go:89] found id: ""
	I0927 01:43:44.309719   69333 logs.go:276] 0 containers: []
	W0927 01:43:44.309726   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:44.309731   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:44.309776   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:44.344532   69333 cri.go:89] found id: ""
	I0927 01:43:44.344563   69333 logs.go:276] 0 containers: []
	W0927 01:43:44.344573   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:44.344580   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:44.344641   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:44.379354   69333 cri.go:89] found id: ""
	I0927 01:43:44.379380   69333 logs.go:276] 0 containers: []
	W0927 01:43:44.379391   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:44.379399   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:44.379461   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:44.415297   69333 cri.go:89] found id: ""
	I0927 01:43:44.415344   69333 logs.go:276] 0 containers: []
	W0927 01:43:44.415356   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:44.415366   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:44.415381   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:44.468570   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:44.468602   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:44.483419   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:44.483453   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:44.560718   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:44.560737   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:44.560753   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:44.641130   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:44.641173   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:47.188520   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:47.202189   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:47.202262   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:47.243051   69333 cri.go:89] found id: ""
	I0927 01:43:47.243075   69333 logs.go:276] 0 containers: []
	W0927 01:43:47.243083   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:47.243089   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:47.243155   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:47.280071   69333 cri.go:89] found id: ""
	I0927 01:43:47.280094   69333 logs.go:276] 0 containers: []
	W0927 01:43:47.280104   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:47.280111   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:47.280170   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:47.318458   69333 cri.go:89] found id: ""
	I0927 01:43:47.318482   69333 logs.go:276] 0 containers: []
	W0927 01:43:47.318492   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:47.318499   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:47.318551   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:45.023799   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:47.522945   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:45.307910   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:47.309781   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:49.807329   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:47.041371   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:49.042307   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:47.352891   69333 cri.go:89] found id: ""
	I0927 01:43:47.352916   69333 logs.go:276] 0 containers: []
	W0927 01:43:47.352926   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:47.352933   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:47.352997   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:47.387534   69333 cri.go:89] found id: ""
	I0927 01:43:47.387560   69333 logs.go:276] 0 containers: []
	W0927 01:43:47.387569   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:47.387578   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:47.387646   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:47.422221   69333 cri.go:89] found id: ""
	I0927 01:43:47.422254   69333 logs.go:276] 0 containers: []
	W0927 01:43:47.422265   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:47.422273   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:47.422330   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:47.459624   69333 cri.go:89] found id: ""
	I0927 01:43:47.459645   69333 logs.go:276] 0 containers: []
	W0927 01:43:47.459653   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:47.459659   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:47.459706   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:47.494322   69333 cri.go:89] found id: ""
	I0927 01:43:47.494347   69333 logs.go:276] 0 containers: []
	W0927 01:43:47.494355   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:47.494363   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:47.494375   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:47.508031   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:47.508056   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:47.583920   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:47.583952   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:47.583968   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:47.665533   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:47.665568   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:47.708423   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:47.708455   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:50.261602   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:50.275548   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:50.275607   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:50.311583   69333 cri.go:89] found id: ""
	I0927 01:43:50.311610   69333 logs.go:276] 0 containers: []
	W0927 01:43:50.311620   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:50.311627   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:50.311687   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:50.347686   69333 cri.go:89] found id: ""
	I0927 01:43:50.347709   69333 logs.go:276] 0 containers: []
	W0927 01:43:50.347721   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:50.347729   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:50.347778   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:50.386627   69333 cri.go:89] found id: ""
	I0927 01:43:50.386654   69333 logs.go:276] 0 containers: []
	W0927 01:43:50.386663   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:50.386669   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:50.386719   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:50.421512   69333 cri.go:89] found id: ""
	I0927 01:43:50.421538   69333 logs.go:276] 0 containers: []
	W0927 01:43:50.421547   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:50.421552   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:50.421603   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:50.461849   69333 cri.go:89] found id: ""
	I0927 01:43:50.461872   69333 logs.go:276] 0 containers: []
	W0927 01:43:50.461880   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:50.461885   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:50.461941   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:50.496517   69333 cri.go:89] found id: ""
	I0927 01:43:50.496540   69333 logs.go:276] 0 containers: []
	W0927 01:43:50.496548   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:50.496554   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:50.496600   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:50.532595   69333 cri.go:89] found id: ""
	I0927 01:43:50.532619   69333 logs.go:276] 0 containers: []
	W0927 01:43:50.532630   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:50.532638   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:50.532687   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:50.573213   69333 cri.go:89] found id: ""
	I0927 01:43:50.573241   69333 logs.go:276] 0 containers: []
	W0927 01:43:50.573252   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:50.573262   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:50.573275   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:50.625600   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:50.625633   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:50.639512   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:50.639535   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:50.708393   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:50.708415   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:50.708436   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:50.789812   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:50.789845   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
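
The block above is one full pass of minikube's control-plane probe for the old-k8s-version profile (PID 69333): it looks for a running kube-apiserver process, asks crictl for each expected control-plane container, and then gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status output. Every pass finds no containers, and `kubectl describe nodes` is refused on localhost:8443 because the API server never starts. As a rough sketch (the profile name is a placeholder, and it assumes shell access to the node via `minikube ssh`), the same probe can be repeated by hand with the commands already shown in the log:

    # Re-run minikube's probe by hand from inside the node (sketch; run `minikube ssh -p <profile>` first).
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"
    # List every expected control-plane container, as the cri.go step does per component:
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
        ids=$(sudo crictl ps -a --quiet --name="$c")
        echo "$c: ${ids:-<no containers>}"
    done
    # The describe-nodes step fails with "connection refused" while the apiserver is down:
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    # The kubelet journal is usually where the reason the static pods never came up shows first:
    sudo journalctl -u kubelet -n 400 --no-pager | tail -n 40

When every component returns an empty ID list, the gather steps below are the only diagnostics available, which is why the same cycle repeats until the test's wait deadline expires.
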
	I0927 01:43:50.020837   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:52.021314   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:51.807713   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:54.308918   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:51.541348   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:53.542994   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
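
The interleaved pod_ready.go lines belong to the other StartStop profiles running in parallel (PIDs 68676, 69234, 69534); each is polling its metrics-server pod, whose Ready condition stays False for the whole window. A minimal way to inspect the same condition by hand (the kubectl context is a placeholder; the pod name is taken from the log) would be:

    # Inspect the Ready condition that pod_ready.go keeps polling (context name is a placeholder):
    kubectl --context <profile> -n kube-system get pod metrics-server-6867b74b74-n9nsg \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # And the pod's recent events, to see why readiness never succeeds:
    kubectl --context <profile> -n kube-system describe pod metrics-server-6867b74b74-n9nsg | tail -n 25
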
	I0927 01:43:53.335858   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:53.349369   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:53.349441   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:53.386922   69333 cri.go:89] found id: ""
	I0927 01:43:53.386947   69333 logs.go:276] 0 containers: []
	W0927 01:43:53.386955   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:53.386961   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:53.387007   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:53.423614   69333 cri.go:89] found id: ""
	I0927 01:43:53.423640   69333 logs.go:276] 0 containers: []
	W0927 01:43:53.423651   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:53.423658   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:53.423721   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:53.463245   69333 cri.go:89] found id: ""
	I0927 01:43:53.463265   69333 logs.go:276] 0 containers: []
	W0927 01:43:53.463273   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:53.463280   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:53.463344   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:53.502093   69333 cri.go:89] found id: ""
	I0927 01:43:53.502123   69333 logs.go:276] 0 containers: []
	W0927 01:43:53.502133   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:53.502140   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:53.502196   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:53.538616   69333 cri.go:89] found id: ""
	I0927 01:43:53.538641   69333 logs.go:276] 0 containers: []
	W0927 01:43:53.538652   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:53.538659   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:53.538716   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:53.578580   69333 cri.go:89] found id: ""
	I0927 01:43:53.578609   69333 logs.go:276] 0 containers: []
	W0927 01:43:53.578617   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:53.578623   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:53.578685   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:53.615240   69333 cri.go:89] found id: ""
	I0927 01:43:53.615266   69333 logs.go:276] 0 containers: []
	W0927 01:43:53.615275   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:53.615282   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:53.615356   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:53.650987   69333 cri.go:89] found id: ""
	I0927 01:43:53.651011   69333 logs.go:276] 0 containers: []
	W0927 01:43:53.651019   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:53.651028   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:53.651038   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:53.664817   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:53.664841   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:53.737875   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:53.737894   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:53.737909   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:53.827293   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:53.827345   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:53.867157   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:53.867188   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:56.423435   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:56.437837   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:56.437912   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:56.480328   69333 cri.go:89] found id: ""
	I0927 01:43:56.480349   69333 logs.go:276] 0 containers: []
	W0927 01:43:56.480357   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:56.480364   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:56.480427   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:56.520627   69333 cri.go:89] found id: ""
	I0927 01:43:56.520651   69333 logs.go:276] 0 containers: []
	W0927 01:43:56.520660   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:56.520667   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:56.520726   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:56.561527   69333 cri.go:89] found id: ""
	I0927 01:43:56.561555   69333 logs.go:276] 0 containers: []
	W0927 01:43:56.561567   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:56.561574   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:56.561634   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:56.598751   69333 cri.go:89] found id: ""
	I0927 01:43:56.598783   69333 logs.go:276] 0 containers: []
	W0927 01:43:56.598794   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:56.598801   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:56.598861   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:56.634378   69333 cri.go:89] found id: ""
	I0927 01:43:56.634410   69333 logs.go:276] 0 containers: []
	W0927 01:43:56.634422   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:56.634429   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:56.634489   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:56.669819   69333 cri.go:89] found id: ""
	I0927 01:43:56.669852   69333 logs.go:276] 0 containers: []
	W0927 01:43:56.669863   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:56.669877   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:56.669929   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:56.703715   69333 cri.go:89] found id: ""
	I0927 01:43:56.703740   69333 logs.go:276] 0 containers: []
	W0927 01:43:56.703750   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:56.703757   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:56.703820   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:56.737208   69333 cri.go:89] found id: ""
	I0927 01:43:56.737234   69333 logs.go:276] 0 containers: []
	W0927 01:43:56.737245   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:56.737255   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:56.737269   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:56.749933   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:56.749960   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:56.822331   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:56.822353   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:56.822369   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:56.904415   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:56.904454   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:56.947108   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:56.947136   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:54.521004   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:56.521281   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:56.807935   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:58.808046   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:56.041831   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:58.042496   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:00.542924   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:59.500580   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:59.523807   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:59.523888   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:59.562931   69333 cri.go:89] found id: ""
	I0927 01:43:59.562955   69333 logs.go:276] 0 containers: []
	W0927 01:43:59.562963   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:59.562968   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:59.563013   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:59.599321   69333 cri.go:89] found id: ""
	I0927 01:43:59.599348   69333 logs.go:276] 0 containers: []
	W0927 01:43:59.599358   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:59.599363   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:59.599418   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:59.634404   69333 cri.go:89] found id: ""
	I0927 01:43:59.634431   69333 logs.go:276] 0 containers: []
	W0927 01:43:59.634441   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:59.634448   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:59.634498   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:59.672022   69333 cri.go:89] found id: ""
	I0927 01:43:59.672052   69333 logs.go:276] 0 containers: []
	W0927 01:43:59.672066   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:59.672074   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:59.672134   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:59.704617   69333 cri.go:89] found id: ""
	I0927 01:43:59.704647   69333 logs.go:276] 0 containers: []
	W0927 01:43:59.704657   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:59.704664   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:59.704712   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:59.740479   69333 cri.go:89] found id: ""
	I0927 01:43:59.740504   69333 logs.go:276] 0 containers: []
	W0927 01:43:59.740512   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:59.740517   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:59.740579   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:59.777123   69333 cri.go:89] found id: ""
	I0927 01:43:59.777155   69333 logs.go:276] 0 containers: []
	W0927 01:43:59.777166   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:59.777174   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:59.777234   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:59.817780   69333 cri.go:89] found id: ""
	I0927 01:43:59.817803   69333 logs.go:276] 0 containers: []
	W0927 01:43:59.817825   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:59.817841   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:59.817856   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:59.831252   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:59.831278   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:59.901912   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:59.901936   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:59.901949   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:59.983001   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:59.983034   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:00.030989   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:00.031020   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:59.020139   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:01.020925   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:01.306853   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:03.308075   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:03.042494   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:05.043814   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:02.583949   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:02.596723   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:02.596798   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:02.630927   69333 cri.go:89] found id: ""
	I0927 01:44:02.630953   69333 logs.go:276] 0 containers: []
	W0927 01:44:02.630962   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:02.630967   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:02.631012   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:02.664156   69333 cri.go:89] found id: ""
	I0927 01:44:02.664186   69333 logs.go:276] 0 containers: []
	W0927 01:44:02.664198   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:02.664205   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:02.664259   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:02.698823   69333 cri.go:89] found id: ""
	I0927 01:44:02.698847   69333 logs.go:276] 0 containers: []
	W0927 01:44:02.698860   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:02.698865   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:02.698913   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:02.736114   69333 cri.go:89] found id: ""
	I0927 01:44:02.736142   69333 logs.go:276] 0 containers: []
	W0927 01:44:02.736154   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:02.736161   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:02.736221   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:02.769739   69333 cri.go:89] found id: ""
	I0927 01:44:02.769763   69333 logs.go:276] 0 containers: []
	W0927 01:44:02.769771   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:02.769785   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:02.769844   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:02.804798   69333 cri.go:89] found id: ""
	I0927 01:44:02.804871   69333 logs.go:276] 0 containers: []
	W0927 01:44:02.804887   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:02.804898   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:02.804958   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:02.841197   69333 cri.go:89] found id: ""
	I0927 01:44:02.841224   69333 logs.go:276] 0 containers: []
	W0927 01:44:02.841236   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:02.841243   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:02.841301   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:02.881278   69333 cri.go:89] found id: ""
	I0927 01:44:02.881310   69333 logs.go:276] 0 containers: []
	W0927 01:44:02.881321   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:02.881331   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:02.881345   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:02.935149   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:02.935183   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:02.950245   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:02.950273   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:03.020241   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:03.020263   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:03.020277   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:03.104467   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:03.104503   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:05.643070   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:05.656656   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:05.656716   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:05.694022   69333 cri.go:89] found id: ""
	I0927 01:44:05.694045   69333 logs.go:276] 0 containers: []
	W0927 01:44:05.694053   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:05.694059   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:05.694123   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:05.728575   69333 cri.go:89] found id: ""
	I0927 01:44:05.728600   69333 logs.go:276] 0 containers: []
	W0927 01:44:05.728607   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:05.728613   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:05.728667   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:05.768546   69333 cri.go:89] found id: ""
	I0927 01:44:05.768572   69333 logs.go:276] 0 containers: []
	W0927 01:44:05.768583   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:05.768590   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:05.768652   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:05.809504   69333 cri.go:89] found id: ""
	I0927 01:44:05.809527   69333 logs.go:276] 0 containers: []
	W0927 01:44:05.809536   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:05.809543   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:05.809600   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:05.846387   69333 cri.go:89] found id: ""
	I0927 01:44:05.846415   69333 logs.go:276] 0 containers: []
	W0927 01:44:05.846422   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:05.846428   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:05.846479   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:05.879579   69333 cri.go:89] found id: ""
	I0927 01:44:05.879608   69333 logs.go:276] 0 containers: []
	W0927 01:44:05.879619   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:05.879626   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:05.879684   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:05.928932   69333 cri.go:89] found id: ""
	I0927 01:44:05.928961   69333 logs.go:276] 0 containers: []
	W0927 01:44:05.928970   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:05.928978   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:05.929037   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:05.986463   69333 cri.go:89] found id: ""
	I0927 01:44:05.986490   69333 logs.go:276] 0 containers: []
	W0927 01:44:05.986499   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:05.986507   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:05.986521   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:06.039984   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:06.040011   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:06.053025   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:06.053051   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:06.127277   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:06.127316   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:06.127330   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:06.201473   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:06.201504   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:03.520539   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:06.021584   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:05.808474   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:08.307407   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:07.542959   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:10.042223   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:08.739339   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:08.753354   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:08.753418   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:08.788513   69333 cri.go:89] found id: ""
	I0927 01:44:08.788544   69333 logs.go:276] 0 containers: []
	W0927 01:44:08.788556   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:08.788563   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:08.788648   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:08.824615   69333 cri.go:89] found id: ""
	I0927 01:44:08.824642   69333 logs.go:276] 0 containers: []
	W0927 01:44:08.824653   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:08.824661   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:08.824724   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:08.858327   69333 cri.go:89] found id: ""
	I0927 01:44:08.858354   69333 logs.go:276] 0 containers: []
	W0927 01:44:08.858365   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:08.858372   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:08.858430   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:08.896140   69333 cri.go:89] found id: ""
	I0927 01:44:08.896168   69333 logs.go:276] 0 containers: []
	W0927 01:44:08.896175   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:08.896181   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:08.896229   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:08.931525   69333 cri.go:89] found id: ""
	I0927 01:44:08.931547   69333 logs.go:276] 0 containers: []
	W0927 01:44:08.931554   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:08.931560   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:08.931618   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:08.970224   69333 cri.go:89] found id: ""
	I0927 01:44:08.970246   69333 logs.go:276] 0 containers: []
	W0927 01:44:08.970256   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:08.970263   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:08.970331   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:09.007213   69333 cri.go:89] found id: ""
	I0927 01:44:09.007240   69333 logs.go:276] 0 containers: []
	W0927 01:44:09.007248   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:09.007255   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:09.007334   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:09.043078   69333 cri.go:89] found id: ""
	I0927 01:44:09.043111   69333 logs.go:276] 0 containers: []
	W0927 01:44:09.043122   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:09.043132   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:09.043147   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:09.096768   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:09.096801   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:09.110721   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:09.110747   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:09.182966   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:09.182990   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:09.183004   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:09.259497   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:09.259541   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:11.797307   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:11.812141   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:11.812196   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:11.846429   69333 cri.go:89] found id: ""
	I0927 01:44:11.846468   69333 logs.go:276] 0 containers: []
	W0927 01:44:11.846482   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:11.846489   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:11.846598   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:11.885294   69333 cri.go:89] found id: ""
	I0927 01:44:11.885322   69333 logs.go:276] 0 containers: []
	W0927 01:44:11.885333   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:11.885339   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:11.885398   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:11.920856   69333 cri.go:89] found id: ""
	I0927 01:44:11.920884   69333 logs.go:276] 0 containers: []
	W0927 01:44:11.920892   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:11.920898   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:11.920946   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:11.964540   69333 cri.go:89] found id: ""
	I0927 01:44:11.964564   69333 logs.go:276] 0 containers: []
	W0927 01:44:11.964574   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:11.964581   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:11.964634   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:12.000596   69333 cri.go:89] found id: ""
	I0927 01:44:12.000619   69333 logs.go:276] 0 containers: []
	W0927 01:44:12.000629   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:12.000636   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:12.000697   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:12.037773   69333 cri.go:89] found id: ""
	I0927 01:44:12.037808   69333 logs.go:276] 0 containers: []
	W0927 01:44:12.037819   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:12.037831   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:12.037893   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:12.074646   69333 cri.go:89] found id: ""
	I0927 01:44:12.074676   69333 logs.go:276] 0 containers: []
	W0927 01:44:12.074687   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:12.074692   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:12.074740   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:12.111771   69333 cri.go:89] found id: ""
	I0927 01:44:12.111802   69333 logs.go:276] 0 containers: []
	W0927 01:44:12.111813   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:12.111824   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:12.111837   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:12.160938   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:12.160971   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:12.175576   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:12.175605   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:12.245227   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:12.245263   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:12.245278   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:12.325161   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:12.325194   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:08.520111   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:10.520326   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:12.520755   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:10.808039   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:12.808843   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:12.042905   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:14.542272   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:14.867795   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:14.881053   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:14.881130   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:14.915193   69333 cri.go:89] found id: ""
	I0927 01:44:14.915224   69333 logs.go:276] 0 containers: []
	W0927 01:44:14.915234   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:14.915241   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:14.915318   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:14.951758   69333 cri.go:89] found id: ""
	I0927 01:44:14.951789   69333 logs.go:276] 0 containers: []
	W0927 01:44:14.951801   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:14.951808   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:14.951860   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:14.987875   69333 cri.go:89] found id: ""
	I0927 01:44:14.987906   69333 logs.go:276] 0 containers: []
	W0927 01:44:14.987917   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:14.987924   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:14.987985   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:15.025780   69333 cri.go:89] found id: ""
	I0927 01:44:15.025810   69333 logs.go:276] 0 containers: []
	W0927 01:44:15.025820   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:15.025828   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:15.025884   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:15.062135   69333 cri.go:89] found id: ""
	I0927 01:44:15.062157   69333 logs.go:276] 0 containers: []
	W0927 01:44:15.062165   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:15.062172   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:15.062225   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:15.097090   69333 cri.go:89] found id: ""
	I0927 01:44:15.097112   69333 logs.go:276] 0 containers: []
	W0927 01:44:15.097119   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:15.097126   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:15.097170   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:15.130528   69333 cri.go:89] found id: ""
	I0927 01:44:15.130552   69333 logs.go:276] 0 containers: []
	W0927 01:44:15.130561   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:15.130569   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:15.130615   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:15.165422   69333 cri.go:89] found id: ""
	I0927 01:44:15.165450   69333 logs.go:276] 0 containers: []
	W0927 01:44:15.165457   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:15.165465   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:15.165474   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:15.214612   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:15.214651   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:15.230294   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:15.230318   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:15.303339   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:15.303362   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:15.303375   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:15.382046   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:15.382081   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:14.520833   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:17.021034   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:15.308397   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:17.808221   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:16.542334   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:18.543785   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:17.923331   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:17.937693   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:17.937765   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:17.972677   69333 cri.go:89] found id: ""
	I0927 01:44:17.972699   69333 logs.go:276] 0 containers: []
	W0927 01:44:17.972707   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:17.972714   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:17.972764   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:18.004818   69333 cri.go:89] found id: ""
	I0927 01:44:18.004846   69333 logs.go:276] 0 containers: []
	W0927 01:44:18.004854   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:18.004860   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:18.004907   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:18.044693   69333 cri.go:89] found id: ""
	I0927 01:44:18.044716   69333 logs.go:276] 0 containers: []
	W0927 01:44:18.044723   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:18.044728   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:18.044772   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:18.079205   69333 cri.go:89] found id: ""
	I0927 01:44:18.079235   69333 logs.go:276] 0 containers: []
	W0927 01:44:18.079244   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:18.079249   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:18.079299   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:18.115272   69333 cri.go:89] found id: ""
	I0927 01:44:18.115322   69333 logs.go:276] 0 containers: []
	W0927 01:44:18.115335   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:18.115343   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:18.115412   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:18.150165   69333 cri.go:89] found id: ""
	I0927 01:44:18.150195   69333 logs.go:276] 0 containers: []
	W0927 01:44:18.150206   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:18.150213   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:18.150275   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:18.184971   69333 cri.go:89] found id: ""
	I0927 01:44:18.184999   69333 logs.go:276] 0 containers: []
	W0927 01:44:18.185009   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:18.185016   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:18.185083   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:18.219955   69333 cri.go:89] found id: ""
	I0927 01:44:18.219985   69333 logs.go:276] 0 containers: []
	W0927 01:44:18.219997   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:18.220008   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:18.220020   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:18.269713   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:18.269748   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:18.285224   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:18.285251   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:18.364887   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:18.364912   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:18.364927   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:18.450667   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:18.450706   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:20.991648   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:21.006472   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:21.006529   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:21.043455   69333 cri.go:89] found id: ""
	I0927 01:44:21.043476   69333 logs.go:276] 0 containers: []
	W0927 01:44:21.043486   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:21.043493   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:21.043549   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:21.080365   69333 cri.go:89] found id: ""
	I0927 01:44:21.080391   69333 logs.go:276] 0 containers: []
	W0927 01:44:21.080399   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:21.080405   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:21.080449   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:21.117576   69333 cri.go:89] found id: ""
	I0927 01:44:21.117624   69333 logs.go:276] 0 containers: []
	W0927 01:44:21.117636   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:21.117642   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:21.117703   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:21.154538   69333 cri.go:89] found id: ""
	I0927 01:44:21.154564   69333 logs.go:276] 0 containers: []
	W0927 01:44:21.154576   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:21.154584   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:21.154638   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:21.190046   69333 cri.go:89] found id: ""
	I0927 01:44:21.190070   69333 logs.go:276] 0 containers: []
	W0927 01:44:21.190080   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:21.190086   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:21.190147   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:21.226383   69333 cri.go:89] found id: ""
	I0927 01:44:21.226407   69333 logs.go:276] 0 containers: []
	W0927 01:44:21.226417   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:21.226424   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:21.226485   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:21.262090   69333 cri.go:89] found id: ""
	I0927 01:44:21.262113   69333 logs.go:276] 0 containers: []
	W0927 01:44:21.262124   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:21.262132   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:21.262188   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:21.297675   69333 cri.go:89] found id: ""
	I0927 01:44:21.297697   69333 logs.go:276] 0 containers: []
	W0927 01:44:21.297706   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:21.297716   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:21.297728   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:21.349668   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:21.349705   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:21.364608   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:21.364635   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:21.432570   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:21.432596   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:21.432612   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:21.507616   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:21.507661   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:19.520792   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:21.521341   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:20.307600   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:22.308557   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:24.807578   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:21.041736   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:23.041809   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:25.540974   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:24.054212   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:24.067954   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:24.068014   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:24.107017   69333 cri.go:89] found id: ""
	I0927 01:44:24.107045   69333 logs.go:276] 0 containers: []
	W0927 01:44:24.107056   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:24.107063   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:24.107124   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:24.144373   69333 cri.go:89] found id: ""
	I0927 01:44:24.144398   69333 logs.go:276] 0 containers: []
	W0927 01:44:24.144406   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:24.144411   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:24.144473   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:24.180010   69333 cri.go:89] found id: ""
	I0927 01:44:24.180038   69333 logs.go:276] 0 containers: []
	W0927 01:44:24.180048   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:24.180056   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:24.180118   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:24.214387   69333 cri.go:89] found id: ""
	I0927 01:44:24.214413   69333 logs.go:276] 0 containers: []
	W0927 01:44:24.214421   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:24.214426   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:24.214472   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:24.252597   69333 cri.go:89] found id: ""
	I0927 01:44:24.252623   69333 logs.go:276] 0 containers: []
	W0927 01:44:24.252631   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:24.252643   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:24.252705   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:24.292044   69333 cri.go:89] found id: ""
	I0927 01:44:24.292072   69333 logs.go:276] 0 containers: []
	W0927 01:44:24.292082   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:24.292089   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:24.292158   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:24.329899   69333 cri.go:89] found id: ""
	I0927 01:44:24.329924   69333 logs.go:276] 0 containers: []
	W0927 01:44:24.329934   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:24.329940   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:24.329998   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:24.367964   69333 cri.go:89] found id: ""
	I0927 01:44:24.367989   69333 logs.go:276] 0 containers: []
	W0927 01:44:24.368000   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:24.368010   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:24.368025   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:24.384151   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:24.384184   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:24.456916   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:24.456940   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:24.456958   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:24.539362   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:24.539399   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:24.578384   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:24.578411   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:27.132700   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:27.146218   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:27.146294   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:27.180958   69333 cri.go:89] found id: ""
	I0927 01:44:27.180984   69333 logs.go:276] 0 containers: []
	W0927 01:44:27.180992   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:27.180997   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:27.181043   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:27.215213   69333 cri.go:89] found id: ""
	I0927 01:44:27.215236   69333 logs.go:276] 0 containers: []
	W0927 01:44:27.215243   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:27.215249   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:27.215293   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:27.258192   69333 cri.go:89] found id: ""
	I0927 01:44:27.258216   69333 logs.go:276] 0 containers: []
	W0927 01:44:27.258226   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:27.258233   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:27.258289   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:27.292717   69333 cri.go:89] found id: ""
	I0927 01:44:27.292742   69333 logs.go:276] 0 containers: []
	W0927 01:44:27.292753   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:27.292760   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:27.292818   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:27.328038   69333 cri.go:89] found id: ""
	I0927 01:44:27.328066   69333 logs.go:276] 0 containers: []
	W0927 01:44:27.328076   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:27.328083   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:27.328152   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:24.021885   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:26.520726   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:27.308923   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:29.807825   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:27.542683   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:30.042293   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:27.363513   69333 cri.go:89] found id: ""
	I0927 01:44:27.363539   69333 logs.go:276] 0 containers: []
	W0927 01:44:27.363548   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:27.363553   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:27.363610   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:27.402201   69333 cri.go:89] found id: ""
	I0927 01:44:27.402223   69333 logs.go:276] 0 containers: []
	W0927 01:44:27.402231   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:27.402237   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:27.402290   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:27.436952   69333 cri.go:89] found id: ""
	I0927 01:44:27.436979   69333 logs.go:276] 0 containers: []
	W0927 01:44:27.436987   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:27.436995   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:27.437009   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:27.487908   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:27.487938   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:27.502170   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:27.502199   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:27.583909   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:27.583931   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:27.583943   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:27.660248   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:27.660286   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:30.201211   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:30.214276   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:30.214350   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:30.252445   69333 cri.go:89] found id: ""
	I0927 01:44:30.252474   69333 logs.go:276] 0 containers: []
	W0927 01:44:30.252484   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:30.252490   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:30.252538   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:30.287574   69333 cri.go:89] found id: ""
	I0927 01:44:30.287603   69333 logs.go:276] 0 containers: []
	W0927 01:44:30.287614   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:30.287621   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:30.287693   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:30.324674   69333 cri.go:89] found id: ""
	I0927 01:44:30.324699   69333 logs.go:276] 0 containers: []
	W0927 01:44:30.324711   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:30.324718   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:30.324779   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:30.360493   69333 cri.go:89] found id: ""
	I0927 01:44:30.360521   69333 logs.go:276] 0 containers: []
	W0927 01:44:30.360531   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:30.360539   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:30.360640   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:30.396219   69333 cri.go:89] found id: ""
	I0927 01:44:30.396252   69333 logs.go:276] 0 containers: []
	W0927 01:44:30.396263   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:30.396270   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:30.396328   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:30.431524   69333 cri.go:89] found id: ""
	I0927 01:44:30.431546   69333 logs.go:276] 0 containers: []
	W0927 01:44:30.431558   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:30.431564   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:30.431607   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:30.465887   69333 cri.go:89] found id: ""
	I0927 01:44:30.465915   69333 logs.go:276] 0 containers: []
	W0927 01:44:30.465926   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:30.465933   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:30.466000   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:30.501364   69333 cri.go:89] found id: ""
	I0927 01:44:30.501391   69333 logs.go:276] 0 containers: []
	W0927 01:44:30.501402   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:30.501411   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:30.501425   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:30.556344   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:30.556377   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:30.572619   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:30.572649   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:30.645996   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:30.646020   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:30.646032   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:30.737458   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:30.737531   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:28.521312   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:30.521421   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:33.020699   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:31.807949   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:33.809414   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:32.045244   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:34.542035   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:33.284306   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:33.298164   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:33.298224   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:33.334599   69333 cri.go:89] found id: ""
	I0927 01:44:33.334625   69333 logs.go:276] 0 containers: []
	W0927 01:44:33.334634   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:33.334654   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:33.334718   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:33.369006   69333 cri.go:89] found id: ""
	I0927 01:44:33.369034   69333 logs.go:276] 0 containers: []
	W0927 01:44:33.369044   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:33.369051   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:33.369119   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:33.407875   69333 cri.go:89] found id: ""
	I0927 01:44:33.407904   69333 logs.go:276] 0 containers: []
	W0927 01:44:33.407912   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:33.407918   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:33.407974   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:33.441048   69333 cri.go:89] found id: ""
	I0927 01:44:33.441083   69333 logs.go:276] 0 containers: []
	W0927 01:44:33.441094   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:33.441101   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:33.441156   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:33.478458   69333 cri.go:89] found id: ""
	I0927 01:44:33.478503   69333 logs.go:276] 0 containers: []
	W0927 01:44:33.478515   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:33.478522   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:33.478586   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:33.513756   69333 cri.go:89] found id: ""
	I0927 01:44:33.513784   69333 logs.go:276] 0 containers: []
	W0927 01:44:33.513795   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:33.513802   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:33.513862   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:33.554351   69333 cri.go:89] found id: ""
	I0927 01:44:33.554392   69333 logs.go:276] 0 containers: []
	W0927 01:44:33.554403   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:33.554410   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:33.554472   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:33.588484   69333 cri.go:89] found id: ""
	I0927 01:44:33.588512   69333 logs.go:276] 0 containers: []
	W0927 01:44:33.588533   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:33.588544   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:33.588559   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:33.665735   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:33.665775   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:33.704654   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:33.704687   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:33.755444   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:33.755475   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:33.770069   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:33.770095   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:33.841531   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:36.341963   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:36.355219   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:36.355294   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:36.395149   69333 cri.go:89] found id: ""
	I0927 01:44:36.395185   69333 logs.go:276] 0 containers: []
	W0927 01:44:36.395196   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:36.395203   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:36.395262   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:36.434620   69333 cri.go:89] found id: ""
	I0927 01:44:36.434649   69333 logs.go:276] 0 containers: []
	W0927 01:44:36.434661   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:36.434667   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:36.434729   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:36.468328   69333 cri.go:89] found id: ""
	I0927 01:44:36.468349   69333 logs.go:276] 0 containers: []
	W0927 01:44:36.468357   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:36.468362   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:36.468427   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:36.506386   69333 cri.go:89] found id: ""
	I0927 01:44:36.506413   69333 logs.go:276] 0 containers: []
	W0927 01:44:36.506421   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:36.506427   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:36.506482   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:36.546583   69333 cri.go:89] found id: ""
	I0927 01:44:36.546607   69333 logs.go:276] 0 containers: []
	W0927 01:44:36.546614   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:36.546620   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:36.546665   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:36.581694   69333 cri.go:89] found id: ""
	I0927 01:44:36.581721   69333 logs.go:276] 0 containers: []
	W0927 01:44:36.581730   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:36.581737   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:36.581782   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:36.617775   69333 cri.go:89] found id: ""
	I0927 01:44:36.617799   69333 logs.go:276] 0 containers: []
	W0927 01:44:36.617807   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:36.617813   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:36.617877   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:36.654443   69333 cri.go:89] found id: ""
	I0927 01:44:36.654470   69333 logs.go:276] 0 containers: []
	W0927 01:44:36.654478   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:36.654486   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:36.654496   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:36.705787   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:36.705817   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:36.720643   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:36.720677   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:36.800037   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:36.800061   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:36.800091   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:36.886845   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:36.886884   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:35.023634   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:37.520794   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:36.307516   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:38.307899   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:37.041620   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:39.044257   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:39.429349   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:39.442899   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:39.442973   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:39.481752   69333 cri.go:89] found id: ""
	I0927 01:44:39.481782   69333 logs.go:276] 0 containers: []
	W0927 01:44:39.481793   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:39.481799   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:39.481858   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:39.516074   69333 cri.go:89] found id: ""
	I0927 01:44:39.516103   69333 logs.go:276] 0 containers: []
	W0927 01:44:39.516114   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:39.516130   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:39.516188   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:39.563351   69333 cri.go:89] found id: ""
	I0927 01:44:39.563375   69333 logs.go:276] 0 containers: []
	W0927 01:44:39.563386   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:39.563392   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:39.563455   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:39.601417   69333 cri.go:89] found id: ""
	I0927 01:44:39.601445   69333 logs.go:276] 0 containers: []
	W0927 01:44:39.601455   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:39.601469   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:39.601529   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:39.634537   69333 cri.go:89] found id: ""
	I0927 01:44:39.634565   69333 logs.go:276] 0 containers: []
	W0927 01:44:39.634576   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:39.634582   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:39.634642   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:39.668910   69333 cri.go:89] found id: ""
	I0927 01:44:39.668937   69333 logs.go:276] 0 containers: []
	W0927 01:44:39.668948   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:39.668955   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:39.669013   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:39.701992   69333 cri.go:89] found id: ""
	I0927 01:44:39.702014   69333 logs.go:276] 0 containers: []
	W0927 01:44:39.702021   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:39.702027   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:39.702074   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:39.741579   69333 cri.go:89] found id: ""
	I0927 01:44:39.741601   69333 logs.go:276] 0 containers: []
	W0927 01:44:39.741610   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:39.741618   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:39.741627   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:39.806476   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:39.806510   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:39.820228   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:39.820255   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:39.893137   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:39.893167   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:39.893181   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:39.974477   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:39.974514   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:40.021226   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:42.521217   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:40.309154   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:42.808724   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:41.542308   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:44.042015   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:42.517449   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:42.532200   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:42.532266   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:42.568872   69333 cri.go:89] found id: ""
	I0927 01:44:42.568901   69333 logs.go:276] 0 containers: []
	W0927 01:44:42.568911   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:42.568919   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:42.568980   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:42.605069   69333 cri.go:89] found id: ""
	I0927 01:44:42.605220   69333 logs.go:276] 0 containers: []
	W0927 01:44:42.605251   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:42.605261   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:42.605335   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:42.641637   69333 cri.go:89] found id: ""
	I0927 01:44:42.641665   69333 logs.go:276] 0 containers: []
	W0927 01:44:42.641673   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:42.641680   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:42.641742   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:42.677333   69333 cri.go:89] found id: ""
	I0927 01:44:42.677361   69333 logs.go:276] 0 containers: []
	W0927 01:44:42.677376   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:42.677382   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:42.677439   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:42.712456   69333 cri.go:89] found id: ""
	I0927 01:44:42.712484   69333 logs.go:276] 0 containers: []
	W0927 01:44:42.712495   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:42.712501   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:42.712565   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:42.745109   69333 cri.go:89] found id: ""
	I0927 01:44:42.745140   69333 logs.go:276] 0 containers: []
	W0927 01:44:42.745150   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:42.745157   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:42.745226   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:42.779427   69333 cri.go:89] found id: ""
	I0927 01:44:42.779449   69333 logs.go:276] 0 containers: []
	W0927 01:44:42.779457   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:42.779462   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:42.779508   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:42.823920   69333 cri.go:89] found id: ""
	I0927 01:44:42.823946   69333 logs.go:276] 0 containers: []
	W0927 01:44:42.823954   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:42.823963   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:42.823972   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:42.881345   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:42.881380   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:42.896076   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:42.896100   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:42.971775   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:42.971796   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:42.971809   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:43.054461   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:43.054494   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:45.596681   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:45.610817   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:45.610882   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:45.647628   69333 cri.go:89] found id: ""
	I0927 01:44:45.647654   69333 logs.go:276] 0 containers: []
	W0927 01:44:45.647662   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:45.647668   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:45.647715   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:45.685480   69333 cri.go:89] found id: ""
	I0927 01:44:45.685507   69333 logs.go:276] 0 containers: []
	W0927 01:44:45.685514   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:45.685520   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:45.685573   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:45.721601   69333 cri.go:89] found id: ""
	I0927 01:44:45.721624   69333 logs.go:276] 0 containers: []
	W0927 01:44:45.721632   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:45.721637   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:45.721700   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:45.756763   69333 cri.go:89] found id: ""
	I0927 01:44:45.756788   69333 logs.go:276] 0 containers: []
	W0927 01:44:45.756796   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:45.756802   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:45.756858   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:45.792891   69333 cri.go:89] found id: ""
	I0927 01:44:45.792917   69333 logs.go:276] 0 containers: []
	W0927 01:44:45.792927   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:45.792934   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:45.792996   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:45.828716   69333 cri.go:89] found id: ""
	I0927 01:44:45.828739   69333 logs.go:276] 0 containers: []
	W0927 01:44:45.828747   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:45.828753   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:45.828807   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:45.868813   69333 cri.go:89] found id: ""
	I0927 01:44:45.868840   69333 logs.go:276] 0 containers: []
	W0927 01:44:45.868848   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:45.868853   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:45.868905   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:45.907281   69333 cri.go:89] found id: ""
	I0927 01:44:45.907327   69333 logs.go:276] 0 containers: []
	W0927 01:44:45.907341   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:45.907352   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:45.907371   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:45.958539   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:45.958574   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:45.972540   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:45.972567   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:46.046083   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:46.046124   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:46.046141   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:46.124313   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:46.124349   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:45.021100   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:47.021435   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:45.307916   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:47.807187   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:49.809212   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:46.042143   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:48.541984   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:50.542678   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:48.673701   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:48.687673   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:48.687744   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:48.722269   69333 cri.go:89] found id: ""
	I0927 01:44:48.722291   69333 logs.go:276] 0 containers: []
	W0927 01:44:48.722302   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:48.722308   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:48.722370   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:48.758297   69333 cri.go:89] found id: ""
	I0927 01:44:48.758318   69333 logs.go:276] 0 containers: []
	W0927 01:44:48.758326   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:48.758331   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:48.758377   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:48.792706   69333 cri.go:89] found id: ""
	I0927 01:44:48.792730   69333 logs.go:276] 0 containers: []
	W0927 01:44:48.792738   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:48.792744   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:48.792792   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:48.827015   69333 cri.go:89] found id: ""
	I0927 01:44:48.827035   69333 logs.go:276] 0 containers: []
	W0927 01:44:48.827047   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:48.827052   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:48.827095   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:48.862538   69333 cri.go:89] found id: ""
	I0927 01:44:48.862564   69333 logs.go:276] 0 containers: []
	W0927 01:44:48.862572   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:48.862577   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:48.862632   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:48.896118   69333 cri.go:89] found id: ""
	I0927 01:44:48.896144   69333 logs.go:276] 0 containers: []
	W0927 01:44:48.896154   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:48.896166   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:48.896225   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:48.932483   69333 cri.go:89] found id: ""
	I0927 01:44:48.932511   69333 logs.go:276] 0 containers: []
	W0927 01:44:48.932519   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:48.932524   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:48.932576   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:48.971864   69333 cri.go:89] found id: ""
	I0927 01:44:48.971890   69333 logs.go:276] 0 containers: []
	W0927 01:44:48.971898   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:48.971906   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:48.971919   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:49.028163   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:49.028199   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:49.042780   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:49.042805   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:49.116454   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:49.116476   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:49.116491   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:49.196048   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:49.196084   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:51.735108   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:51.749191   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:51.749258   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:51.784776   69333 cri.go:89] found id: ""
	I0927 01:44:51.784804   69333 logs.go:276] 0 containers: []
	W0927 01:44:51.784815   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:51.784823   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:51.784880   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:51.822807   69333 cri.go:89] found id: ""
	I0927 01:44:51.822836   69333 logs.go:276] 0 containers: []
	W0927 01:44:51.822847   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:51.822854   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:51.822912   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:51.858700   69333 cri.go:89] found id: ""
	I0927 01:44:51.858726   69333 logs.go:276] 0 containers: []
	W0927 01:44:51.858737   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:51.858744   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:51.858812   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:51.894945   69333 cri.go:89] found id: ""
	I0927 01:44:51.894968   69333 logs.go:276] 0 containers: []
	W0927 01:44:51.894975   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:51.894980   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:51.895025   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:51.939475   69333 cri.go:89] found id: ""
	I0927 01:44:51.939503   69333 logs.go:276] 0 containers: []
	W0927 01:44:51.939518   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:51.939524   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:51.939569   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:51.982626   69333 cri.go:89] found id: ""
	I0927 01:44:51.982654   69333 logs.go:276] 0 containers: []
	W0927 01:44:51.982665   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:51.982673   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:51.982731   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:52.050446   69333 cri.go:89] found id: ""
	I0927 01:44:52.050473   69333 logs.go:276] 0 containers: []
	W0927 01:44:52.050483   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:52.050490   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:52.050549   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:52.092637   69333 cri.go:89] found id: ""
	I0927 01:44:52.092666   69333 logs.go:276] 0 containers: []
	W0927 01:44:52.092676   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:52.092686   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:52.092700   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:52.132135   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:52.132165   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:52.186537   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:52.186572   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:52.200001   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:52.200027   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:52.282068   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:52.282093   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:52.282108   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:49.521281   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:52.021229   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:52.308560   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:54.309001   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:53.042624   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:55.043212   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:54.866565   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:54.880400   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:54.880460   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:54.918963   69333 cri.go:89] found id: ""
	I0927 01:44:54.919004   69333 logs.go:276] 0 containers: []
	W0927 01:44:54.919027   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:54.919036   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:54.919107   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:54.959918   69333 cri.go:89] found id: ""
	I0927 01:44:54.959947   69333 logs.go:276] 0 containers: []
	W0927 01:44:54.959958   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:54.959965   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:54.960026   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:55.004348   69333 cri.go:89] found id: ""
	I0927 01:44:55.004370   69333 logs.go:276] 0 containers: []
	W0927 01:44:55.004378   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:55.004392   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:55.004446   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:55.045190   69333 cri.go:89] found id: ""
	I0927 01:44:55.045213   69333 logs.go:276] 0 containers: []
	W0927 01:44:55.045220   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:55.045225   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:55.045278   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:55.087638   69333 cri.go:89] found id: ""
	I0927 01:44:55.087663   69333 logs.go:276] 0 containers: []
	W0927 01:44:55.087671   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:55.087677   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:55.087739   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:55.126899   69333 cri.go:89] found id: ""
	I0927 01:44:55.126932   69333 logs.go:276] 0 containers: []
	W0927 01:44:55.126943   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:55.126951   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:55.127012   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:55.167593   69333 cri.go:89] found id: ""
	I0927 01:44:55.167624   69333 logs.go:276] 0 containers: []
	W0927 01:44:55.167635   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:55.167643   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:55.167706   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:55.208362   69333 cri.go:89] found id: ""
	I0927 01:44:55.208388   69333 logs.go:276] 0 containers: []
	W0927 01:44:55.208399   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:55.208409   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:55.208424   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:55.247198   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:55.247221   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:55.299408   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:55.299443   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:55.315745   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:55.315775   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:55.387499   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:55.387523   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:55.387539   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:54.021502   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:56.520627   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:56.807487   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:58.807902   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:57.541517   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:59.542233   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:57.968863   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:57.987921   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:57.987988   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:58.036770   69333 cri.go:89] found id: ""
	I0927 01:44:58.036802   69333 logs.go:276] 0 containers: []
	W0927 01:44:58.036813   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:58.036824   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:58.036878   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:58.072461   69333 cri.go:89] found id: ""
	I0927 01:44:58.072484   69333 logs.go:276] 0 containers: []
	W0927 01:44:58.072492   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:58.072499   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:58.072551   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:58.107247   69333 cri.go:89] found id: ""
	I0927 01:44:58.107273   69333 logs.go:276] 0 containers: []
	W0927 01:44:58.107284   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:58.107290   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:58.107365   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:58.149050   69333 cri.go:89] found id: ""
	I0927 01:44:58.149080   69333 logs.go:276] 0 containers: []
	W0927 01:44:58.149091   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:58.149099   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:58.149162   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:58.188167   69333 cri.go:89] found id: ""
	I0927 01:44:58.188198   69333 logs.go:276] 0 containers: []
	W0927 01:44:58.188209   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:58.188217   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:58.188283   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:58.224291   69333 cri.go:89] found id: ""
	I0927 01:44:58.224319   69333 logs.go:276] 0 containers: []
	W0927 01:44:58.224329   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:58.224337   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:58.224401   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:58.258786   69333 cri.go:89] found id: ""
	I0927 01:44:58.258813   69333 logs.go:276] 0 containers: []
	W0927 01:44:58.258822   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:58.258828   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:58.258885   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:58.298310   69333 cri.go:89] found id: ""
	I0927 01:44:58.298338   69333 logs.go:276] 0 containers: []
	W0927 01:44:58.298349   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:58.298359   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:58.298373   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:58.340299   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:58.340330   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:58.395097   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:58.395130   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:58.410653   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:58.410677   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:58.479437   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:58.479459   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:58.479470   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:45:01.057473   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:45:01.071746   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:45:01.071818   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:45:01.112652   69333 cri.go:89] found id: ""
	I0927 01:45:01.112676   69333 logs.go:276] 0 containers: []
	W0927 01:45:01.112684   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:45:01.112690   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:45:01.112735   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:45:01.146071   69333 cri.go:89] found id: ""
	I0927 01:45:01.146100   69333 logs.go:276] 0 containers: []
	W0927 01:45:01.146111   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:45:01.146119   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:45:01.146188   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:45:01.188640   69333 cri.go:89] found id: ""
	I0927 01:45:01.188663   69333 logs.go:276] 0 containers: []
	W0927 01:45:01.188673   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:45:01.188679   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:45:01.188743   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:45:01.225024   69333 cri.go:89] found id: ""
	I0927 01:45:01.225050   69333 logs.go:276] 0 containers: []
	W0927 01:45:01.225060   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:45:01.225067   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:45:01.225128   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:45:01.262459   69333 cri.go:89] found id: ""
	I0927 01:45:01.262487   69333 logs.go:276] 0 containers: []
	W0927 01:45:01.262498   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:45:01.262505   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:45:01.262560   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:45:01.298567   69333 cri.go:89] found id: ""
	I0927 01:45:01.298588   69333 logs.go:276] 0 containers: []
	W0927 01:45:01.298597   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:45:01.298603   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:45:01.298647   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:45:01.335051   69333 cri.go:89] found id: ""
	I0927 01:45:01.335084   69333 logs.go:276] 0 containers: []
	W0927 01:45:01.335094   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:45:01.335100   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:45:01.335149   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:45:01.371187   69333 cri.go:89] found id: ""
	I0927 01:45:01.371217   69333 logs.go:276] 0 containers: []
	W0927 01:45:01.371227   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:45:01.371237   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:45:01.371252   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:45:01.385163   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:45:01.385189   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:45:01.457256   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:45:01.457298   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:45:01.457313   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:45:01.537788   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:45:01.537819   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:45:01.580645   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:45:01.580672   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:58.521367   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:01.020826   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:03.021213   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:00.808021   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:03.307242   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:01.542831   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:04.042010   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:04.131877   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:45:04.145175   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:45:04.145248   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:45:04.179508   69333 cri.go:89] found id: ""
	I0927 01:45:04.179535   69333 logs.go:276] 0 containers: []
	W0927 01:45:04.179545   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:45:04.179552   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:45:04.179612   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:45:04.213497   69333 cri.go:89] found id: ""
	I0927 01:45:04.213533   69333 logs.go:276] 0 containers: []
	W0927 01:45:04.213544   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:45:04.213551   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:45:04.213606   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:45:04.249708   69333 cri.go:89] found id: ""
	I0927 01:45:04.249737   69333 logs.go:276] 0 containers: []
	W0927 01:45:04.249747   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:45:04.249754   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:45:04.249824   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:45:04.288283   69333 cri.go:89] found id: ""
	I0927 01:45:04.288306   69333 logs.go:276] 0 containers: []
	W0927 01:45:04.288314   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:45:04.288319   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:45:04.288368   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:45:04.325515   69333 cri.go:89] found id: ""
	I0927 01:45:04.325539   69333 logs.go:276] 0 containers: []
	W0927 01:45:04.325549   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:45:04.325560   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:45:04.325618   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:45:04.363485   69333 cri.go:89] found id: ""
	I0927 01:45:04.363511   69333 logs.go:276] 0 containers: []
	W0927 01:45:04.363521   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:45:04.363528   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:45:04.363586   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:45:04.398834   69333 cri.go:89] found id: ""
	I0927 01:45:04.398863   69333 logs.go:276] 0 containers: []
	W0927 01:45:04.398875   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:45:04.398882   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:45:04.398948   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:45:04.433408   69333 cri.go:89] found id: ""
	I0927 01:45:04.433435   69333 logs.go:276] 0 containers: []
	W0927 01:45:04.433443   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:45:04.433451   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:45:04.433461   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:45:04.485354   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:45:04.485392   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:45:04.499007   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:45:04.499031   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:45:04.569376   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:45:04.569405   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:45:04.569420   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:45:04.646614   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:45:04.646651   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:45:07.186491   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:45:07.200510   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:45:07.200575   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:45:07.239519   69333 cri.go:89] found id: ""
	I0927 01:45:07.239542   69333 logs.go:276] 0 containers: []
	W0927 01:45:07.239553   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:45:07.239562   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:45:07.239751   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:45:07.276820   69333 cri.go:89] found id: ""
	I0927 01:45:07.276854   69333 logs.go:276] 0 containers: []
	W0927 01:45:07.276863   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:45:07.276870   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:45:07.276932   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:45:07.312580   69333 cri.go:89] found id: ""
	I0927 01:45:07.312604   69333 logs.go:276] 0 containers: []
	W0927 01:45:07.312613   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:45:07.312619   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:45:07.312676   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:45:05.520930   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:08.020001   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:05.807739   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:07.807914   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:06.042390   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:08.542149   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:10.542438   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:07.350763   69333 cri.go:89] found id: ""
	I0927 01:45:07.350788   69333 logs.go:276] 0 containers: []
	W0927 01:45:07.350799   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:45:07.350806   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:45:07.350861   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:45:07.385347   69333 cri.go:89] found id: ""
	I0927 01:45:07.385376   69333 logs.go:276] 0 containers: []
	W0927 01:45:07.385383   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:45:07.385389   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:45:07.385439   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:45:07.420665   69333 cri.go:89] found id: ""
	I0927 01:45:07.420696   69333 logs.go:276] 0 containers: []
	W0927 01:45:07.420708   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:45:07.420718   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:45:07.420768   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:45:07.453707   69333 cri.go:89] found id: ""
	I0927 01:45:07.453737   69333 logs.go:276] 0 containers: []
	W0927 01:45:07.453746   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:45:07.453752   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:45:07.453806   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:45:07.489467   69333 cri.go:89] found id: ""
	I0927 01:45:07.489497   69333 logs.go:276] 0 containers: []
	W0927 01:45:07.489508   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:45:07.489520   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:45:07.489531   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:45:07.569464   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:45:07.569496   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:45:07.609123   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:45:07.609160   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:45:07.659556   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:45:07.659590   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:45:07.673163   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:45:07.673191   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:45:07.751340   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:45:10.252511   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:45:10.266651   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:45:10.266706   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:45:10.304131   69333 cri.go:89] found id: ""
	I0927 01:45:10.304160   69333 logs.go:276] 0 containers: []
	W0927 01:45:10.304171   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:45:10.304178   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:45:10.304243   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:45:10.339267   69333 cri.go:89] found id: ""
	I0927 01:45:10.339295   69333 logs.go:276] 0 containers: []
	W0927 01:45:10.339321   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:45:10.339329   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:45:10.339397   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:45:10.376268   69333 cri.go:89] found id: ""
	I0927 01:45:10.376298   69333 logs.go:276] 0 containers: []
	W0927 01:45:10.376308   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:45:10.376319   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:45:10.376380   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:45:10.413944   69333 cri.go:89] found id: ""
	I0927 01:45:10.413970   69333 logs.go:276] 0 containers: []
	W0927 01:45:10.413978   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:45:10.413984   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:45:10.414033   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:45:10.449205   69333 cri.go:89] found id: ""
	I0927 01:45:10.449226   69333 logs.go:276] 0 containers: []
	W0927 01:45:10.449234   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:45:10.449240   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:45:10.449289   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:45:10.487927   69333 cri.go:89] found id: ""
	I0927 01:45:10.487947   69333 logs.go:276] 0 containers: []
	W0927 01:45:10.487955   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:45:10.487961   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:45:10.488018   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:45:10.525062   69333 cri.go:89] found id: ""
	I0927 01:45:10.525085   69333 logs.go:276] 0 containers: []
	W0927 01:45:10.525095   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:45:10.525102   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:45:10.525163   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:45:10.560718   69333 cri.go:89] found id: ""
	I0927 01:45:10.560768   69333 logs.go:276] 0 containers: []
	W0927 01:45:10.560779   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:45:10.560790   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:45:10.560803   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:45:10.641755   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:45:10.641781   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:45:10.641796   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:45:10.719775   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:45:10.719807   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:45:10.761952   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:45:10.761978   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:45:10.815296   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:45:10.815330   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:45:10.023849   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:12.520577   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:10.307967   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:12.807872   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:14.808602   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:13.041469   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:15.036533   69234 pod_ready.go:82] duration metric: took 4m0.000873058s for pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace to be "Ready" ...
	E0927 01:45:15.036568   69234 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace to be "Ready" (will not retry!)
	I0927 01:45:15.036588   69234 pod_ready.go:39] duration metric: took 4m6.530278971s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:45:15.036645   69234 kubeadm.go:597] duration metric: took 4m16.375010355s to restartPrimaryControlPlane
	W0927 01:45:15.036713   69234 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0927 01:45:15.036743   69234 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0927 01:45:13.330300   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:45:13.343840   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:45:13.343893   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:45:13.378904   69333 cri.go:89] found id: ""
	I0927 01:45:13.378933   69333 logs.go:276] 0 containers: []
	W0927 01:45:13.378944   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:45:13.378952   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:45:13.379010   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:45:13.417375   69333 cri.go:89] found id: ""
	I0927 01:45:13.417403   69333 logs.go:276] 0 containers: []
	W0927 01:45:13.417415   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:45:13.417422   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:45:13.417482   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:45:13.456265   69333 cri.go:89] found id: ""
	I0927 01:45:13.456291   69333 logs.go:276] 0 containers: []
	W0927 01:45:13.456302   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:45:13.456310   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:45:13.456358   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:45:13.502205   69333 cri.go:89] found id: ""
	I0927 01:45:13.502229   69333 logs.go:276] 0 containers: []
	W0927 01:45:13.502240   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:45:13.502247   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:45:13.502310   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:45:13.543617   69333 cri.go:89] found id: ""
	I0927 01:45:13.543642   69333 logs.go:276] 0 containers: []
	W0927 01:45:13.543652   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:45:13.543660   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:45:13.543723   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:45:13.580268   69333 cri.go:89] found id: ""
	I0927 01:45:13.580295   69333 logs.go:276] 0 containers: []
	W0927 01:45:13.580305   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:45:13.580313   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:45:13.580374   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:45:13.616681   69333 cri.go:89] found id: ""
	I0927 01:45:13.616705   69333 logs.go:276] 0 containers: []
	W0927 01:45:13.616713   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:45:13.616718   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:45:13.616765   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:45:13.653389   69333 cri.go:89] found id: ""
	I0927 01:45:13.653412   69333 logs.go:276] 0 containers: []
	W0927 01:45:13.653420   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:45:13.653430   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:45:13.653442   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:45:13.666511   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:45:13.666534   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:45:13.742282   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:45:13.742300   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:45:13.742311   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:45:13.825800   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:45:13.825836   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:45:13.876345   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:45:13.876376   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:45:16.429245   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:45:16.443286   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:45:16.443366   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:45:16.481601   69333 cri.go:89] found id: ""
	I0927 01:45:16.481626   69333 logs.go:276] 0 containers: []
	W0927 01:45:16.481637   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:45:16.481645   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:45:16.481703   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:45:16.513626   69333 cri.go:89] found id: ""
	I0927 01:45:16.513652   69333 logs.go:276] 0 containers: []
	W0927 01:45:16.513659   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:45:16.513665   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:45:16.513710   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:45:16.552531   69333 cri.go:89] found id: ""
	I0927 01:45:16.552565   69333 logs.go:276] 0 containers: []
	W0927 01:45:16.552574   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:45:16.552580   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:45:16.552636   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:45:16.587252   69333 cri.go:89] found id: ""
	I0927 01:45:16.587282   69333 logs.go:276] 0 containers: []
	W0927 01:45:16.587294   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:45:16.587316   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:45:16.587377   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:45:16.628376   69333 cri.go:89] found id: ""
	I0927 01:45:16.628401   69333 logs.go:276] 0 containers: []
	W0927 01:45:16.628410   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:45:16.628417   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:45:16.628482   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:45:16.669603   69333 cri.go:89] found id: ""
	I0927 01:45:16.669639   69333 logs.go:276] 0 containers: []
	W0927 01:45:16.669651   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:45:16.669658   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:45:16.669731   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:45:16.705581   69333 cri.go:89] found id: ""
	I0927 01:45:16.705607   69333 logs.go:276] 0 containers: []
	W0927 01:45:16.705618   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:45:16.705626   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:45:16.705682   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:45:16.740710   69333 cri.go:89] found id: ""
	I0927 01:45:16.740735   69333 logs.go:276] 0 containers: []
	W0927 01:45:16.740743   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:45:16.740759   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:45:16.740771   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:45:16.791025   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:45:16.791060   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:45:16.805990   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:45:16.806023   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:45:16.878313   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:45:16.878331   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:45:16.878346   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:45:16.966228   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:45:16.966269   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:45:14.521852   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:16.522127   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:17.307853   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:19.308018   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:19.512044   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:45:19.526801   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:45:19.526862   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:45:19.562063   69333 cri.go:89] found id: ""
	I0927 01:45:19.562089   69333 logs.go:276] 0 containers: []
	W0927 01:45:19.562098   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:45:19.562104   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:45:19.562159   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:45:19.598600   69333 cri.go:89] found id: ""
	I0927 01:45:19.598626   69333 logs.go:276] 0 containers: []
	W0927 01:45:19.598634   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:45:19.598642   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:45:19.598712   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:45:19.632544   69333 cri.go:89] found id: ""
	I0927 01:45:19.632564   69333 logs.go:276] 0 containers: []
	W0927 01:45:19.632572   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:45:19.632577   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:45:19.632635   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:45:19.671676   69333 cri.go:89] found id: ""
	I0927 01:45:19.671703   69333 logs.go:276] 0 containers: []
	W0927 01:45:19.671713   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:45:19.671721   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:45:19.671779   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:45:19.710321   69333 cri.go:89] found id: ""
	I0927 01:45:19.710351   69333 logs.go:276] 0 containers: []
	W0927 01:45:19.710362   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:45:19.710370   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:45:19.710438   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:45:19.746252   69333 cri.go:89] found id: ""
	I0927 01:45:19.746277   69333 logs.go:276] 0 containers: []
	W0927 01:45:19.746288   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:45:19.746295   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:45:19.746354   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:45:19.783089   69333 cri.go:89] found id: ""
	I0927 01:45:19.783112   69333 logs.go:276] 0 containers: []
	W0927 01:45:19.783121   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:45:19.783126   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:45:19.783189   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:45:19.821090   69333 cri.go:89] found id: ""
	I0927 01:45:19.821117   69333 logs.go:276] 0 containers: []
	W0927 01:45:19.821126   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:45:19.821134   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:45:19.821145   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:45:19.873539   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:45:19.873575   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:45:19.888446   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:45:19.888471   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:45:19.958009   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:45:19.958034   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:45:19.958050   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:45:20.037552   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:45:20.037587   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:45:19.022216   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:21.520606   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:21.808178   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:23.808273   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:22.579288   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:45:22.592789   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:45:22.592846   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:45:22.628148   69333 cri.go:89] found id: ""
	I0927 01:45:22.628178   69333 logs.go:276] 0 containers: []
	W0927 01:45:22.628186   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:45:22.628193   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:45:22.628240   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:45:22.664162   69333 cri.go:89] found id: ""
	I0927 01:45:22.664186   69333 logs.go:276] 0 containers: []
	W0927 01:45:22.664194   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:45:22.664200   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:45:22.664253   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:45:22.702077   69333 cri.go:89] found id: ""
	I0927 01:45:22.702104   69333 logs.go:276] 0 containers: []
	W0927 01:45:22.702115   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:45:22.702123   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:45:22.702183   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:45:22.739657   69333 cri.go:89] found id: ""
	I0927 01:45:22.739690   69333 logs.go:276] 0 containers: []
	W0927 01:45:22.739700   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:45:22.739708   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:45:22.739773   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:45:22.774109   69333 cri.go:89] found id: ""
	I0927 01:45:22.774137   69333 logs.go:276] 0 containers: []
	W0927 01:45:22.774148   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:45:22.774174   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:45:22.774229   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:45:22.809648   69333 cri.go:89] found id: ""
	I0927 01:45:22.809671   69333 logs.go:276] 0 containers: []
	W0927 01:45:22.809678   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:45:22.809684   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:45:22.809729   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:45:22.842598   69333 cri.go:89] found id: ""
	I0927 01:45:22.842620   69333 logs.go:276] 0 containers: []
	W0927 01:45:22.842627   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:45:22.842632   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:45:22.842677   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:45:22.877336   69333 cri.go:89] found id: ""
	I0927 01:45:22.877364   69333 logs.go:276] 0 containers: []
	W0927 01:45:22.877374   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:45:22.877382   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:45:22.877393   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:45:22.930364   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:45:22.930395   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:45:22.944174   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:45:22.944200   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:45:23.025495   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:45:23.025520   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:45:23.025534   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:45:23.101813   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:45:23.101850   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
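	The container-status gathering step above relies on a shell fallback chain: it prefers crictl when it is on the PATH and only falls back to the Docker CLI if crictl is absent or fails. An annotated sketch of that same pattern (standard crictl/docker invocations only, nothing minikube-specific assumed):
	
	  # Resolve crictl if installed; `which crictl || echo crictl` keeps the bare
	  # name so the command still fails visibly when neither tool exists.
	  sudo `which crictl || echo crictl` ps -a \
	    || sudo docker ps -a   # last resort: list containers via the Docker CLI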
	I0927 01:45:25.644577   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:45:25.657820   69333 kubeadm.go:597] duration metric: took 4m3.277962916s to restartPrimaryControlPlane
	W0927 01:45:25.657898   69333 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0927 01:45:25.657929   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0927 01:45:26.111439   69333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 01:45:26.128279   69333 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 01:45:26.138354   69333 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 01:45:26.148116   69333 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 01:45:26.148132   69333 kubeadm.go:157] found existing configuration files:
	
	I0927 01:45:26.148170   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 01:45:26.157965   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 01:45:26.158012   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 01:45:26.168349   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 01:45:26.177624   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 01:45:26.177692   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 01:45:26.187584   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 01:45:26.196800   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 01:45:26.196856   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 01:45:26.205894   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 01:45:26.215316   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 01:45:26.215365   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
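	The grep/rm pairs above are the stale-kubeconfig cleanup minikube performs before re-running kubeadm init: each file under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, otherwise it is removed so kubeadm can regenerate it. A condensed sketch of the same check as a loop (a standalone illustration, not minikube's own code):
	
	  # Keep a kubeconfig only if it targets the expected endpoint;
	  # kubeadm init rewrites whichever files are missing.
	  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	    if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
	      sudo rm -f "/etc/kubernetes/$f"
	    fi
	  done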
	I0927 01:45:26.224989   69333 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0927 01:45:26.299149   69333 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0927 01:45:26.299261   69333 kubeadm.go:310] [preflight] Running pre-flight checks
	I0927 01:45:26.451113   69333 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0927 01:45:26.451282   69333 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0927 01:45:26.451457   69333 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0927 01:45:26.637960   69333 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0927 01:45:26.640682   69333 out.go:235]   - Generating certificates and keys ...
	I0927 01:45:26.640782   69333 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0927 01:45:26.640865   69333 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0927 01:45:26.640972   69333 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0927 01:45:26.641099   69333 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0927 01:45:26.641233   69333 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0927 01:45:26.641317   69333 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0927 01:45:26.641425   69333 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0927 01:45:26.641525   69333 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0927 01:45:26.641633   69333 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0927 01:45:26.641901   69333 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0927 01:45:26.642000   69333 kubeadm.go:310] [certs] Using the existing "sa" key
	I0927 01:45:26.642080   69333 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0927 01:45:26.782585   69333 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0927 01:45:27.008743   69333 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0927 01:45:27.103701   69333 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0927 01:45:27.217999   69333 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0927 01:45:27.238810   69333 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0927 01:45:27.240191   69333 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0927 01:45:27.240240   69333 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0927 01:45:27.375215   69333 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0927 01:45:23.521301   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:26.020002   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:28.021215   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:26.306744   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:28.308577   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:27.376992   69333 out.go:235]   - Booting up control plane ...
	I0927 01:45:27.377123   69333 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0927 01:45:27.386897   69333 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0927 01:45:27.387959   69333 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0927 01:45:27.388954   69333 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0927 01:45:27.392182   69333 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0927 01:45:30.520717   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:33.019981   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:30.808251   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:33.307139   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:35.020640   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:37.520220   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:35.307871   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:37.808604   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:41.262067   69234 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.225299595s)
	I0927 01:45:41.262142   69234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 01:45:41.294256   69234 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 01:45:41.304403   69234 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 01:45:41.314288   69234 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 01:45:41.314310   69234 kubeadm.go:157] found existing configuration files:
	
	I0927 01:45:41.314357   69234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 01:45:41.323280   69234 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 01:45:41.323335   69234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 01:45:41.332637   69234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 01:45:41.341492   69234 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 01:45:41.341552   69234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 01:45:41.352259   69234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 01:45:41.361190   69234 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 01:45:41.361244   69234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 01:45:41.370863   69234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 01:45:41.379674   69234 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 01:45:41.379735   69234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 01:45:41.389169   69234 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0927 01:45:41.434391   69234 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0927 01:45:41.434565   69234 kubeadm.go:310] [preflight] Running pre-flight checks
	I0927 01:45:41.537712   69234 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0927 01:45:41.537813   69234 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0927 01:45:41.537951   69234 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0927 01:45:41.546906   69234 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0927 01:45:41.548799   69234 out.go:235]   - Generating certificates and keys ...
	I0927 01:45:41.548882   69234 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0927 01:45:41.548959   69234 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0927 01:45:41.549049   69234 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0927 01:45:41.549133   69234 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0927 01:45:41.549239   69234 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0927 01:45:41.549328   69234 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0927 01:45:41.549433   69234 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0927 01:45:41.549531   69234 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0927 01:45:41.549619   69234 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0927 01:45:41.549691   69234 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0927 01:45:41.549741   69234 kubeadm.go:310] [certs] Using the existing "sa" key
	I0927 01:45:41.549813   69234 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0927 01:45:41.594579   69234 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0927 01:45:41.703970   69234 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0927 01:45:41.813013   69234 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0927 01:45:41.875564   69234 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0927 01:45:42.025627   69234 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0927 01:45:42.026325   69234 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0927 01:45:42.028784   69234 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0927 01:45:39.521118   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:42.020563   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:40.307764   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:42.307974   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:44.808238   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:42.030464   69234 out.go:235]   - Booting up control plane ...
	I0927 01:45:42.030566   69234 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0927 01:45:42.030674   69234 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0927 01:45:42.031152   69234 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0927 01:45:42.050207   69234 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0927 01:45:42.058709   69234 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0927 01:45:42.058766   69234 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0927 01:45:42.192498   69234 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0927 01:45:42.192628   69234 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0927 01:45:42.694670   69234 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.189114ms
	I0927 01:45:42.694812   69234 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0927 01:45:48.195975   69234 kubeadm.go:310] [api-check] The API server is healthy after 5.501110293s
	I0927 01:45:48.210406   69234 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0927 01:45:48.231678   69234 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0927 01:45:48.257669   69234 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0927 01:45:48.257859   69234 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-245911 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0927 01:45:48.271429   69234 kubeadm.go:310] [bootstrap-token] Using token: bqds0t.3lt1vhl3zjbrkom6
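	The [kubelet-check] and [api-check] phases logged above poll two health endpoints until they answer. The same probes can be run by hand on the control-plane node; the endpoints below are the ones named in the log, with curl used purely as an illustration:
	
	  # kubelet health (plain HTTP on the node-local port 10248)
	  curl -s http://127.0.0.1:10248/healthz && echo
	  # API server liveness; -k skips TLS verification for a quick manual check
	  curl -sk https://127.0.0.1:8443/livez && echo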
	I0927 01:45:44.021019   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:46.520158   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:48.272667   69234 out.go:235]   - Configuring RBAC rules ...
	I0927 01:45:48.272775   69234 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0927 01:45:48.278773   69234 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0927 01:45:48.290868   69234 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0927 01:45:48.297879   69234 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0927 01:45:48.302011   69234 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0927 01:45:48.306217   69234 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0927 01:45:48.604161   69234 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0927 01:45:49.041505   69234 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0927 01:45:49.604127   69234 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0927 01:45:49.604867   69234 kubeadm.go:310] 
	I0927 01:45:49.604981   69234 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0927 01:45:49.605008   69234 kubeadm.go:310] 
	I0927 01:45:49.605136   69234 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0927 01:45:49.605147   69234 kubeadm.go:310] 
	I0927 01:45:49.605188   69234 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0927 01:45:49.605266   69234 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0927 01:45:49.605363   69234 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0927 01:45:49.605373   69234 kubeadm.go:310] 
	I0927 01:45:49.605446   69234 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0927 01:45:49.605455   69234 kubeadm.go:310] 
	I0927 01:45:49.605524   69234 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0927 01:45:49.605537   69234 kubeadm.go:310] 
	I0927 01:45:49.605612   69234 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0927 01:45:49.605725   69234 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0927 01:45:49.605826   69234 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0927 01:45:49.605836   69234 kubeadm.go:310] 
	I0927 01:45:49.605913   69234 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0927 01:45:49.606010   69234 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0927 01:45:49.606032   69234 kubeadm.go:310] 
	I0927 01:45:49.606130   69234 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token bqds0t.3lt1vhl3zjbrkom6 \
	I0927 01:45:49.606252   69234 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e8f2b64245b2533f2f6907f851e4fd61c8df67374e05cb5be25be73a61920f8e \
	I0927 01:45:49.606276   69234 kubeadm.go:310] 	--control-plane 
	I0927 01:45:49.606282   69234 kubeadm.go:310] 
	I0927 01:45:49.606404   69234 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0927 01:45:49.606421   69234 kubeadm.go:310] 
	I0927 01:45:49.606546   69234 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token bqds0t.3lt1vhl3zjbrkom6 \
	I0927 01:45:49.606692   69234 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e8f2b64245b2533f2f6907f851e4fd61c8df67374e05cb5be25be73a61920f8e 
	I0927 01:45:49.607952   69234 kubeadm.go:310] W0927 01:45:41.410128    2534 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 01:45:49.608322   69234 kubeadm.go:310] W0927 01:45:41.412009    2534 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 01:45:49.608494   69234 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0927 01:45:49.608518   69234 cni.go:84] Creating CNI manager for ""
	I0927 01:45:49.608527   69234 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:45:49.610175   69234 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0927 01:45:47.307006   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:49.307374   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:49.611562   69234 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0927 01:45:49.622683   69234 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
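	The 496-byte payload written to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log. Purely as an illustration of the kind of bridge + portmap conflist CRI-O picks up from that directory (every value below is a placeholder, not the file minikube actually wrote):
	
	  {
	    "cniVersion": "0.3.1",
	    "name": "bridge",
	    "plugins": [
	      {
	        "type": "bridge",
	        "bridge": "bridge",
	        "isGateway": true,
	        "ipMasq": true,
	        "hairpinMode": true,
	        "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	      },
	      { "type": "portmap", "capabilities": { "portMappings": true } }
	    ]
	  }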
	I0927 01:45:49.642326   69234 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0927 01:45:49.642366   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:45:49.642393   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-245911 minikube.k8s.io/updated_at=2024_09_27T01_45_49_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625 minikube.k8s.io/name=embed-certs-245911 minikube.k8s.io/primary=true
	I0927 01:45:49.677602   69234 ops.go:34] apiserver oom_adj: -16
	I0927 01:45:49.854320   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:45:50.355392   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:45:48.520718   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:50.520908   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:53.020638   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:50.854364   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:45:51.355074   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:45:51.855077   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:45:52.354509   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:45:52.855229   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:45:53.355204   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:45:53.854829   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:45:54.066909   69234 kubeadm.go:1113] duration metric: took 4.424595735s to wait for elevateKubeSystemPrivileges
	I0927 01:45:54.066954   69234 kubeadm.go:394] duration metric: took 4m55.454404762s to StartCluster
	I0927 01:45:54.066978   69234 settings.go:142] acquiring lock: {Name:mk5dca3ab86dd3a71947d9d84c3d32131258c6f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:45:54.067071   69234 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 01:45:54.069732   69234 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/kubeconfig: {Name:mke01ed683bdb96463571316956510763878395f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:45:54.070048   69234 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.158 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 01:45:54.070126   69234 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0927 01:45:54.070235   69234 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-245911"
	I0927 01:45:54.070257   69234 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-245911"
	I0927 01:45:54.070261   69234 addons.go:69] Setting default-storageclass=true in profile "embed-certs-245911"
	I0927 01:45:54.070270   69234 config.go:182] Loaded profile config "embed-certs-245911": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 01:45:54.070270   69234 addons.go:69] Setting metrics-server=true in profile "embed-certs-245911"
	I0927 01:45:54.070286   69234 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-245911"
	I0927 01:45:54.070296   69234 addons.go:234] Setting addon metrics-server=true in "embed-certs-245911"
	W0927 01:45:54.070305   69234 addons.go:243] addon metrics-server should already be in state true
	W0927 01:45:54.070266   69234 addons.go:243] addon storage-provisioner should already be in state true
	I0927 01:45:54.070339   69234 host.go:66] Checking if "embed-certs-245911" exists ...
	I0927 01:45:54.070339   69234 host.go:66] Checking if "embed-certs-245911" exists ...
	I0927 01:45:54.070750   69234 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:45:54.070790   69234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:45:54.070753   69234 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:45:54.070850   69234 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:45:54.070889   69234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:45:54.070936   69234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:45:54.071693   69234 out.go:177] * Verifying Kubernetes components...
	I0927 01:45:54.073034   69234 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:45:54.087559   69234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38159
	I0927 01:45:54.087567   69234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46827
	I0927 01:45:54.088061   69234 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:45:54.088074   69234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37787
	I0927 01:45:54.088183   69234 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:45:54.088412   69234 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:45:54.088551   69234 main.go:141] libmachine: Using API Version  1
	I0927 01:45:54.088573   69234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:45:54.088635   69234 main.go:141] libmachine: Using API Version  1
	I0927 01:45:54.088655   69234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:45:54.088852   69234 main.go:141] libmachine: Using API Version  1
	I0927 01:45:54.088874   69234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:45:54.088929   69234 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:45:54.089023   69234 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:45:54.089131   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetState
	I0927 01:45:54.089193   69234 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:45:54.089585   69234 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:45:54.089610   69234 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:45:54.089627   69234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:45:54.089639   69234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:45:54.092683   69234 addons.go:234] Setting addon default-storageclass=true in "embed-certs-245911"
	W0927 01:45:54.092705   69234 addons.go:243] addon default-storageclass should already be in state true
	I0927 01:45:54.092729   69234 host.go:66] Checking if "embed-certs-245911" exists ...
	I0927 01:45:54.093065   69234 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:45:54.093102   69234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:45:54.106496   69234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40273
	I0927 01:45:54.106952   69234 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:45:54.107486   69234 main.go:141] libmachine: Using API Version  1
	I0927 01:45:54.107513   69234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:45:54.108098   69234 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:45:54.108297   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetState
	I0927 01:45:54.109993   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	I0927 01:45:54.110532   69234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35519
	I0927 01:45:54.111066   69234 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:45:54.111688   69234 main.go:141] libmachine: Using API Version  1
	I0927 01:45:54.111708   69234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:45:54.111909   69234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35983
	I0927 01:45:54.112156   69234 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:45:54.112338   69234 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:45:54.112740   69234 main.go:141] libmachine: Using API Version  1
	I0927 01:45:54.112751   69234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:45:54.112832   69234 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:45:54.112953   69234 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:45:54.112987   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetState
	I0927 01:45:54.113345   69234 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:45:54.113372   69234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:45:54.114353   69234 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 01:45:54.114372   69234 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0927 01:45:54.114392   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:45:54.114596   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	I0927 01:45:54.116175   69234 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0927 01:45:51.806801   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:53.808476   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:54.117315   69234 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0927 01:45:54.117326   69234 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0927 01:45:54.117341   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:45:54.120242   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:45:54.120881   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:45:54.120903   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:45:54.121161   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:45:54.121224   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:45:54.121452   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:45:54.121658   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:45:54.121747   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:45:54.121944   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:45:54.121960   69234 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/embed-certs-245911/id_rsa Username:docker}
	I0927 01:45:54.121677   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:45:54.122386   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:45:54.122518   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:45:54.122695   69234 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/embed-certs-245911/id_rsa Username:docker}
	I0927 01:45:54.135920   69234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37351
	I0927 01:45:54.136247   69234 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:45:54.136682   69234 main.go:141] libmachine: Using API Version  1
	I0927 01:45:54.136696   69234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:45:54.136971   69234 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:45:54.137163   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetState
	I0927 01:45:54.138640   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	I0927 01:45:54.138903   69234 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0927 01:45:54.138919   69234 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0927 01:45:54.138936   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:45:54.141420   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:45:54.141786   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:45:54.141803   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:45:54.141966   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:45:54.142132   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:45:54.142235   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:45:54.142308   69234 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/embed-certs-245911/id_rsa Username:docker}
	I0927 01:45:54.325790   69234 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 01:45:54.375616   69234 node_ready.go:35] waiting up to 6m0s for node "embed-certs-245911" to be "Ready" ...
	I0927 01:45:54.386626   69234 node_ready.go:49] node "embed-certs-245911" has status "Ready":"True"
	I0927 01:45:54.386646   69234 node_ready.go:38] duration metric: took 10.995073ms for node "embed-certs-245911" to be "Ready" ...
	I0927 01:45:54.386654   69234 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:45:54.394605   69234 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-t4mxw" in "kube-system" namespace to be "Ready" ...
	I0927 01:45:54.458245   69234 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 01:45:54.501624   69234 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0927 01:45:54.501655   69234 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0927 01:45:54.508690   69234 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0927 01:45:54.548168   69234 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0927 01:45:54.548194   69234 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0927 01:45:54.615565   69234 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 01:45:54.615591   69234 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0927 01:45:54.655649   69234 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
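	In the failing StartStop runs interleaved above, the metrics-server pod created by these manifests never reports Ready. A quick manual check of the addon after this apply step (plain kubectl against the profile's context; the k8s-app=metrics-server label is assumed from the upstream addon manifests):
	
	  # Deployment and pod state for the addon
	  kubectl --context embed-certs-245911 -n kube-system get deploy,pods -l k8s-app=metrics-server
	  # Events on a not-Ready pod usually name the failing probe or image pull
	  kubectl --context embed-certs-245911 -n kube-system describe pod -l k8s-app=metrics-server
	  # Once the aggregated API is up, this starts returning node metrics
	  kubectl --context embed-certs-245911 top nodes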
	I0927 01:45:55.488749   69234 main.go:141] libmachine: Making call to close driver server
	I0927 01:45:55.488849   69234 main.go:141] libmachine: (embed-certs-245911) Calling .Close
	I0927 01:45:55.488803   69234 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.030519069s)
	I0927 01:45:55.488934   69234 main.go:141] libmachine: Making call to close driver server
	I0927 01:45:55.488942   69234 main.go:141] libmachine: (embed-certs-245911) Calling .Close
	I0927 01:45:55.489266   69234 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:45:55.489282   69234 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:45:55.489290   69234 main.go:141] libmachine: Making call to close driver server
	I0927 01:45:55.489298   69234 main.go:141] libmachine: (embed-certs-245911) Calling .Close
	I0927 01:45:55.489377   69234 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:45:55.489393   69234 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:45:55.489401   69234 main.go:141] libmachine: Making call to close driver server
	I0927 01:45:55.489409   69234 main.go:141] libmachine: (embed-certs-245911) Calling .Close
	I0927 01:45:55.489511   69234 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:45:55.489528   69234 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:45:55.489540   69234 main.go:141] libmachine: (embed-certs-245911) DBG | Closing plugin on server side
	I0927 01:45:55.491047   69234 main.go:141] libmachine: (embed-certs-245911) DBG | Closing plugin on server side
	I0927 01:45:55.491082   69234 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:45:55.491093   69234 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:45:55.535220   69234 main.go:141] libmachine: Making call to close driver server
	I0927 01:45:55.535240   69234 main.go:141] libmachine: (embed-certs-245911) Calling .Close
	I0927 01:45:55.535604   69234 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:45:55.535625   69234 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:45:55.627642   69234 main.go:141] libmachine: Making call to close driver server
	I0927 01:45:55.627663   69234 main.go:141] libmachine: (embed-certs-245911) Calling .Close
	I0927 01:45:55.628020   69234 main.go:141] libmachine: (embed-certs-245911) DBG | Closing plugin on server side
	I0927 01:45:55.628025   69234 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:45:55.628047   69234 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:45:55.628055   69234 main.go:141] libmachine: Making call to close driver server
	I0927 01:45:55.628062   69234 main.go:141] libmachine: (embed-certs-245911) Calling .Close
	I0927 01:45:55.628294   69234 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:45:55.628311   69234 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:45:55.628322   69234 addons.go:475] Verifying addon metrics-server=true in "embed-certs-245911"
	I0927 01:45:55.629802   69234 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0927 01:45:55.022054   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:57.520749   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:56.307903   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:58.807972   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:55.631245   69234 addons.go:510] duration metric: took 1.561128577s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0927 01:45:56.401813   69234 pod_ready.go:103] pod "coredns-7c65d6cfc9-t4mxw" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:58.900688   69234 pod_ready.go:103] pod "coredns-7c65d6cfc9-t4mxw" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:59.521353   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:00.014813   69534 pod_ready.go:82] duration metric: took 4m0.000584515s for pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace to be "Ready" ...
	E0927 01:46:00.014858   69534 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace to be "Ready" (will not retry!)
	I0927 01:46:00.014878   69534 pod_ready.go:39] duration metric: took 4m13.043107791s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:46:00.014903   69534 kubeadm.go:597] duration metric: took 4m20.409702758s to restartPrimaryControlPlane
	W0927 01:46:00.014956   69534 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0927 01:46:00.014980   69534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0927 01:46:00.808408   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:02.808672   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:00.901714   69234 pod_ready.go:103] pod "coredns-7c65d6cfc9-t4mxw" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:02.902242   69234 pod_ready.go:103] pod "coredns-7c65d6cfc9-t4mxw" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:03.401910   69234 pod_ready.go:93] pod "coredns-7c65d6cfc9-t4mxw" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:03.401936   69234 pod_ready.go:82] duration metric: took 9.007296678s for pod "coredns-7c65d6cfc9-t4mxw" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:03.401948   69234 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-zp5f2" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:03.908874   69234 pod_ready.go:93] pod "coredns-7c65d6cfc9-zp5f2" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:03.908896   69234 pod_ready.go:82] duration metric: took 506.941437ms for pod "coredns-7c65d6cfc9-zp5f2" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:03.908918   69234 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:03.914117   69234 pod_ready.go:93] pod "etcd-embed-certs-245911" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:03.914135   69234 pod_ready.go:82] duration metric: took 5.210078ms for pod "etcd-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:03.914142   69234 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:03.918778   69234 pod_ready.go:93] pod "kube-apiserver-embed-certs-245911" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:03.918801   69234 pod_ready.go:82] duration metric: took 4.651828ms for pod "kube-apiserver-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:03.918812   69234 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:03.923979   69234 pod_ready.go:93] pod "kube-controller-manager-embed-certs-245911" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:03.923996   69234 pod_ready.go:82] duration metric: took 5.176348ms for pod "kube-controller-manager-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:03.924004   69234 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5l299" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:04.199586   69234 pod_ready.go:93] pod "kube-proxy-5l299" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:04.199612   69234 pod_ready.go:82] duration metric: took 275.601068ms for pod "kube-proxy-5l299" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:04.199621   69234 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:04.598852   69234 pod_ready.go:93] pod "kube-scheduler-embed-certs-245911" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:04.598880   69234 pod_ready.go:82] duration metric: took 399.251298ms for pod "kube-scheduler-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:04.598890   69234 pod_ready.go:39] duration metric: took 10.212226661s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
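	For context, the pod_ready waits above poll each system-critical pod until its PodReady condition reports True. A rough client-go sketch of that check follows; waitPodReady is a hypothetical helper written for illustration, not minikube's own code.

    package readiness

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls a pod until its PodReady condition is True or the
    // timeout expires, mirroring the per-pod waits in the log above.
    func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) bool {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        return true
                    }
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return false
    }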
	I0927 01:46:04.598905   69234 api_server.go:52] waiting for apiserver process to appear ...
	I0927 01:46:04.598962   69234 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:46:04.615194   69234 api_server.go:72] duration metric: took 10.545103977s to wait for apiserver process to appear ...
	I0927 01:46:04.615225   69234 api_server.go:88] waiting for apiserver healthz status ...
	I0927 01:46:04.615248   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:46:04.621164   69234 api_server.go:279] https://192.168.39.158:8443/healthz returned 200:
	ok
	I0927 01:46:04.622001   69234 api_server.go:141] control plane version: v1.31.1
	I0927 01:46:04.622022   69234 api_server.go:131] duration metric: took 6.789717ms to wait for apiserver health ...
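	The healthz wait above simply polls the apiserver's /healthz endpoint (here https://192.168.39.158:8443/healthz) until it answers HTTP 200. A minimal Go sketch of such a probe, assuming a hypothetical waitHealthz helper; TLS verification is skipped purely for brevity, a real check should trust the cluster CA.

    package healthcheck

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitHealthz polls url until it returns 200 or the timeout expires.
    func waitHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        for deadline := time.Now().Add(timeout); time.Now().Before(deadline); time.Sleep(500 * time.Millisecond) {
            resp, err := client.Get(url)
            if err != nil {
                continue
            }
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                return nil // healthz returned 200, as in the log above
            }
        }
        return fmt.Errorf("apiserver not healthy within %s", timeout)
    }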
	I0927 01:46:04.622032   69234 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 01:46:04.802641   69234 system_pods.go:59] 9 kube-system pods found
	I0927 01:46:04.802674   69234 system_pods.go:61] "coredns-7c65d6cfc9-t4mxw" [b3f9faa4-be80-40bf-9080-363fcbf3f084] Running
	I0927 01:46:04.802681   69234 system_pods.go:61] "coredns-7c65d6cfc9-zp5f2" [0829b4a4-1686-4f22-8368-65e3897604b0] Running
	I0927 01:46:04.802687   69234 system_pods.go:61] "etcd-embed-certs-245911" [8b1eb68b-4d88-4af3-a5df-3a6490d9d376] Running
	I0927 01:46:04.802693   69234 system_pods.go:61] "kube-apiserver-embed-certs-245911" [05ddc1b7-f7a9-4201-8d2e-2eb57d4e6731] Running
	I0927 01:46:04.802699   69234 system_pods.go:61] "kube-controller-manager-embed-certs-245911" [71c7cdfd-5e67-4876-9c00-31fff46c2b37] Running
	I0927 01:46:04.802703   69234 system_pods.go:61] "kube-proxy-5l299" [768ae3f5-2ebd-4db7-aa36-81c4f033d685] Running
	I0927 01:46:04.802708   69234 system_pods.go:61] "kube-scheduler-embed-certs-245911" [4111a186-de42-4004-bcdc-3e445142fca0] Running
	I0927 01:46:04.802717   69234 system_pods.go:61] "metrics-server-6867b74b74-k28wz" [1d369542-c088-4099-aa6f-9d3158f78f25] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 01:46:04.802722   69234 system_pods.go:61] "storage-provisioner" [0c48d125-370c-44a1-9ede-536881b40d57] Running
	I0927 01:46:04.802735   69234 system_pods.go:74] duration metric: took 180.694209ms to wait for pod list to return data ...
	I0927 01:46:04.802747   69234 default_sa.go:34] waiting for default service account to be created ...
	I0927 01:46:04.999578   69234 default_sa.go:45] found service account: "default"
	I0927 01:46:04.999603   69234 default_sa.go:55] duration metric: took 196.845725ms for default service account to be created ...
	I0927 01:46:04.999612   69234 system_pods.go:116] waiting for k8s-apps to be running ...
	I0927 01:46:05.201201   69234 system_pods.go:86] 9 kube-system pods found
	I0927 01:46:05.201228   69234 system_pods.go:89] "coredns-7c65d6cfc9-t4mxw" [b3f9faa4-be80-40bf-9080-363fcbf3f084] Running
	I0927 01:46:05.201233   69234 system_pods.go:89] "coredns-7c65d6cfc9-zp5f2" [0829b4a4-1686-4f22-8368-65e3897604b0] Running
	I0927 01:46:05.201237   69234 system_pods.go:89] "etcd-embed-certs-245911" [8b1eb68b-4d88-4af3-a5df-3a6490d9d376] Running
	I0927 01:46:05.201241   69234 system_pods.go:89] "kube-apiserver-embed-certs-245911" [05ddc1b7-f7a9-4201-8d2e-2eb57d4e6731] Running
	I0927 01:46:05.201244   69234 system_pods.go:89] "kube-controller-manager-embed-certs-245911" [71c7cdfd-5e67-4876-9c00-31fff46c2b37] Running
	I0927 01:46:05.201248   69234 system_pods.go:89] "kube-proxy-5l299" [768ae3f5-2ebd-4db7-aa36-81c4f033d685] Running
	I0927 01:46:05.201251   69234 system_pods.go:89] "kube-scheduler-embed-certs-245911" [4111a186-de42-4004-bcdc-3e445142fca0] Running
	I0927 01:46:05.201256   69234 system_pods.go:89] "metrics-server-6867b74b74-k28wz" [1d369542-c088-4099-aa6f-9d3158f78f25] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 01:46:05.201260   69234 system_pods.go:89] "storage-provisioner" [0c48d125-370c-44a1-9ede-536881b40d57] Running
	I0927 01:46:05.201268   69234 system_pods.go:126] duration metric: took 201.651734ms to wait for k8s-apps to be running ...
	I0927 01:46:05.201275   69234 system_svc.go:44] waiting for kubelet service to be running ....
	I0927 01:46:05.201315   69234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 01:46:05.216216   69234 system_svc.go:56] duration metric: took 14.930697ms WaitForService to wait for kubelet
	I0927 01:46:05.216248   69234 kubeadm.go:582] duration metric: took 11.146166369s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 01:46:05.216271   69234 node_conditions.go:102] verifying NodePressure condition ...
	I0927 01:46:05.400667   69234 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 01:46:05.400695   69234 node_conditions.go:123] node cpu capacity is 2
	I0927 01:46:05.400708   69234 node_conditions.go:105] duration metric: took 184.432904ms to run NodePressure ...
	I0927 01:46:05.400719   69234 start.go:241] waiting for startup goroutines ...
	I0927 01:46:05.400729   69234 start.go:246] waiting for cluster config update ...
	I0927 01:46:05.400743   69234 start.go:255] writing updated cluster config ...
	I0927 01:46:05.401134   69234 ssh_runner.go:195] Run: rm -f paused
	I0927 01:46:05.452606   69234 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0927 01:46:05.454631   69234 out.go:177] * Done! kubectl is now configured to use "embed-certs-245911" cluster and "default" namespace by default
	I0927 01:46:05.307371   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:07.807981   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:07.393548   69333 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0927 01:46:07.394304   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:46:07.394505   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:46:10.307311   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:12.308085   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:14.308664   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:12.395176   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:46:12.395434   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:46:16.807116   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:18.807652   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:21.307348   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:23.807597   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:26.304067   69534 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.289064717s)
	I0927 01:46:26.304150   69534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 01:46:26.341383   69534 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 01:46:26.365985   69534 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 01:46:26.382056   69534 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 01:46:26.382082   69534 kubeadm.go:157] found existing configuration files:
	
	I0927 01:46:26.382133   69534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0927 01:46:26.405820   69534 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 01:46:26.405881   69534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 01:46:26.416355   69534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0927 01:46:26.426710   69534 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 01:46:26.426759   69534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 01:46:26.438110   69534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0927 01:46:26.448631   69534 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 01:46:26.448691   69534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 01:46:26.458453   69534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0927 01:46:26.467677   69534 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 01:46:26.467724   69534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
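	The grep/rm sequence above checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint (https://control-plane.minikube.internal:8444) and removes any file that does not reference it, so the subsequent kubeadm init can rewrite them. A hedged Go sketch of the same idea; cleanStaleConfigs is a hypothetical helper, not the actual minikube function.

    package kubeconfig

    import (
        "os"
        "strings"
    )

    // cleanStaleConfigs deletes any config file that does not mention the
    // expected endpoint, mirroring the `sudo grep ... || sudo rm -f ...`
    // pattern in the log above.
    func cleanStaleConfigs(endpoint string, paths []string) {
        for _, p := range paths {
            data, err := os.ReadFile(p)
            if err != nil || !strings.Contains(string(data), endpoint) {
                _ = os.Remove(p)
            }
        }
    }

	For example, the run above effectively calls it with endpoint "https://control-plane.minikube.internal:8444" and the four files admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf.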
	I0927 01:46:26.478333   69534 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0927 01:46:26.528377   69534 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0927 01:46:26.528432   69534 kubeadm.go:310] [preflight] Running pre-flight checks
	I0927 01:46:26.653799   69534 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0927 01:46:26.653904   69534 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0927 01:46:26.654029   69534 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0927 01:46:26.666791   69534 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0927 01:46:22.395858   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:46:22.396073   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:46:26.668660   69534 out.go:235]   - Generating certificates and keys ...
	I0927 01:46:26.668739   69534 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0927 01:46:26.668803   69534 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0927 01:46:26.668918   69534 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0927 01:46:26.669012   69534 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0927 01:46:26.669103   69534 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0927 01:46:26.669178   69534 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0927 01:46:26.669308   69534 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0927 01:46:26.669628   69534 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0927 01:46:26.669868   69534 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0927 01:46:26.670086   69534 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0927 01:46:26.670284   69534 kubeadm.go:310] [certs] Using the existing "sa" key
	I0927 01:46:26.670395   69534 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0927 01:46:26.885345   69534 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0927 01:46:27.061416   69534 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0927 01:46:27.347409   69534 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0927 01:46:27.477340   69534 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0927 01:46:27.607326   69534 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0927 01:46:27.607882   69534 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0927 01:46:27.612459   69534 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0927 01:46:27.614167   69534 out.go:235]   - Booting up control plane ...
	I0927 01:46:27.614285   69534 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0927 01:46:27.614388   69534 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0927 01:46:27.614482   69534 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0927 01:46:27.635734   69534 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0927 01:46:27.642550   69534 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0927 01:46:27.642634   69534 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0927 01:46:27.778616   69534 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0927 01:46:27.778763   69534 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0927 01:46:28.280057   69534 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.328597ms
	I0927 01:46:28.280185   69534 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0927 01:46:25.808311   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:28.307033   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:33.781107   69534 kubeadm.go:310] [api-check] The API server is healthy after 5.501552407s
	I0927 01:46:33.796672   69534 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0927 01:46:33.809900   69534 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0927 01:46:33.845968   69534 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0927 01:46:33.846194   69534 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-368295 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0927 01:46:33.862294   69534 kubeadm.go:310] [bootstrap-token] Using token: qmzafx.lhyo0l65zryygr2x
	I0927 01:46:30.308436   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:32.809032   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:32.809057   68676 pod_ready.go:82] duration metric: took 4m0.007962887s for pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace to be "Ready" ...
	E0927 01:46:32.809066   68676 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0927 01:46:32.809075   68676 pod_ready.go:39] duration metric: took 4m5.043455674s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:46:32.809088   68676 api_server.go:52] waiting for apiserver process to appear ...
	I0927 01:46:32.809115   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:46:32.809175   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:46:32.871610   68676 cri.go:89] found id: "d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef"
	I0927 01:46:32.871629   68676 cri.go:89] found id: ""
	I0927 01:46:32.871636   68676 logs.go:276] 1 containers: [d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef]
	I0927 01:46:32.871682   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:32.878223   68676 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:46:32.878296   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:46:32.925139   68676 cri.go:89] found id: "703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0"
	I0927 01:46:32.925173   68676 cri.go:89] found id: ""
	I0927 01:46:32.925182   68676 logs.go:276] 1 containers: [703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0]
	I0927 01:46:32.925238   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:32.929961   68676 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:46:32.930023   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:46:32.969777   68676 cri.go:89] found id: "5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0"
	I0927 01:46:32.969799   68676 cri.go:89] found id: ""
	I0927 01:46:32.969807   68676 logs.go:276] 1 containers: [5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0]
	I0927 01:46:32.969854   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:32.979003   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:46:32.979088   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:46:33.029458   68676 cri.go:89] found id: "22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05"
	I0927 01:46:33.029532   68676 cri.go:89] found id: ""
	I0927 01:46:33.029546   68676 logs.go:276] 1 containers: [22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05]
	I0927 01:46:33.029609   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:33.036703   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:46:33.036777   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:46:33.085041   68676 cri.go:89] found id: "d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f"
	I0927 01:46:33.085058   68676 cri.go:89] found id: ""
	I0927 01:46:33.085065   68676 logs.go:276] 1 containers: [d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f]
	I0927 01:46:33.085125   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:33.090305   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:46:33.090372   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:46:33.136837   68676 cri.go:89] found id: "56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647"
	I0927 01:46:33.136857   68676 cri.go:89] found id: ""
	I0927 01:46:33.136865   68676 logs.go:276] 1 containers: [56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647]
	I0927 01:46:33.136913   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:33.141483   68676 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:46:33.141543   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:46:33.182913   68676 cri.go:89] found id: ""
	I0927 01:46:33.182939   68676 logs.go:276] 0 containers: []
	W0927 01:46:33.182950   68676 logs.go:278] No container was found matching "kindnet"
	I0927 01:46:33.182956   68676 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0927 01:46:33.183002   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0927 01:46:33.237031   68676 cri.go:89] found id: "8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f"
	I0927 01:46:33.237055   68676 cri.go:89] found id: "074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c"
	I0927 01:46:33.237061   68676 cri.go:89] found id: ""
	I0927 01:46:33.237070   68676 logs.go:276] 2 containers: [8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f 074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c]
	I0927 01:46:33.237121   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:33.241969   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:33.246733   68676 logs.go:123] Gathering logs for kube-apiserver [d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef] ...
	I0927 01:46:33.246760   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef"
	I0927 01:46:33.294096   68676 logs.go:123] Gathering logs for kube-controller-manager [56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647] ...
	I0927 01:46:33.294128   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647"
	I0927 01:46:33.357981   68676 logs.go:123] Gathering logs for storage-provisioner [074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c] ...
	I0927 01:46:33.358029   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c"
	I0927 01:46:33.397465   68676 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:46:33.397500   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:46:33.922831   68676 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:46:33.922869   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 01:46:34.067117   68676 logs.go:123] Gathering logs for dmesg ...
	I0927 01:46:34.067152   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:46:34.082191   68676 logs.go:123] Gathering logs for etcd [703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0] ...
	I0927 01:46:34.082218   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0"
	I0927 01:46:34.126416   68676 logs.go:123] Gathering logs for coredns [5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0] ...
	I0927 01:46:34.126454   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0"
	I0927 01:46:34.166714   68676 logs.go:123] Gathering logs for kube-scheduler [22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05] ...
	I0927 01:46:34.166744   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05"
	I0927 01:46:34.206601   68676 logs.go:123] Gathering logs for kube-proxy [d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f] ...
	I0927 01:46:34.206642   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f"
	I0927 01:46:34.254352   68676 logs.go:123] Gathering logs for storage-provisioner [8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f] ...
	I0927 01:46:34.254383   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f"
	I0927 01:46:34.293318   68676 logs.go:123] Gathering logs for container status ...
	I0927 01:46:34.293347   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:46:34.340365   68676 logs.go:123] Gathering logs for kubelet ...
	I0927 01:46:34.340398   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
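	The diagnostics pass above lists container IDs per component with crictl, then pulls the last 400 log lines of each, plus the CRI-O and kubelet journald units. A rough Go sketch of that gathering loop; gatherLogs is hypothetical, but it shells out to the same crictl commands seen in the log.

    package diagnostics

    import (
        "os/exec"
        "strings"
    )

    // gatherLogs returns the tail of the logs for every container whose CRI
    // name matches `name`, keyed by container ID.
    func gatherLogs(name string) (map[string]string, error) {
        // e.g. sudo crictl ps -a --quiet --name=kube-apiserver
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        logs := map[string]string{}
        for _, id := range strings.Fields(string(out)) {
            // e.g. sudo crictl logs --tail 400 <container-id>
            l, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
            if err != nil {
                continue
            }
            logs[id] = string(l)
        }
        return logs, nil
    }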
	I0927 01:46:33.863782   69534 out.go:235]   - Configuring RBAC rules ...
	I0927 01:46:33.863922   69534 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0927 01:46:33.871841   69534 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0927 01:46:33.880047   69534 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0927 01:46:33.884688   69534 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0927 01:46:33.892057   69534 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0927 01:46:33.895787   69534 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0927 01:46:34.190553   69534 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0927 01:46:34.619922   69534 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0927 01:46:35.188452   69534 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0927 01:46:35.189552   69534 kubeadm.go:310] 
	I0927 01:46:35.189661   69534 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0927 01:46:35.189683   69534 kubeadm.go:310] 
	I0927 01:46:35.189791   69534 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0927 01:46:35.189806   69534 kubeadm.go:310] 
	I0927 01:46:35.189845   69534 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0927 01:46:35.189925   69534 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0927 01:46:35.190002   69534 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0927 01:46:35.190016   69534 kubeadm.go:310] 
	I0927 01:46:35.190095   69534 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0927 01:46:35.190104   69534 kubeadm.go:310] 
	I0927 01:46:35.190181   69534 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0927 01:46:35.190193   69534 kubeadm.go:310] 
	I0927 01:46:35.190264   69534 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0927 01:46:35.190387   69534 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0927 01:46:35.190484   69534 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0927 01:46:35.190498   69534 kubeadm.go:310] 
	I0927 01:46:35.190593   69534 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0927 01:46:35.190681   69534 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0927 01:46:35.190691   69534 kubeadm.go:310] 
	I0927 01:46:35.190793   69534 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token qmzafx.lhyo0l65zryygr2x \
	I0927 01:46:35.190948   69534 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e8f2b64245b2533f2f6907f851e4fd61c8df67374e05cb5be25be73a61920f8e \
	I0927 01:46:35.191002   69534 kubeadm.go:310] 	--control-plane 
	I0927 01:46:35.191021   69534 kubeadm.go:310] 
	I0927 01:46:35.191134   69534 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0927 01:46:35.191155   69534 kubeadm.go:310] 
	I0927 01:46:35.191281   69534 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token qmzafx.lhyo0l65zryygr2x \
	I0927 01:46:35.191427   69534 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e8f2b64245b2533f2f6907f851e4fd61c8df67374e05cb5be25be73a61920f8e 
	I0927 01:46:35.192564   69534 kubeadm.go:310] W0927 01:46:26.480521    2541 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 01:46:35.192905   69534 kubeadm.go:310] W0927 01:46:26.481198    2541 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 01:46:35.193078   69534 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0927 01:46:35.193093   69534 cni.go:84] Creating CNI manager for ""
	I0927 01:46:35.193102   69534 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:46:35.194656   69534 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0927 01:46:35.195835   69534 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0927 01:46:35.207162   69534 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
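	The 496-byte file staged above is a CNI conflist for the bridge plugin. Its exact contents are not shown in the log; an illustrative bridge conflist (values are examples only, not the file minikube wrote) looks roughly like:

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }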
	I0927 01:46:35.225999   69534 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0927 01:46:35.226096   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-368295 minikube.k8s.io/updated_at=2024_09_27T01_46_35_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625 minikube.k8s.io/name=default-k8s-diff-port-368295 minikube.k8s.io/primary=true
	I0927 01:46:35.226096   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:46:35.258203   69534 ops.go:34] apiserver oom_adj: -16
	I0927 01:46:35.425367   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:46:35.926435   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:46:36.425611   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:46:36.925505   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:46:37.426329   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:46:37.926184   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:46:38.425745   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:46:38.925572   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:46:39.425831   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:46:39.508783   69534 kubeadm.go:1113] duration metric: took 4.282764601s to wait for elevateKubeSystemPrivileges
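	The elevateKubeSystemPrivileges step above polls `kubectl get sa default` until the default service account exists and then creates the minikube-rbac ClusterRoleBinding granting cluster-admin to kube-system:default, as the kubectl invocations in the log show. An equivalent client-go sketch; elevatePrivileges is a hypothetical helper, not minikube's implementation.

    package bootstrap

    import (
        "context"
        "time"

        rbacv1 "k8s.io/api/rbac/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // elevatePrivileges waits for the "default" ServiceAccount, then binds
    // cluster-admin to kube-system:default via a ClusterRoleBinding named
    // "minikube-rbac", matching the kubectl calls in the log above.
    func elevatePrivileges(cs *kubernetes.Clientset) error {
        ctx := context.TODO()
        for i := 0; i < 20; i++ {
            if _, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{}); err == nil {
                break
            }
            time.Sleep(500 * time.Millisecond)
        }
        crb := &rbacv1.ClusterRoleBinding{
            ObjectMeta: metav1.ObjectMeta{Name: "minikube-rbac"},
            RoleRef:    rbacv1.RoleRef{APIGroup: "rbac.authorization.k8s.io", Kind: "ClusterRole", Name: "cluster-admin"},
            Subjects:   []rbacv1.Subject{{Kind: "ServiceAccount", Name: "default", Namespace: "kube-system"}},
        }
        _, err := cs.RbacV1().ClusterRoleBindings().Create(ctx, crb, metav1.CreateOptions{})
        return err
    }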
	I0927 01:46:39.508817   69534 kubeadm.go:394] duration metric: took 4m59.95903234s to StartCluster
	I0927 01:46:39.508838   69534 settings.go:142] acquiring lock: {Name:mk5dca3ab86dd3a71947d9d84c3d32131258c6f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:46:39.508930   69534 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 01:46:39.510771   69534 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/kubeconfig: {Name:mke01ed683bdb96463571316956510763878395f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:46:39.511005   69534 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.83 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 01:46:39.511071   69534 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0927 01:46:39.511194   69534 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-368295"
	I0927 01:46:39.511214   69534 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-368295"
	I0927 01:46:39.511230   69534 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-368295"
	I0927 01:46:39.511261   69534 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-368295"
	W0927 01:46:39.511276   69534 addons.go:243] addon metrics-server should already be in state true
	I0927 01:46:39.511325   69534 host.go:66] Checking if "default-k8s-diff-port-368295" exists ...
	I0927 01:46:39.511243   69534 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-368295"
	I0927 01:46:39.511225   69534 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-368295"
	W0927 01:46:39.511515   69534 addons.go:243] addon storage-provisioner should already be in state true
	I0927 01:46:39.511538   69534 host.go:66] Checking if "default-k8s-diff-port-368295" exists ...
	I0927 01:46:39.511223   69534 config.go:182] Loaded profile config "default-k8s-diff-port-368295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 01:46:39.511772   69534 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:46:39.511818   69534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:46:39.511844   69534 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:46:39.511772   69534 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:46:39.511877   69534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:46:39.511905   69534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:46:39.513051   69534 out.go:177] * Verifying Kubernetes components...
	I0927 01:46:39.514530   69534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:46:39.528031   69534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32777
	I0927 01:46:39.528033   69534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43693
	I0927 01:46:39.528446   69534 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:46:39.528603   69534 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:46:39.528997   69534 main.go:141] libmachine: Using API Version  1
	I0927 01:46:39.529022   69534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:46:39.529085   69534 main.go:141] libmachine: Using API Version  1
	I0927 01:46:39.529101   69534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:46:39.529210   69534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37121
	I0927 01:46:39.529421   69534 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:46:39.529721   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetState
	I0927 01:46:39.529743   69534 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:46:39.529724   69534 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:46:39.530304   69534 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:46:39.530358   69534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:46:39.530308   69534 main.go:141] libmachine: Using API Version  1
	I0927 01:46:39.530423   69534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:46:39.530762   69534 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:46:39.531337   69534 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:46:39.531389   69534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:46:39.533286   69534 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-368295"
	W0927 01:46:39.533306   69534 addons.go:243] addon default-storageclass should already be in state true
	I0927 01:46:39.533333   69534 host.go:66] Checking if "default-k8s-diff-port-368295" exists ...
	I0927 01:46:39.533656   69534 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:46:39.533692   69534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:46:39.546657   69534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44507
	I0927 01:46:39.546881   69534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42459
	I0927 01:46:39.547298   69534 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:46:39.547327   69534 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:46:39.547842   69534 main.go:141] libmachine: Using API Version  1
	I0927 01:46:39.547860   69534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:46:39.547860   69534 main.go:141] libmachine: Using API Version  1
	I0927 01:46:39.547876   69534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:46:39.548220   69534 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:46:39.548239   69534 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:46:39.548435   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetState
	I0927 01:46:39.548481   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetState
	I0927 01:46:39.550160   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:46:39.550384   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:46:39.550445   69534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41657
	I0927 01:46:39.550744   69534 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:46:39.551173   69534 main.go:141] libmachine: Using API Version  1
	I0927 01:46:39.551195   69534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:46:39.551525   69534 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:46:39.552620   69534 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:46:39.552652   69534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:46:39.552838   69534 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:46:39.552916   69534 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0927 01:46:36.914500   68676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:46:36.932340   68676 api_server.go:72] duration metric: took 4m14.883408931s to wait for apiserver process to appear ...
	I0927 01:46:36.932368   68676 api_server.go:88] waiting for apiserver healthz status ...
	I0927 01:46:36.932407   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:46:36.932465   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:46:36.967757   68676 cri.go:89] found id: "d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef"
	I0927 01:46:36.967780   68676 cri.go:89] found id: ""
	I0927 01:46:36.967787   68676 logs.go:276] 1 containers: [d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef]
	I0927 01:46:36.967832   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:36.972025   68676 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:46:36.972105   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:46:37.018403   68676 cri.go:89] found id: "703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0"
	I0927 01:46:37.018431   68676 cri.go:89] found id: ""
	I0927 01:46:37.018448   68676 logs.go:276] 1 containers: [703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0]
	I0927 01:46:37.018515   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:37.022868   68676 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:46:37.022925   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:46:37.062443   68676 cri.go:89] found id: "5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0"
	I0927 01:46:37.062466   68676 cri.go:89] found id: ""
	I0927 01:46:37.062474   68676 logs.go:276] 1 containers: [5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0]
	I0927 01:46:37.062534   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:37.066617   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:46:37.066674   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:46:37.101462   68676 cri.go:89] found id: "22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05"
	I0927 01:46:37.101489   68676 cri.go:89] found id: ""
	I0927 01:46:37.101500   68676 logs.go:276] 1 containers: [22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05]
	I0927 01:46:37.101557   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:37.105564   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:46:37.105620   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:46:37.143692   68676 cri.go:89] found id: "d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f"
	I0927 01:46:37.143719   68676 cri.go:89] found id: ""
	I0927 01:46:37.143729   68676 logs.go:276] 1 containers: [d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f]
	I0927 01:46:37.143775   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:37.148405   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:46:37.148484   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:46:37.184914   68676 cri.go:89] found id: "56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647"
	I0927 01:46:37.184943   68676 cri.go:89] found id: ""
	I0927 01:46:37.184954   68676 logs.go:276] 1 containers: [56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647]
	I0927 01:46:37.185013   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:37.189486   68676 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:46:37.189553   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:46:37.235389   68676 cri.go:89] found id: ""
	I0927 01:46:37.235416   68676 logs.go:276] 0 containers: []
	W0927 01:46:37.235424   68676 logs.go:278] No container was found matching "kindnet"
	I0927 01:46:37.235429   68676 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0927 01:46:37.235480   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0927 01:46:37.276239   68676 cri.go:89] found id: "8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f"
	I0927 01:46:37.276266   68676 cri.go:89] found id: "074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c"
	I0927 01:46:37.276272   68676 cri.go:89] found id: ""
	I0927 01:46:37.276282   68676 logs.go:276] 2 containers: [8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f 074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c]
	I0927 01:46:37.276338   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:37.280381   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:37.284423   68676 logs.go:123] Gathering logs for coredns [5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0] ...
	I0927 01:46:37.284440   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0"
	I0927 01:46:37.319790   68676 logs.go:123] Gathering logs for kube-scheduler [22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05] ...
	I0927 01:46:37.319816   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05"
	I0927 01:46:37.358818   68676 logs.go:123] Gathering logs for kube-proxy [d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f] ...
	I0927 01:46:37.358843   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f"
	I0927 01:46:37.398137   68676 logs.go:123] Gathering logs for kube-controller-manager [56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647] ...
	I0927 01:46:37.398168   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647"
	I0927 01:46:37.458672   68676 logs.go:123] Gathering logs for dmesg ...
	I0927 01:46:37.458720   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:46:37.476148   68676 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:46:37.476184   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 01:46:37.604190   68676 logs.go:123] Gathering logs for kube-apiserver [d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef] ...
	I0927 01:46:37.604223   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef"
	I0927 01:46:37.652633   68676 logs.go:123] Gathering logs for etcd [703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0] ...
	I0927 01:46:37.652671   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0"
	I0927 01:46:37.701240   68676 logs.go:123] Gathering logs for storage-provisioner [8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f] ...
	I0927 01:46:37.701273   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f"
	I0927 01:46:37.739555   68676 logs.go:123] Gathering logs for storage-provisioner [074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c] ...
	I0927 01:46:37.739583   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c"
	I0927 01:46:37.781721   68676 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:46:37.781750   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:46:38.209361   68676 logs.go:123] Gathering logs for container status ...
	I0927 01:46:38.209399   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:46:38.261628   68676 logs.go:123] Gathering logs for kubelet ...
	I0927 01:46:38.261658   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:46:39.554328   69534 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 01:46:39.554342   69534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0927 01:46:39.554362   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:46:39.554446   69534 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0927 01:46:39.554456   69534 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0927 01:46:39.554469   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:46:39.557886   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:46:39.557982   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:46:39.558093   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:46:39.558121   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:46:39.558269   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:46:39.558350   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:46:39.558369   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:46:39.558466   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:46:39.558620   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:46:39.558690   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:46:39.558740   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:46:39.558797   69534 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/default-k8s-diff-port-368295/id_rsa Username:docker}
	I0927 01:46:39.559026   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:46:39.559136   69534 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/default-k8s-diff-port-368295/id_rsa Username:docker}
	I0927 01:46:39.569570   69534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33177
	I0927 01:46:39.569981   69534 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:46:39.570364   69534 main.go:141] libmachine: Using API Version  1
	I0927 01:46:39.570383   69534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:46:39.570746   69534 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:46:39.570890   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetState
	I0927 01:46:39.572537   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:46:39.572779   69534 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0927 01:46:39.572795   69534 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0927 01:46:39.572815   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:46:39.575104   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:46:39.575384   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:46:39.575435   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:46:39.575595   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:46:39.575751   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:46:39.575844   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:46:39.575960   69534 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/default-k8s-diff-port-368295/id_rsa Username:docker}
	I0927 01:46:39.784965   69534 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 01:46:39.820986   69534 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-368295" to be "Ready" ...
	I0927 01:46:39.829323   69534 node_ready.go:49] node "default-k8s-diff-port-368295" has status "Ready":"True"
	I0927 01:46:39.829346   69534 node_ready.go:38] duration metric: took 8.333848ms for node "default-k8s-diff-port-368295" to be "Ready" ...
	I0927 01:46:39.829358   69534 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:46:39.836143   69534 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-4d7pk" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:39.940697   69534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0927 01:46:39.955239   69534 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0927 01:46:39.955264   69534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0927 01:46:40.076199   69534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 01:46:40.080720   69534 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0927 01:46:40.080746   69534 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0927 01:46:40.182698   69534 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 01:46:40.182720   69534 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0927 01:46:40.219231   69534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 01:46:40.431480   69534 main.go:141] libmachine: Making call to close driver server
	I0927 01:46:40.431505   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .Close
	I0927 01:46:40.431859   69534 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:46:40.431875   69534 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:46:40.431875   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | Closing plugin on server side
	I0927 01:46:40.431889   69534 main.go:141] libmachine: Making call to close driver server
	I0927 01:46:40.431898   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .Close
	I0927 01:46:40.432126   69534 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:46:40.432146   69534 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:46:40.432189   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | Closing plugin on server side
	I0927 01:46:40.442440   69534 main.go:141] libmachine: Making call to close driver server
	I0927 01:46:40.442468   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .Close
	I0927 01:46:40.442761   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | Closing plugin on server side
	I0927 01:46:40.442785   69534 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:46:40.442815   69534 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:46:41.044597   69534 main.go:141] libmachine: Making call to close driver server
	I0927 01:46:41.044627   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .Close
	I0927 01:46:41.044964   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | Closing plugin on server side
	I0927 01:46:41.045013   69534 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:46:41.045021   69534 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:46:41.045033   69534 main.go:141] libmachine: Making call to close driver server
	I0927 01:46:41.045041   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .Close
	I0927 01:46:41.045254   69534 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:46:41.045267   69534 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:46:41.427791   69534 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.208520131s)
	I0927 01:46:41.427843   69534 main.go:141] libmachine: Making call to close driver server
	I0927 01:46:41.427859   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .Close
	I0927 01:46:41.428175   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | Closing plugin on server side
	I0927 01:46:41.428184   69534 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:46:41.428196   69534 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:46:41.428205   69534 main.go:141] libmachine: Making call to close driver server
	I0927 01:46:41.428213   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .Close
	I0927 01:46:41.428477   69534 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:46:41.428490   69534 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:46:41.428500   69534 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-368295"
	I0927 01:46:41.430399   69534 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0927 01:46:41.431795   69534 addons.go:510] duration metric: took 1.920729429s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0927 01:46:41.844911   69534 pod_ready.go:103] pod "coredns-7c65d6cfc9-4d7pk" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:40.832698   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:46:40.838244   68676 api_server.go:279] https://192.168.50.246:8443/healthz returned 200:
	ok
	I0927 01:46:40.839252   68676 api_server.go:141] control plane version: v1.31.1
	I0927 01:46:40.839270   68676 api_server.go:131] duration metric: took 3.906895557s to wait for apiserver health ...
	I0927 01:46:40.839277   68676 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 01:46:40.839312   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:46:40.839373   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:46:40.879726   68676 cri.go:89] found id: "d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef"
	I0927 01:46:40.879753   68676 cri.go:89] found id: ""
	I0927 01:46:40.879763   68676 logs.go:276] 1 containers: [d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef]
	I0927 01:46:40.879822   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:40.884233   68676 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:46:40.884301   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:46:40.936189   68676 cri.go:89] found id: "703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0"
	I0927 01:46:40.936216   68676 cri.go:89] found id: ""
	I0927 01:46:40.936226   68676 logs.go:276] 1 containers: [703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0]
	I0927 01:46:40.936289   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:40.940805   68676 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:46:40.940885   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:46:40.978662   68676 cri.go:89] found id: "5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0"
	I0927 01:46:40.978683   68676 cri.go:89] found id: ""
	I0927 01:46:40.978693   68676 logs.go:276] 1 containers: [5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0]
	I0927 01:46:40.978757   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:40.983357   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:46:40.983428   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:46:41.027134   68676 cri.go:89] found id: "22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05"
	I0927 01:46:41.027160   68676 cri.go:89] found id: ""
	I0927 01:46:41.027170   68676 logs.go:276] 1 containers: [22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05]
	I0927 01:46:41.027229   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:41.031909   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:46:41.031986   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:46:41.077539   68676 cri.go:89] found id: "d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f"
	I0927 01:46:41.077568   68676 cri.go:89] found id: ""
	I0927 01:46:41.077577   68676 logs.go:276] 1 containers: [d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f]
	I0927 01:46:41.077638   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:41.082237   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:46:41.082314   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:46:41.122413   68676 cri.go:89] found id: "56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647"
	I0927 01:46:41.122437   68676 cri.go:89] found id: ""
	I0927 01:46:41.122446   68676 logs.go:276] 1 containers: [56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647]
	I0927 01:46:41.122501   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:41.127807   68676 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:46:41.127872   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:46:41.174287   68676 cri.go:89] found id: ""
	I0927 01:46:41.174320   68676 logs.go:276] 0 containers: []
	W0927 01:46:41.174331   68676 logs.go:278] No container was found matching "kindnet"
	I0927 01:46:41.174339   68676 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0927 01:46:41.174397   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0927 01:46:41.213192   68676 cri.go:89] found id: "8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f"
	I0927 01:46:41.213219   68676 cri.go:89] found id: "074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c"
	I0927 01:46:41.213225   68676 cri.go:89] found id: ""
	I0927 01:46:41.213234   68676 logs.go:276] 2 containers: [8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f 074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c]
	I0927 01:46:41.213298   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:41.218168   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:41.227165   68676 logs.go:123] Gathering logs for storage-provisioner [8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f] ...
	I0927 01:46:41.227194   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f"
	I0927 01:46:41.269538   68676 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:46:41.269571   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:46:41.691900   68676 logs.go:123] Gathering logs for dmesg ...
	I0927 01:46:41.691943   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:46:41.709639   68676 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:46:41.709682   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 01:46:41.829334   68676 logs.go:123] Gathering logs for etcd [703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0] ...
	I0927 01:46:41.829366   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0"
	I0927 01:46:41.886517   68676 logs.go:123] Gathering logs for kube-scheduler [22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05] ...
	I0927 01:46:41.886552   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05"
	I0927 01:46:41.933012   68676 logs.go:123] Gathering logs for kube-proxy [d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f] ...
	I0927 01:46:41.933035   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f"
	I0927 01:46:41.973881   68676 logs.go:123] Gathering logs for kube-controller-manager [56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647] ...
	I0927 01:46:41.973921   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647"
	I0927 01:46:42.032592   68676 logs.go:123] Gathering logs for container status ...
	I0927 01:46:42.032628   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:46:42.087817   68676 logs.go:123] Gathering logs for kubelet ...
	I0927 01:46:42.087856   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:46:42.162770   68676 logs.go:123] Gathering logs for kube-apiserver [d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef] ...
	I0927 01:46:42.162808   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef"
	I0927 01:46:42.213367   68676 logs.go:123] Gathering logs for coredns [5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0] ...
	I0927 01:46:42.213399   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0"
	I0927 01:46:42.254937   68676 logs.go:123] Gathering logs for storage-provisioner [074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c] ...
	I0927 01:46:42.254963   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c"
	I0927 01:46:44.804112   68676 system_pods.go:59] 8 kube-system pods found
	I0927 01:46:44.804146   68676 system_pods.go:61] "coredns-7c65d6cfc9-7q54t" [f320e945-a1d6-4109-a0cc-5bd4e3c1bfba] Running
	I0927 01:46:44.804153   68676 system_pods.go:61] "etcd-no-preload-521072" [6c63ce89-47bf-4d67-b5db-273a046c4b51] Running
	I0927 01:46:44.804158   68676 system_pods.go:61] "kube-apiserver-no-preload-521072" [e4804d4b-0532-46c7-8579-a829a6c5254c] Running
	I0927 01:46:44.804162   68676 system_pods.go:61] "kube-controller-manager-no-preload-521072" [5029e53b-ae24-41fb-aa58-14faf0440adb] Running
	I0927 01:46:44.804167   68676 system_pods.go:61] "kube-proxy-wkcb8" [ea79339c-b2f0-4cb8-ab57-4f13f689f504] Running
	I0927 01:46:44.804171   68676 system_pods.go:61] "kube-scheduler-no-preload-521072" [b70fd9f0-c131-4c13-b53f-46c650a5dcf8] Running
	I0927 01:46:44.804180   68676 system_pods.go:61] "metrics-server-6867b74b74-cc9pp" [a840ca52-d2b8-47a5-b379-30504658e0d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 01:46:44.804186   68676 system_pods.go:61] "storage-provisioner" [b4595dc3-c439-4615-95b7-2009476c779c] Running
	I0927 01:46:44.804196   68676 system_pods.go:74] duration metric: took 3.964911623s to wait for pod list to return data ...
	I0927 01:46:44.804208   68676 default_sa.go:34] waiting for default service account to be created ...
	I0927 01:46:44.807883   68676 default_sa.go:45] found service account: "default"
	I0927 01:46:44.807907   68676 default_sa.go:55] duration metric: took 3.689984ms for default service account to be created ...
	I0927 01:46:44.807917   68676 system_pods.go:116] waiting for k8s-apps to be running ...
	I0927 01:46:44.812135   68676 system_pods.go:86] 8 kube-system pods found
	I0927 01:46:44.812161   68676 system_pods.go:89] "coredns-7c65d6cfc9-7q54t" [f320e945-a1d6-4109-a0cc-5bd4e3c1bfba] Running
	I0927 01:46:44.812167   68676 system_pods.go:89] "etcd-no-preload-521072" [6c63ce89-47bf-4d67-b5db-273a046c4b51] Running
	I0927 01:46:44.812174   68676 system_pods.go:89] "kube-apiserver-no-preload-521072" [e4804d4b-0532-46c7-8579-a829a6c5254c] Running
	I0927 01:46:44.812178   68676 system_pods.go:89] "kube-controller-manager-no-preload-521072" [5029e53b-ae24-41fb-aa58-14faf0440adb] Running
	I0927 01:46:44.812185   68676 system_pods.go:89] "kube-proxy-wkcb8" [ea79339c-b2f0-4cb8-ab57-4f13f689f504] Running
	I0927 01:46:44.812190   68676 system_pods.go:89] "kube-scheduler-no-preload-521072" [b70fd9f0-c131-4c13-b53f-46c650a5dcf8] Running
	I0927 01:46:44.812200   68676 system_pods.go:89] "metrics-server-6867b74b74-cc9pp" [a840ca52-d2b8-47a5-b379-30504658e0d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 01:46:44.812209   68676 system_pods.go:89] "storage-provisioner" [b4595dc3-c439-4615-95b7-2009476c779c] Running
	I0927 01:46:44.812222   68676 system_pods.go:126] duration metric: took 4.297317ms to wait for k8s-apps to be running ...
	I0927 01:46:44.812234   68676 system_svc.go:44] waiting for kubelet service to be running ....
	I0927 01:46:44.812282   68676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 01:46:44.827911   68676 system_svc.go:56] duration metric: took 15.668154ms WaitForService to wait for kubelet
	I0927 01:46:44.827946   68676 kubeadm.go:582] duration metric: took 4m22.779012486s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 01:46:44.827964   68676 node_conditions.go:102] verifying NodePressure condition ...
	I0927 01:46:44.830688   68676 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 01:46:44.830707   68676 node_conditions.go:123] node cpu capacity is 2
	I0927 01:46:44.830716   68676 node_conditions.go:105] duration metric: took 2.747178ms to run NodePressure ...
	I0927 01:46:44.830725   68676 start.go:241] waiting for startup goroutines ...
	I0927 01:46:44.830732   68676 start.go:246] waiting for cluster config update ...
	I0927 01:46:44.830742   68676 start.go:255] writing updated cluster config ...
	I0927 01:46:44.830990   68676 ssh_runner.go:195] Run: rm -f paused
	I0927 01:46:44.881491   68676 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0927 01:46:44.884307   68676 out.go:177] * Done! kubectl is now configured to use "no-preload-521072" cluster and "default" namespace by default
	I0927 01:46:42.397038   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:46:42.397331   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:46:43.845539   69534 pod_ready.go:103] pod "coredns-7c65d6cfc9-4d7pk" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:46.343584   69534 pod_ready.go:103] pod "coredns-7c65d6cfc9-4d7pk" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:48.842505   69534 pod_ready.go:93] pod "coredns-7c65d6cfc9-4d7pk" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:48.842527   69534 pod_ready.go:82] duration metric: took 9.006354643s for pod "coredns-7c65d6cfc9-4d7pk" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:48.842537   69534 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qkbzv" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:48.846753   69534 pod_ready.go:93] pod "coredns-7c65d6cfc9-qkbzv" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:48.846771   69534 pod_ready.go:82] duration metric: took 4.228349ms for pod "coredns-7c65d6cfc9-qkbzv" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:48.846780   69534 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:48.851234   69534 pod_ready.go:93] pod "etcd-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:48.851256   69534 pod_ready.go:82] duration metric: took 4.468727ms for pod "etcd-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:48.851265   69534 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:48.855648   69534 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:48.855669   69534 pod_ready.go:82] duration metric: took 4.398439ms for pod "kube-apiserver-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:48.855678   69534 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:48.860882   69534 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:48.860898   69534 pod_ready.go:82] duration metric: took 5.214278ms for pod "kube-controller-manager-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:48.860906   69534 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kqjdq" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:49.241149   69534 pod_ready.go:93] pod "kube-proxy-kqjdq" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:49.241180   69534 pod_ready.go:82] duration metric: took 380.266777ms for pod "kube-proxy-kqjdq" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:49.241192   69534 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:49.642403   69534 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:49.642437   69534 pod_ready.go:82] duration metric: took 401.235952ms for pod "kube-scheduler-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:49.642448   69534 pod_ready.go:39] duration metric: took 9.813073515s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:46:49.642465   69534 api_server.go:52] waiting for apiserver process to appear ...
	I0927 01:46:49.642518   69534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:46:49.658847   69534 api_server.go:72] duration metric: took 10.147811957s to wait for apiserver process to appear ...
	I0927 01:46:49.658877   69534 api_server.go:88] waiting for apiserver healthz status ...
	I0927 01:46:49.658898   69534 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8444/healthz ...
	I0927 01:46:49.665899   69534 api_server.go:279] https://192.168.61.83:8444/healthz returned 200:
	ok
	I0927 01:46:49.666844   69534 api_server.go:141] control plane version: v1.31.1
	I0927 01:46:49.666867   69534 api_server.go:131] duration metric: took 7.982491ms to wait for apiserver health ...
	I0927 01:46:49.666876   69534 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 01:46:49.843377   69534 system_pods.go:59] 9 kube-system pods found
	I0927 01:46:49.843402   69534 system_pods.go:61] "coredns-7c65d6cfc9-4d7pk" [c84ab26c-2e13-437c-b059-43c8ca1d90c8] Running
	I0927 01:46:49.843408   69534 system_pods.go:61] "coredns-7c65d6cfc9-qkbzv" [e2725448-3f80-45d8-8bd8-49dcf8878f7e] Running
	I0927 01:46:49.843413   69534 system_pods.go:61] "etcd-default-k8s-diff-port-368295" [cf24c93c-bcff-4ffc-b7b2-8e69c070cf92] Running
	I0927 01:46:49.843417   69534 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-368295" [7cb4e15c-d20c-4f93-bf12-d2407edcc877] Running
	I0927 01:46:49.843420   69534 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-368295" [52bc69db-f7b9-40a2-9930-1b3bd321fecf] Running
	I0927 01:46:49.843425   69534 system_pods.go:61] "kube-proxy-kqjdq" [91b96945-0ffe-404f-a0d5-f8729d4248ce] Running
	I0927 01:46:49.843429   69534 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-368295" [bc16cdb1-6e5c-4d19-ab43-cd378a65184d] Running
	I0927 01:46:49.843437   69534 system_pods.go:61] "metrics-server-6867b74b74-d85zg" [579ae063-049c-423c-8f91-91fb4b32e4c3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 01:46:49.843443   69534 system_pods.go:61] "storage-provisioner" [aaa7a054-2eee-45ee-a9bc-c305e53e1273] Running
	I0927 01:46:49.843454   69534 system_pods.go:74] duration metric: took 176.572041ms to wait for pod list to return data ...
	I0927 01:46:49.843466   69534 default_sa.go:34] waiting for default service account to be created ...
	I0927 01:46:50.041031   69534 default_sa.go:45] found service account: "default"
	I0927 01:46:50.041053   69534 default_sa.go:55] duration metric: took 197.577565ms for default service account to be created ...
	I0927 01:46:50.041062   69534 system_pods.go:116] waiting for k8s-apps to be running ...
	I0927 01:46:50.243807   69534 system_pods.go:86] 9 kube-system pods found
	I0927 01:46:50.243834   69534 system_pods.go:89] "coredns-7c65d6cfc9-4d7pk" [c84ab26c-2e13-437c-b059-43c8ca1d90c8] Running
	I0927 01:46:50.243840   69534 system_pods.go:89] "coredns-7c65d6cfc9-qkbzv" [e2725448-3f80-45d8-8bd8-49dcf8878f7e] Running
	I0927 01:46:50.243845   69534 system_pods.go:89] "etcd-default-k8s-diff-port-368295" [cf24c93c-bcff-4ffc-b7b2-8e69c070cf92] Running
	I0927 01:46:50.243849   69534 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-368295" [7cb4e15c-d20c-4f93-bf12-d2407edcc877] Running
	I0927 01:46:50.243853   69534 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-368295" [52bc69db-f7b9-40a2-9930-1b3bd321fecf] Running
	I0927 01:46:50.243856   69534 system_pods.go:89] "kube-proxy-kqjdq" [91b96945-0ffe-404f-a0d5-f8729d4248ce] Running
	I0927 01:46:50.243860   69534 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-368295" [bc16cdb1-6e5c-4d19-ab43-cd378a65184d] Running
	I0927 01:46:50.243866   69534 system_pods.go:89] "metrics-server-6867b74b74-d85zg" [579ae063-049c-423c-8f91-91fb4b32e4c3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 01:46:50.243869   69534 system_pods.go:89] "storage-provisioner" [aaa7a054-2eee-45ee-a9bc-c305e53e1273] Running
	I0927 01:46:50.243879   69534 system_pods.go:126] duration metric: took 202.812704ms to wait for k8s-apps to be running ...
	I0927 01:46:50.243888   69534 system_svc.go:44] waiting for kubelet service to be running ....
	I0927 01:46:50.243931   69534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 01:46:50.260175   69534 system_svc.go:56] duration metric: took 16.279433ms WaitForService to wait for kubelet
	I0927 01:46:50.260203   69534 kubeadm.go:582] duration metric: took 10.749173466s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 01:46:50.260220   69534 node_conditions.go:102] verifying NodePressure condition ...
	I0927 01:46:50.441020   69534 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 01:46:50.441044   69534 node_conditions.go:123] node cpu capacity is 2
	I0927 01:46:50.441052   69534 node_conditions.go:105] duration metric: took 180.827321ms to run NodePressure ...
	I0927 01:46:50.441062   69534 start.go:241] waiting for startup goroutines ...
	I0927 01:46:50.441081   69534 start.go:246] waiting for cluster config update ...
	I0927 01:46:50.441091   69534 start.go:255] writing updated cluster config ...
	I0927 01:46:50.441338   69534 ssh_runner.go:195] Run: rm -f paused
	I0927 01:46:50.492229   69534 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0927 01:46:50.494198   69534 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-368295" cluster and "default" namespace by default
	I0927 01:47:22.398756   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:47:22.399035   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:47:22.399051   69333 kubeadm.go:310] 
	I0927 01:47:22.399125   69333 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0927 01:47:22.399167   69333 kubeadm.go:310] 		timed out waiting for the condition
	I0927 01:47:22.399176   69333 kubeadm.go:310] 
	I0927 01:47:22.399242   69333 kubeadm.go:310] 	This error is likely caused by:
	I0927 01:47:22.399326   69333 kubeadm.go:310] 		- The kubelet is not running
	I0927 01:47:22.399452   69333 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0927 01:47:22.399464   69333 kubeadm.go:310] 
	I0927 01:47:22.399627   69333 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0927 01:47:22.399702   69333 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0927 01:47:22.399750   69333 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0927 01:47:22.399763   69333 kubeadm.go:310] 
	I0927 01:47:22.399908   69333 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0927 01:47:22.400001   69333 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0927 01:47:22.400014   69333 kubeadm.go:310] 
	I0927 01:47:22.400109   69333 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0927 01:47:22.400218   69333 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0927 01:47:22.400331   69333 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0927 01:47:22.400406   69333 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0927 01:47:22.400414   69333 kubeadm.go:310] 
	I0927 01:47:22.401157   69333 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0927 01:47:22.401273   69333 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0927 01:47:22.401342   69333 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0927 01:47:22.401458   69333 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0927 01:47:22.401498   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0927 01:47:22.863316   69333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 01:47:22.878664   69333 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 01:47:22.889118   69333 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 01:47:22.889135   69333 kubeadm.go:157] found existing configuration files:
	
	I0927 01:47:22.889173   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 01:47:22.898966   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 01:47:22.899035   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 01:47:22.911280   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 01:47:22.920628   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 01:47:22.920677   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 01:47:22.929860   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 01:47:22.938794   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 01:47:22.938839   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 01:47:22.947982   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 01:47:22.956785   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 01:47:22.956837   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 01:47:22.966186   69333 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0927 01:47:23.039915   69333 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0927 01:47:23.040017   69333 kubeadm.go:310] [preflight] Running pre-flight checks
	I0927 01:47:23.189097   69333 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0927 01:47:23.189274   69333 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0927 01:47:23.189395   69333 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0927 01:47:23.400731   69333 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0927 01:47:23.402659   69333 out.go:235]   - Generating certificates and keys ...
	I0927 01:47:23.402776   69333 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0927 01:47:23.402855   69333 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0927 01:47:23.402959   69333 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0927 01:47:23.403040   69333 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0927 01:47:23.403162   69333 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0927 01:47:23.403349   69333 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0927 01:47:23.403935   69333 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0927 01:47:23.404260   69333 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0927 01:47:23.404563   69333 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0927 01:47:23.404896   69333 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0927 01:47:23.405050   69333 kubeadm.go:310] [certs] Using the existing "sa" key
	I0927 01:47:23.405121   69333 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0927 01:47:23.466908   69333 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0927 01:47:23.717009   69333 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0927 01:47:23.766225   69333 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0927 01:47:23.961488   69333 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0927 01:47:23.987846   69333 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0927 01:47:23.988724   69333 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0927 01:47:23.988790   69333 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0927 01:47:24.130550   69333 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0927 01:47:24.132276   69333 out.go:235]   - Booting up control plane ...
	I0927 01:47:24.132386   69333 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0927 01:47:24.146415   69333 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0927 01:47:24.147664   69333 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0927 01:47:24.148443   69333 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0927 01:47:24.151623   69333 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0927 01:48:04.153587   69333 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0927 01:48:04.153934   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:48:04.154129   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:48:09.154634   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:48:09.154883   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:48:19.155638   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:48:19.155844   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:48:39.156224   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:48:39.156429   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:49:19.155507   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:49:19.155754   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:49:19.155779   69333 kubeadm.go:310] 
	I0927 01:49:19.155872   69333 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0927 01:49:19.155947   69333 kubeadm.go:310] 		timed out waiting for the condition
	I0927 01:49:19.155958   69333 kubeadm.go:310] 
	I0927 01:49:19.156026   69333 kubeadm.go:310] 	This error is likely caused by:
	I0927 01:49:19.156077   69333 kubeadm.go:310] 		- The kubelet is not running
	I0927 01:49:19.156230   69333 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0927 01:49:19.156242   69333 kubeadm.go:310] 
	I0927 01:49:19.156379   69333 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0927 01:49:19.156434   69333 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0927 01:49:19.156486   69333 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0927 01:49:19.156506   69333 kubeadm.go:310] 
	I0927 01:49:19.156628   69333 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0927 01:49:19.156756   69333 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0927 01:49:19.156775   69333 kubeadm.go:310] 
	I0927 01:49:19.156925   69333 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0927 01:49:19.157022   69333 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0927 01:49:19.157112   69333 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0927 01:49:19.157191   69333 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0927 01:49:19.157202   69333 kubeadm.go:310] 
	I0927 01:49:19.158023   69333 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0927 01:49:19.158149   69333 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0927 01:49:19.158277   69333 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0927 01:49:19.158357   69333 kubeadm.go:394] duration metric: took 7m56.829434682s to StartCluster
	I0927 01:49:19.158404   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:49:19.158477   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:49:19.200705   69333 cri.go:89] found id: ""
	I0927 01:49:19.200729   69333 logs.go:276] 0 containers: []
	W0927 01:49:19.200736   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:49:19.200742   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:49:19.200791   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:49:19.240252   69333 cri.go:89] found id: ""
	I0927 01:49:19.240274   69333 logs.go:276] 0 containers: []
	W0927 01:49:19.240285   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:49:19.240292   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:49:19.240352   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:49:19.275802   69333 cri.go:89] found id: ""
	I0927 01:49:19.275826   69333 logs.go:276] 0 containers: []
	W0927 01:49:19.275834   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:49:19.275840   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:49:19.275894   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:49:19.309317   69333 cri.go:89] found id: ""
	I0927 01:49:19.309342   69333 logs.go:276] 0 containers: []
	W0927 01:49:19.309350   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:49:19.309357   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:49:19.309414   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:49:19.344778   69333 cri.go:89] found id: ""
	I0927 01:49:19.344806   69333 logs.go:276] 0 containers: []
	W0927 01:49:19.344817   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:49:19.344823   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:49:19.344882   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:49:19.379394   69333 cri.go:89] found id: ""
	I0927 01:49:19.379426   69333 logs.go:276] 0 containers: []
	W0927 01:49:19.379438   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:49:19.379445   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:49:19.379502   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:49:19.415349   69333 cri.go:89] found id: ""
	I0927 01:49:19.415376   69333 logs.go:276] 0 containers: []
	W0927 01:49:19.415384   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:49:19.415390   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:49:19.415438   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:49:19.453357   69333 cri.go:89] found id: ""
	I0927 01:49:19.453381   69333 logs.go:276] 0 containers: []
	W0927 01:49:19.453389   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:49:19.453397   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:49:19.453409   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:49:19.530384   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:49:19.530405   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:49:19.530423   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:49:19.643418   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:49:19.643453   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:49:19.688825   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:49:19.688861   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:49:19.745945   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:49:19.745983   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0927 01:49:19.762685   69333 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0927 01:49:19.762739   69333 out.go:270] * 
	W0927 01:49:19.762791   69333 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0927 01:49:19.762804   69333 out.go:270] * 
	W0927 01:49:19.763605   69333 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0927 01:49:19.767393   69333 out.go:201] 
	W0927 01:49:19.768622   69333 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0927 01:49:19.768671   69333 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0927 01:49:19.768690   69333 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0927 01:49:19.771036   69333 out.go:201] 
	
	
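	The start failure above ends with minikube's own suggestion to inspect the kubelet and retry with an explicit cgroup driver. A minimal sketch of that remediation, using only the commands the log itself names (the profile name below is a placeholder, and the v1.20.0 / CRI-O settings are assumed from this run):
	
	    # inspect why the kubelet never became healthy
	    sudo systemctl status kubelet
	    sudo journalctl -xeu kubelet
	
	    # check whether any control-plane container started and then crashed
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	
	    # retry the start with the cgroup driver named in the suggestion
	    minikube start -p <profile> --kubernetes-version=v1.20.0 --container-runtime=crio \
	      --extra-config=kubelet.cgroup-driver=systemd
	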
	==> CRI-O <==
	Sep 27 01:55:52 default-k8s-diff-port-368295 crio[711]: time="2024-09-27 01:55:52.647032341Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402152647007450,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d8d70860-9402-4a27-9d44-058027a3948e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 01:55:52 default-k8s-diff-port-368295 crio[711]: time="2024-09-27 01:55:52.648356641Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=96d58d53-e71c-481d-8c94-e8fc58201c5f name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:55:52 default-k8s-diff-port-368295 crio[711]: time="2024-09-27 01:55:52.648429476Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=96d58d53-e71c-481d-8c94-e8fc58201c5f name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:55:52 default-k8s-diff-port-368295 crio[711]: time="2024-09-27 01:55:52.648704751Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9b79b5c0a010e0cd81da04372248299d08c081b7ffa7928eb543c5c791c03aa6,PodSandboxId:34fa08e76381d5327f3585326f92be6d8fc179c1f42c20ebf6b3d91fe34b05d3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727401601483569196,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aaa7a054-2eee-45ee-a9bc-c305e53e1273,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:493a3f26ca3a150405205c99d2e70dd6bddf476d596254593a52a51bbf295de9,PodSandboxId:ba37dc9e76c9b1b073ad88bd2f0327d0107ea2988867fcfd436453e58c15c2a0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727401600836728632,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qkbzv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2725448-3f80-45d8-8bd8-49dcf8878f7e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c95c262cabaf32c716f92f43c2647b266df1bbc4abd0aaaba87ab628ca61b7d8,PodSandboxId:ea14b34bae4582ee7cd6eaedaaf8b1e7cd6ed9998d9ceb5a983aeb64c39d944c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727401600735091485,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4d7pk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: c84ab26c-2e13-437c-b059-43c8ca1d90c8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a82c79f60ab5f4067e751cb349b7dbfe1de7bf9e16412eaa9586c5e8c5d591aa,PodSandboxId:793fc6b52aba3570288274985bdc54c5c1715cdda1b12f9f71c794d1bb5cb74a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1727401599779649068,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kqjdq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91b96945-0ffe-404f-a0d5-f8729d4248ce,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ed8ae1ddd98912c9a6489cfeaeeacd29170e4315d9183670dfd43657f3748a2,PodSandboxId:153f4fb3af3a95a081cf27afc322ac73af61a0ccead9d40a87a20ee3759a47dc,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727401588807163672,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-368295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14efa3785d77c2217257464e631112ed,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a46b48d9fc2ea9840922d7fa66637af8903c6811a8730ceda3091d4a0504e14,PodSandboxId:87de1a0c59c4698bfcafa26276982dff9b5c8e057763658c6ce7bfd43124b2cc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727401588810564289,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-368295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da34e1017bd5e89c00d6e00079b023aa,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:317a14a66de31f447e0c853921f869a3b565701e7d2523240ead500e6043ab77,PodSandboxId:b81443ee03f2b8afb9050aeff14b824cc85ed282a92ff35b958bf4d879d6c364,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727401588816734968,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-368295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37ba2b30cad6c9303ffa93090a5dcf79,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2b78be2052d8d1d68373bad53e846c75956ac519504a629da3d1498b8646743,PodSandboxId:e56247841d77007485939577db8603556d040e3696252d1c8d3a9bdb8955dda3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727401588688037358,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-368295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25b3e5798605efaeb253e94a59600958,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:affe15a528d50338e85e2c06b120a63cc862e5ccd6b9647eb338b8ed9bec8703,PodSandboxId:852e5f549b2abe254a2f88a1f75ccfcf2afa0b21bd48a28979e7bc70d0599e75,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727401301990748617,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-368295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37ba2b30cad6c9303ffa93090a5dcf79,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=96d58d53-e71c-481d-8c94-e8fc58201c5f name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:55:52 default-k8s-diff-port-368295 crio[711]: time="2024-09-27 01:55:52.689973976Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=735be853-8425-4250-a318-952963c82fae name=/runtime.v1.RuntimeService/Version
	Sep 27 01:55:52 default-k8s-diff-port-368295 crio[711]: time="2024-09-27 01:55:52.690066667Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=735be853-8425-4250-a318-952963c82fae name=/runtime.v1.RuntimeService/Version
	Sep 27 01:55:52 default-k8s-diff-port-368295 crio[711]: time="2024-09-27 01:55:52.691268672Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1ad0482c-b70b-420f-a2ca-61fa61bcb37f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 01:55:52 default-k8s-diff-port-368295 crio[711]: time="2024-09-27 01:55:52.691915161Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402152691893446,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1ad0482c-b70b-420f-a2ca-61fa61bcb37f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 01:55:52 default-k8s-diff-port-368295 crio[711]: time="2024-09-27 01:55:52.692922102Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=700583df-593d-4a3c-9e17-392dd94bcd46 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:55:52 default-k8s-diff-port-368295 crio[711]: time="2024-09-27 01:55:52.693062415Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=700583df-593d-4a3c-9e17-392dd94bcd46 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:55:52 default-k8s-diff-port-368295 crio[711]: time="2024-09-27 01:55:52.693631641Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9b79b5c0a010e0cd81da04372248299d08c081b7ffa7928eb543c5c791c03aa6,PodSandboxId:34fa08e76381d5327f3585326f92be6d8fc179c1f42c20ebf6b3d91fe34b05d3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727401601483569196,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aaa7a054-2eee-45ee-a9bc-c305e53e1273,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:493a3f26ca3a150405205c99d2e70dd6bddf476d596254593a52a51bbf295de9,PodSandboxId:ba37dc9e76c9b1b073ad88bd2f0327d0107ea2988867fcfd436453e58c15c2a0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727401600836728632,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qkbzv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2725448-3f80-45d8-8bd8-49dcf8878f7e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c95c262cabaf32c716f92f43c2647b266df1bbc4abd0aaaba87ab628ca61b7d8,PodSandboxId:ea14b34bae4582ee7cd6eaedaaf8b1e7cd6ed9998d9ceb5a983aeb64c39d944c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727401600735091485,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4d7pk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: c84ab26c-2e13-437c-b059-43c8ca1d90c8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a82c79f60ab5f4067e751cb349b7dbfe1de7bf9e16412eaa9586c5e8c5d591aa,PodSandboxId:793fc6b52aba3570288274985bdc54c5c1715cdda1b12f9f71c794d1bb5cb74a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1727401599779649068,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kqjdq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91b96945-0ffe-404f-a0d5-f8729d4248ce,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ed8ae1ddd98912c9a6489cfeaeeacd29170e4315d9183670dfd43657f3748a2,PodSandboxId:153f4fb3af3a95a081cf27afc322ac73af61a0ccead9d40a87a20ee3759a47dc,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727401588807163672,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-368295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14efa3785d77c2217257464e631112ed,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a46b48d9fc2ea9840922d7fa66637af8903c6811a8730ceda3091d4a0504e14,PodSandboxId:87de1a0c59c4698bfcafa26276982dff9b5c8e057763658c6ce7bfd43124b2cc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727401588810564289,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-368295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da34e1017bd5e89c00d6e00079b023aa,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:317a14a66de31f447e0c853921f869a3b565701e7d2523240ead500e6043ab77,PodSandboxId:b81443ee03f2b8afb9050aeff14b824cc85ed282a92ff35b958bf4d879d6c364,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727401588816734968,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-368295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37ba2b30cad6c9303ffa93090a5dcf79,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2b78be2052d8d1d68373bad53e846c75956ac519504a629da3d1498b8646743,PodSandboxId:e56247841d77007485939577db8603556d040e3696252d1c8d3a9bdb8955dda3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727401588688037358,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-368295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25b3e5798605efaeb253e94a59600958,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:affe15a528d50338e85e2c06b120a63cc862e5ccd6b9647eb338b8ed9bec8703,PodSandboxId:852e5f549b2abe254a2f88a1f75ccfcf2afa0b21bd48a28979e7bc70d0599e75,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727401301990748617,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-368295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37ba2b30cad6c9303ffa93090a5dcf79,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=700583df-593d-4a3c-9e17-392dd94bcd46 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:55:52 default-k8s-diff-port-368295 crio[711]: time="2024-09-27 01:55:52.733661749Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=07d63938-8ad4-4eb9-aa0f-c7bc055bb3ed name=/runtime.v1.RuntimeService/Version
	Sep 27 01:55:52 default-k8s-diff-port-368295 crio[711]: time="2024-09-27 01:55:52.733734489Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=07d63938-8ad4-4eb9-aa0f-c7bc055bb3ed name=/runtime.v1.RuntimeService/Version
	Sep 27 01:55:52 default-k8s-diff-port-368295 crio[711]: time="2024-09-27 01:55:52.734886577Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5bdbb64f-883d-4483-961c-60188568cc5a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 01:55:52 default-k8s-diff-port-368295 crio[711]: time="2024-09-27 01:55:52.735666581Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402152735577227,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5bdbb64f-883d-4483-961c-60188568cc5a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 01:55:52 default-k8s-diff-port-368295 crio[711]: time="2024-09-27 01:55:52.736597460Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e9222d44-c79d-475f-a484-9b7cb3745e98 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:55:52 default-k8s-diff-port-368295 crio[711]: time="2024-09-27 01:55:52.736742743Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e9222d44-c79d-475f-a484-9b7cb3745e98 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:55:52 default-k8s-diff-port-368295 crio[711]: time="2024-09-27 01:55:52.737031984Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9b79b5c0a010e0cd81da04372248299d08c081b7ffa7928eb543c5c791c03aa6,PodSandboxId:34fa08e76381d5327f3585326f92be6d8fc179c1f42c20ebf6b3d91fe34b05d3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727401601483569196,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aaa7a054-2eee-45ee-a9bc-c305e53e1273,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:493a3f26ca3a150405205c99d2e70dd6bddf476d596254593a52a51bbf295de9,PodSandboxId:ba37dc9e76c9b1b073ad88bd2f0327d0107ea2988867fcfd436453e58c15c2a0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727401600836728632,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qkbzv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2725448-3f80-45d8-8bd8-49dcf8878f7e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c95c262cabaf32c716f92f43c2647b266df1bbc4abd0aaaba87ab628ca61b7d8,PodSandboxId:ea14b34bae4582ee7cd6eaedaaf8b1e7cd6ed9998d9ceb5a983aeb64c39d944c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727401600735091485,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4d7pk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: c84ab26c-2e13-437c-b059-43c8ca1d90c8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a82c79f60ab5f4067e751cb349b7dbfe1de7bf9e16412eaa9586c5e8c5d591aa,PodSandboxId:793fc6b52aba3570288274985bdc54c5c1715cdda1b12f9f71c794d1bb5cb74a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1727401599779649068,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kqjdq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91b96945-0ffe-404f-a0d5-f8729d4248ce,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ed8ae1ddd98912c9a6489cfeaeeacd29170e4315d9183670dfd43657f3748a2,PodSandboxId:153f4fb3af3a95a081cf27afc322ac73af61a0ccead9d40a87a20ee3759a47dc,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727401588807163672,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-368295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14efa3785d77c2217257464e631112ed,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a46b48d9fc2ea9840922d7fa66637af8903c6811a8730ceda3091d4a0504e14,PodSandboxId:87de1a0c59c4698bfcafa26276982dff9b5c8e057763658c6ce7bfd43124b2cc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727401588810564289,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-368295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da34e1017bd5e89c00d6e00079b023aa,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:317a14a66de31f447e0c853921f869a3b565701e7d2523240ead500e6043ab77,PodSandboxId:b81443ee03f2b8afb9050aeff14b824cc85ed282a92ff35b958bf4d879d6c364,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727401588816734968,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-368295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37ba2b30cad6c9303ffa93090a5dcf79,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2b78be2052d8d1d68373bad53e846c75956ac519504a629da3d1498b8646743,PodSandboxId:e56247841d77007485939577db8603556d040e3696252d1c8d3a9bdb8955dda3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727401588688037358,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-368295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25b3e5798605efaeb253e94a59600958,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:affe15a528d50338e85e2c06b120a63cc862e5ccd6b9647eb338b8ed9bec8703,PodSandboxId:852e5f549b2abe254a2f88a1f75ccfcf2afa0b21bd48a28979e7bc70d0599e75,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727401301990748617,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-368295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37ba2b30cad6c9303ffa93090a5dcf79,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e9222d44-c79d-475f-a484-9b7cb3745e98 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:55:52 default-k8s-diff-port-368295 crio[711]: time="2024-09-27 01:55:52.777261005Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8e6179d7-2447-4365-8e3c-f884849e6fde name=/runtime.v1.RuntimeService/Version
	Sep 27 01:55:52 default-k8s-diff-port-368295 crio[711]: time="2024-09-27 01:55:52.777356207Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8e6179d7-2447-4365-8e3c-f884849e6fde name=/runtime.v1.RuntimeService/Version
	Sep 27 01:55:52 default-k8s-diff-port-368295 crio[711]: time="2024-09-27 01:55:52.778772710Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=16ae2270-ddda-4bb3-85c8-901015c88b22 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 01:55:52 default-k8s-diff-port-368295 crio[711]: time="2024-09-27 01:55:52.779363134Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402152779331603,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=16ae2270-ddda-4bb3-85c8-901015c88b22 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 01:55:52 default-k8s-diff-port-368295 crio[711]: time="2024-09-27 01:55:52.779932743Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7dbe41f4-6976-4bf8-b3c9-61ad6f25d01d name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:55:52 default-k8s-diff-port-368295 crio[711]: time="2024-09-27 01:55:52.780050948Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7dbe41f4-6976-4bf8-b3c9-61ad6f25d01d name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:55:52 default-k8s-diff-port-368295 crio[711]: time="2024-09-27 01:55:52.780286590Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9b79b5c0a010e0cd81da04372248299d08c081b7ffa7928eb543c5c791c03aa6,PodSandboxId:34fa08e76381d5327f3585326f92be6d8fc179c1f42c20ebf6b3d91fe34b05d3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727401601483569196,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aaa7a054-2eee-45ee-a9bc-c305e53e1273,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:493a3f26ca3a150405205c99d2e70dd6bddf476d596254593a52a51bbf295de9,PodSandboxId:ba37dc9e76c9b1b073ad88bd2f0327d0107ea2988867fcfd436453e58c15c2a0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727401600836728632,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qkbzv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2725448-3f80-45d8-8bd8-49dcf8878f7e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c95c262cabaf32c716f92f43c2647b266df1bbc4abd0aaaba87ab628ca61b7d8,PodSandboxId:ea14b34bae4582ee7cd6eaedaaf8b1e7cd6ed9998d9ceb5a983aeb64c39d944c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727401600735091485,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4d7pk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: c84ab26c-2e13-437c-b059-43c8ca1d90c8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a82c79f60ab5f4067e751cb349b7dbfe1de7bf9e16412eaa9586c5e8c5d591aa,PodSandboxId:793fc6b52aba3570288274985bdc54c5c1715cdda1b12f9f71c794d1bb5cb74a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1727401599779649068,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kqjdq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91b96945-0ffe-404f-a0d5-f8729d4248ce,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ed8ae1ddd98912c9a6489cfeaeeacd29170e4315d9183670dfd43657f3748a2,PodSandboxId:153f4fb3af3a95a081cf27afc322ac73af61a0ccead9d40a87a20ee3759a47dc,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727401588807163672,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-368295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14efa3785d77c2217257464e631112ed,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a46b48d9fc2ea9840922d7fa66637af8903c6811a8730ceda3091d4a0504e14,PodSandboxId:87de1a0c59c4698bfcafa26276982dff9b5c8e057763658c6ce7bfd43124b2cc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727401588810564289,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-368295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da34e1017bd5e89c00d6e00079b023aa,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:317a14a66de31f447e0c853921f869a3b565701e7d2523240ead500e6043ab77,PodSandboxId:b81443ee03f2b8afb9050aeff14b824cc85ed282a92ff35b958bf4d879d6c364,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727401588816734968,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-368295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37ba2b30cad6c9303ffa93090a5dcf79,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2b78be2052d8d1d68373bad53e846c75956ac519504a629da3d1498b8646743,PodSandboxId:e56247841d77007485939577db8603556d040e3696252d1c8d3a9bdb8955dda3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727401588688037358,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-368295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25b3e5798605efaeb253e94a59600958,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:affe15a528d50338e85e2c06b120a63cc862e5ccd6b9647eb338b8ed9bec8703,PodSandboxId:852e5f549b2abe254a2f88a1f75ccfcf2afa0b21bd48a28979e7bc70d0599e75,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727401301990748617,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-368295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37ba2b30cad6c9303ffa93090a5dcf79,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7dbe41f4-6976-4bf8-b3c9-61ad6f25d01d name=/runtime.v1.RuntimeService/ListContainers
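
The CRI-O debug entries above are the runtime's replies to the kubelet's periodic ListContainers polls over the CRI gRPC API on /var/run/crio/crio.sock. A rough manual equivalent, assuming crictl is available on the node (for example via minikube ssh), would be:

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a

which should list the same container set summarized in the status table below.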
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9b79b5c0a010e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   34fa08e76381d       storage-provisioner
	493a3f26ca3a1       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   ba37dc9e76c9b       coredns-7c65d6cfc9-qkbzv
	c95c262cabaf3       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   ea14b34bae458       coredns-7c65d6cfc9-4d7pk
	a82c79f60ab5f       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   9 minutes ago       Running             kube-proxy                0                   793fc6b52aba3       kube-proxy-kqjdq
	317a14a66de31       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   9 minutes ago       Running             kube-apiserver            2                   b81443ee03f2b       kube-apiserver-default-k8s-diff-port-368295
	6a46b48d9fc2e       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   9 minutes ago       Running             kube-scheduler            2                   87de1a0c59c46       kube-scheduler-default-k8s-diff-port-368295
	3ed8ae1ddd989       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   153f4fb3af3a9       etcd-default-k8s-diff-port-368295
	e2b78be2052d8       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   9 minutes ago       Running             kube-controller-manager   2                   e56247841d770       kube-controller-manager-default-k8s-diff-port-368295
	affe15a528d50       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   14 minutes ago      Exited              kube-apiserver            1                   852e5f549b2ab       kube-apiserver-default-k8s-diff-port-368295
	
	
	==> coredns [493a3f26ca3a150405205c99d2e70dd6bddf476d596254593a52a51bbf295de9] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [c95c262cabaf32c716f92f43c2647b266df1bbc4abd0aaaba87ab628ca61b7d8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-368295
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-368295
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=default-k8s-diff-port-368295
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_27T01_46_35_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 01:46:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-368295
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 01:55:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 01:51:50 +0000   Fri, 27 Sep 2024 01:46:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 01:51:50 +0000   Fri, 27 Sep 2024 01:46:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 01:51:50 +0000   Fri, 27 Sep 2024 01:46:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 01:51:50 +0000   Fri, 27 Sep 2024 01:46:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.83
	  Hostname:    default-k8s-diff-port-368295
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6bbfae71b1224951a97a9b446656b7e1
	  System UUID:                6bbfae71-b122-4951-a97a-9b446656b7e1
	  Boot ID:                    272a7df4-1ae5-4214-850e-73a937c641bd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-4d7pk                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m14s
	  kube-system                 coredns-7c65d6cfc9-qkbzv                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m14s
	  kube-system                 etcd-default-k8s-diff-port-368295                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m19s
	  kube-system                 kube-apiserver-default-k8s-diff-port-368295             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-368295    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-proxy-kqjdq                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	  kube-system                 kube-scheduler-default-k8s-diff-port-368295             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 metrics-server-6867b74b74-d85zg                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m12s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m12s                  kube-proxy       
	  Normal  Starting                 9m25s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m25s (x8 over 9m25s)  kubelet          Node default-k8s-diff-port-368295 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m25s (x8 over 9m25s)  kubelet          Node default-k8s-diff-port-368295 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m25s (x7 over 9m25s)  kubelet          Node default-k8s-diff-port-368295 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m19s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m19s                  kubelet          Node default-k8s-diff-port-368295 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m19s                  kubelet          Node default-k8s-diff-port-368295 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m19s                  kubelet          Node default-k8s-diff-port-368295 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m15s                  node-controller  Node default-k8s-diff-port-368295 event: Registered Node default-k8s-diff-port-368295 in Controller
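
This block corresponds to kubectl describe node output for the single control-plane node. The two "Starting kubelet." event sequences (9m25s and 9m19s ago) are consistent with the kubelet being restarted once during cluster bring-up, and total requests of 950m CPU against 2 allocatable CPUs suggest the node is not resource-starved. Assuming the kubectl context is named after the minikube profile, as elsewhere in this report, it can be re-queried with:

    kubectl --context default-k8s-diff-port-368295 describe node default-k8s-diff-port-368295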
	
	
	==> dmesg <==
	[  +0.039704] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.109217] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.628859] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.603243] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.930049] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.060167] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060302] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.191836] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +0.161112] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[  +0.308731] systemd-fstab-generator[700]: Ignoring "noauto" option for root device
	[  +4.229591] systemd-fstab-generator[794]: Ignoring "noauto" option for root device
	[  +0.060343] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.858773] systemd-fstab-generator[916]: Ignoring "noauto" option for root device
	[  +5.514392] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.107459] kauditd_printk_skb: 85 callbacks suppressed
	[Sep27 01:42] kauditd_printk_skb: 2 callbacks suppressed
	[Sep27 01:46] systemd-fstab-generator[2567]: Ignoring "noauto" option for root device
	[  +0.069429] kauditd_printk_skb: 8 callbacks suppressed
	[  +6.515408] systemd-fstab-generator[2887]: Ignoring "noauto" option for root device
	[  +0.081312] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.310497] systemd-fstab-generator[3004]: Ignoring "noauto" option for root device
	[  +0.065663] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.156847] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [3ed8ae1ddd98912c9a6489cfeaeeacd29170e4315d9183670dfd43657f3748a2] <==
	{"level":"info","ts":"2024-09-27T01:46:29.298644Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-27T01:46:29.298864Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"1706423cc6d0face","initial-advertise-peer-urls":["https://192.168.61.83:2380"],"listen-peer-urls":["https://192.168.61.83:2380"],"advertise-client-urls":["https://192.168.61.83:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.83:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-27T01:46:29.298964Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.61.83:2380"}
	{"level":"info","ts":"2024-09-27T01:46:29.302003Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.61.83:2380"}
	{"level":"info","ts":"2024-09-27T01:46:29.301273Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-27T01:46:29.936574Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1706423cc6d0face is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-27T01:46:29.936782Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1706423cc6d0face became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-27T01:46:29.936905Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1706423cc6d0face received MsgPreVoteResp from 1706423cc6d0face at term 1"}
	{"level":"info","ts":"2024-09-27T01:46:29.937011Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1706423cc6d0face became candidate at term 2"}
	{"level":"info","ts":"2024-09-27T01:46:29.937036Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1706423cc6d0face received MsgVoteResp from 1706423cc6d0face at term 2"}
	{"level":"info","ts":"2024-09-27T01:46:29.937135Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1706423cc6d0face became leader at term 2"}
	{"level":"info","ts":"2024-09-27T01:46:29.937161Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1706423cc6d0face elected leader 1706423cc6d0face at term 2"}
	{"level":"info","ts":"2024-09-27T01:46:29.943918Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"1706423cc6d0face","local-member-attributes":"{Name:default-k8s-diff-port-368295 ClientURLs:[https://192.168.61.83:2379]}","request-path":"/0/members/1706423cc6d0face/attributes","cluster-id":"bef7c63622dde9b5","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-27T01:46:29.944044Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-27T01:46:29.944508Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-27T01:46:29.944717Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T01:46:29.946179Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-27T01:46:29.950855Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-27T01:46:29.950965Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-27T01:46:29.951103Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-27T01:46:29.951070Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bef7c63622dde9b5","local-member-id":"1706423cc6d0face","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T01:46:29.951330Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T01:46:29.951412Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T01:46:29.949865Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-27T01:46:29.991446Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.83:2379"}
	
	
	==> kernel <==
	 01:55:53 up 14 min,  0 users,  load average: 0.19, 0.17, 0.11
	Linux default-k8s-diff-port-368295 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [317a14a66de31f447e0c853921f869a3b565701e7d2523240ead500e6043ab77] <==
	W0927 01:51:32.635593       1 handler_proxy.go:99] no RequestInfo found in the context
	E0927 01:51:32.635909       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0927 01:51:32.637117       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0927 01:51:32.637204       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0927 01:52:32.637648       1 handler_proxy.go:99] no RequestInfo found in the context
	E0927 01:52:32.637865       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0927 01:52:32.637681       1 handler_proxy.go:99] no RequestInfo found in the context
	E0927 01:52:32.638005       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0927 01:52:32.639141       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0927 01:52:32.639186       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0927 01:54:32.639929       1 handler_proxy.go:99] no RequestInfo found in the context
	E0927 01:54:32.640026       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0927 01:54:32.639956       1 handler_proxy.go:99] no RequestInfo found in the context
	E0927 01:54:32.640117       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0927 01:54:32.641435       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0927 01:54:32.641550       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
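
The repeated 503s above mean the aggregated metrics API (v1beta1.metrics.k8s.io) has no healthy backend: the apiserver cannot reach the metrics-server service, so OpenAPI aggregation for that group keeps failing and being requeued. One way to inspect the registration, assuming the kubectl context matches the profile name:

    kubectl --context default-k8s-diff-port-368295 get apiservice v1beta1.metrics.k8s.io -o yaml

Its status conditions would be expected to report Available=False (for example with a MissingEndpoints or FailedDiscoveryCheck reason) while the metrics-server pod is not running; see the kubelet log further below.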
	
	
	==> kube-apiserver [affe15a528d50338e85e2c06b120a63cc862e5ccd6b9647eb338b8ed9bec8703] <==
	W0927 01:46:22.052149       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:46:22.078885       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:46:22.084348       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:46:22.115591       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:46:22.146592       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:46:22.147882       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:46:22.160349       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:46:22.211615       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:46:22.216152       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:46:22.280957       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:46:22.313200       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:46:22.354748       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:46:22.399889       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:46:22.403363       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:46:22.419364       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:46:22.422921       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:46:22.425415       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:46:22.456920       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:46:22.557599       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:46:22.639781       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:46:22.852447       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:46:22.857169       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:46:23.527722       1 logging.go:55] [core] [Channel #205 SubChannel #206]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:46:24.528614       1 logging.go:55] [core] [Channel #205 SubChannel #206]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:46:25.915595       1 logging.go:55] [core] [Channel #205 SubChannel #206]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [e2b78be2052d8d1d68373bad53e846c75956ac519504a629da3d1498b8646743] <==
	E0927 01:50:38.625089       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 01:50:39.085124       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0927 01:51:08.631747       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 01:51:09.094069       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0927 01:51:38.639429       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 01:51:39.102119       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0927 01:51:50.234328       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-368295"
	E0927 01:52:08.646669       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 01:52:09.113205       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0927 01:52:26.532305       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="267.159µs"
	E0927 01:52:38.653341       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 01:52:39.122825       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0927 01:52:39.530390       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="161.705µs"
	E0927 01:53:08.660171       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 01:53:09.130805       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0927 01:53:38.667612       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 01:53:39.140273       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0927 01:54:08.674038       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 01:54:09.147795       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0927 01:54:38.680440       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 01:54:39.156755       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0927 01:55:08.689893       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 01:55:09.168581       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0927 01:55:38.697276       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 01:55:39.177443       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [a82c79f60ab5f4067e751cb349b7dbfe1de7bf9e16412eaa9586c5e8c5d591aa] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0927 01:46:40.091844       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0927 01:46:40.107626       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.83"]
	E0927 01:46:40.107732       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0927 01:46:40.184320       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0927 01:46:40.184353       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0927 01:46:40.184377       1 server_linux.go:169] "Using iptables Proxier"
	I0927 01:46:40.186873       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0927 01:46:40.187162       1 server.go:483] "Version info" version="v1.31.1"
	I0927 01:46:40.187174       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 01:46:40.206564       1 config.go:199] "Starting service config controller"
	I0927 01:46:40.206601       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0927 01:46:40.206660       1 config.go:105] "Starting endpoint slice config controller"
	I0927 01:46:40.206665       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0927 01:46:40.207220       1 config.go:328] "Starting node config controller"
	I0927 01:46:40.207227       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0927 01:46:40.307362       1 shared_informer.go:320] Caches are synced for node config
	I0927 01:46:40.307419       1 shared_informer.go:320] Caches are synced for service config
	I0927 01:46:40.307522       1 shared_informer.go:320] Caches are synced for endpoint slice config
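
The truncated nftables errors at the top of this section come from kube-proxy's attempt to clean up any leftover nftables rules, which fails because the guest kernel appears to lack nf_tables support; kube-proxy then detects no IPv6 iptables support, falls back to the IPv4 iptables proxier, and syncs its caches normally. If needed, the programmed rules could be spot-checked on the node (assuming shell access, e.g. via minikube ssh) with something like:

    sudo iptables -t nat -L KUBE-SERVICES -n | head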
	
	
	==> kube-scheduler [6a46b48d9fc2ea9840922d7fa66637af8903c6811a8730ceda3091d4a0504e14] <==
	W0927 01:46:31.705014       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0927 01:46:31.705052       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 01:46:31.705448       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0927 01:46:31.705535       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 01:46:32.649278       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0927 01:46:32.649328       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 01:46:32.672416       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0927 01:46:32.672539       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0927 01:46:32.674971       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0927 01:46:32.675017       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 01:46:32.683626       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0927 01:46:32.683675       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 01:46:32.857182       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0927 01:46:32.857287       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 01:46:32.891119       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0927 01:46:32.891235       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 01:46:33.007204       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0927 01:46:33.007660       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0927 01:46:33.024206       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0927 01:46:33.024295       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0927 01:46:33.049815       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0927 01:46:33.051590       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0927 01:46:33.068223       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0927 01:46:33.068273       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0927 01:46:35.096130       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 27 01:54:35 default-k8s-diff-port-368295 kubelet[2894]: E0927 01:54:35.512092    2894 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-d85zg" podUID="579ae063-049c-423c-8f91-91fb4b32e4c3"
	Sep 27 01:54:44 default-k8s-diff-port-368295 kubelet[2894]: E0927 01:54:44.693772    2894 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402084693308303,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:54:44 default-k8s-diff-port-368295 kubelet[2894]: E0927 01:54:44.693907    2894 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402084693308303,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:54:50 default-k8s-diff-port-368295 kubelet[2894]: E0927 01:54:50.511785    2894 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-d85zg" podUID="579ae063-049c-423c-8f91-91fb4b32e4c3"
	Sep 27 01:54:54 default-k8s-diff-port-368295 kubelet[2894]: E0927 01:54:54.695728    2894 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402094695361500,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:54:54 default-k8s-diff-port-368295 kubelet[2894]: E0927 01:54:54.695755    2894 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402094695361500,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:55:04 default-k8s-diff-port-368295 kubelet[2894]: E0927 01:55:04.513709    2894 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-d85zg" podUID="579ae063-049c-423c-8f91-91fb4b32e4c3"
	Sep 27 01:55:04 default-k8s-diff-port-368295 kubelet[2894]: E0927 01:55:04.697223    2894 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402104696921380,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:55:04 default-k8s-diff-port-368295 kubelet[2894]: E0927 01:55:04.697274    2894 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402104696921380,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:55:14 default-k8s-diff-port-368295 kubelet[2894]: E0927 01:55:14.699528    2894 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402114699160675,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:55:14 default-k8s-diff-port-368295 kubelet[2894]: E0927 01:55:14.699579    2894 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402114699160675,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:55:18 default-k8s-diff-port-368295 kubelet[2894]: E0927 01:55:18.512156    2894 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-d85zg" podUID="579ae063-049c-423c-8f91-91fb4b32e4c3"
	Sep 27 01:55:24 default-k8s-diff-port-368295 kubelet[2894]: E0927 01:55:24.701275    2894 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402124700900938,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:55:24 default-k8s-diff-port-368295 kubelet[2894]: E0927 01:55:24.701664    2894 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402124700900938,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:55:33 default-k8s-diff-port-368295 kubelet[2894]: E0927 01:55:33.511619    2894 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-d85zg" podUID="579ae063-049c-423c-8f91-91fb4b32e4c3"
	Sep 27 01:55:34 default-k8s-diff-port-368295 kubelet[2894]: E0927 01:55:34.533729    2894 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 27 01:55:34 default-k8s-diff-port-368295 kubelet[2894]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 27 01:55:34 default-k8s-diff-port-368295 kubelet[2894]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 27 01:55:34 default-k8s-diff-port-368295 kubelet[2894]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 27 01:55:34 default-k8s-diff-port-368295 kubelet[2894]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 27 01:55:34 default-k8s-diff-port-368295 kubelet[2894]: E0927 01:55:34.712774    2894 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402134712034902,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:55:34 default-k8s-diff-port-368295 kubelet[2894]: E0927 01:55:34.712828    2894 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402134712034902,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:55:44 default-k8s-diff-port-368295 kubelet[2894]: E0927 01:55:44.514645    2894 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-d85zg" podUID="579ae063-049c-423c-8f91-91fb4b32e4c3"
	Sep 27 01:55:44 default-k8s-diff-port-368295 kubelet[2894]: E0927 01:55:44.715506    2894 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402144715036212,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 01:55:44 default-k8s-diff-port-368295 kubelet[2894]: E0927 01:55:44.715549    2894 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402144715036212,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [9b79b5c0a010e0cd81da04372248299d08c081b7ffa7928eb543c5c791c03aa6] <==
	I0927 01:46:41.611409       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0927 01:46:41.638071       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0927 01:46:41.638128       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0927 01:46:41.677793       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0927 01:46:41.678206       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-368295_3498d87f-28a5-46e1-a14c-367c4949a525!
	I0927 01:46:41.680665       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c1d1ddb1-990a-48fe-b592-04ca2cb062c6", APIVersion:"v1", ResourceVersion:"434", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-368295_3498d87f-28a5-46e1-a14c-367c4949a525 became leader
	I0927 01:46:41.778698       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-368295_3498d87f-28a5-46e1-a14c-367c4949a525!
	

                                                
                                                
-- /stdout --
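A note on the repeated kubelet "missing image stats" errors in the log above: the eviction manager is rejecting the ImageFsInfoResponse that cri-o returns for /var/lib/containers/storage/overlay-images. The lines below are only a diagnostic sketch, not part of the test run; they assume the profile name default-k8s-diff-port-368295 taken from this report and that crictl is available inside the minikube VM, as it normally is:

	# Ask cri-o for its CRI image-filesystem stats (the same data the kubelet's eviction manager consumes)
	minikube -p default-k8s-diff-port-368295 ssh -- sudo crictl imagefsinfo
	# Cross-check the mountpoint named in the response against the host filesystem
	minikube -p default-k8s-diff-port-368295 ssh -- df -h /var/lib/containers/storage/overlay-images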
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-368295 -n default-k8s-diff-port-368295
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-368295 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-d85zg
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-368295 describe pod metrics-server-6867b74b74-d85zg
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-368295 describe pod metrics-server-6867b74b74-d85zg: exit status 1 (60.810188ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-d85zg" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-368295 describe pod metrics-server-6867b74b74-d85zg: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.43s)
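For context on this failure: the only non-running pod was metrics-server-6867b74b74-d85zg, stuck in ImagePullBackOff on the intentionally unreachable image fake.domain/registry.k8s.io/echoserver:1.4 (see the kubelet log above). The following is a hedged sketch of commands one could use to confirm that directly; the context and namespace come from this report, the deployment name is inferred from the pod name, and none of this was executed as part of the test:

	# Show why metrics-server never became Ready (events should include the ImagePullBackOff)
	kubectl --context default-k8s-diff-port-368295 -n kube-system describe deploy metrics-server
	kubectl --context default-k8s-diff-port-368295 -n kube-system get pods -o wide | grep metrics-server
	# Retry the pull by hand inside the node to surface the underlying registry/DNS error for fake.domain
	minikube -p default-k8s-diff-port-368295 ssh -- sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4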

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.51s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
[the identical warning was emitted at every poll while the apiserver at 192.168.72.129:8443 refused connections; the verbatim repeats are collapsed here, apart from the one unrelated error below]
E0927 01:50:10.487061   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/functional-774677/client.crt: no such file or directory" logger="UnhandledError"
[the same connection-refused warning continued to repeat after this point]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
E0927 01:53:01.245052   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
[the warning above was logged verbatim 91 more times; the apiserver at 192.168.72.129:8443 continued to refuse connections]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
E0927 01:55:10.486806   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/functional-774677/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
[previous warning repeated 52 additional times while polling continued]
E0927 01:56:04.317288   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
[previous warning repeated 67 additional times while polling continued]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
E0927 01:58:01.244708   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
[the warning above repeated 21 more times while the apiserver at 192.168.72.129:8443 remained unreachable]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-612261 -n old-k8s-version-612261
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-612261 -n old-k8s-version-612261: exit status 2 (224.294007ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-612261" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
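A minimal manual reproduction of this check, using the profile name and label selector already shown in this log and assuming the kubeconfig context created for the profile is named after the profile (minikube's default), would be:
	out/minikube-linux-amd64 status -p old-k8s-version-612261
	kubectl --context old-k8s-version-612261 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard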
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-612261 -n old-k8s-version-612261
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-612261 -n old-k8s-version-612261: exit status 2 (221.804014ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-612261 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-612261 logs -n 25: (1.67613571s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p NoKubernetes-719096 sudo                            | NoKubernetes-719096          | jenkins | v1.34.0 | 27 Sep 24 01:32 UTC |                     |
	|         | systemctl is-active --quiet                            |                              |         |         |                     |                     |
	|         | service kubelet                                        |                              |         |         |                     |                     |
	| stop    | -p NoKubernetes-719096                                 | NoKubernetes-719096          | jenkins | v1.34.0 | 27 Sep 24 01:32 UTC | 27 Sep 24 01:32 UTC |
	| start   | -p NoKubernetes-719096                                 | NoKubernetes-719096          | jenkins | v1.34.0 | 27 Sep 24 01:32 UTC | 27 Sep 24 01:33 UTC |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| ssh     | -p NoKubernetes-719096 sudo                            | NoKubernetes-719096          | jenkins | v1.34.0 | 27 Sep 24 01:33 UTC |                     |
	|         | systemctl is-active --quiet                            |                              |         |         |                     |                     |
	|         | service kubelet                                        |                              |         |         |                     |                     |
	| delete  | -p NoKubernetes-719096                                 | NoKubernetes-719096          | jenkins | v1.34.0 | 27 Sep 24 01:33 UTC | 27 Sep 24 01:33 UTC |
	| start   | -p embed-certs-245911                                  | embed-certs-245911           | jenkins | v1.34.0 | 27 Sep 24 01:33 UTC | 27 Sep 24 01:34 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-521072             | no-preload-521072            | jenkins | v1.34.0 | 27 Sep 24 01:33 UTC | 27 Sep 24 01:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-521072                                   | no-preload-521072            | jenkins | v1.34.0 | 27 Sep 24 01:33 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-595331                              | cert-expiration-595331       | jenkins | v1.34.0 | 27 Sep 24 01:33 UTC | 27 Sep 24 01:33 UTC |
	| delete  | -p                                                     | disable-driver-mounts-630210 | jenkins | v1.34.0 | 27 Sep 24 01:33 UTC | 27 Sep 24 01:33 UTC |
	|         | disable-driver-mounts-630210                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-368295 | jenkins | v1.34.0 | 27 Sep 24 01:33 UTC | 27 Sep 24 01:35 UTC |
	|         | default-k8s-diff-port-368295                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-245911            | embed-certs-245911           | jenkins | v1.34.0 | 27 Sep 24 01:34 UTC | 27 Sep 24 01:34 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-245911                                  | embed-certs-245911           | jenkins | v1.34.0 | 27 Sep 24 01:34 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-368295  | default-k8s-diff-port-368295 | jenkins | v1.34.0 | 27 Sep 24 01:35 UTC | 27 Sep 24 01:35 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-368295 | jenkins | v1.34.0 | 27 Sep 24 01:35 UTC |                     |
	|         | default-k8s-diff-port-368295                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-521072                  | no-preload-521072            | jenkins | v1.34.0 | 27 Sep 24 01:35 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-612261        | old-k8s-version-612261       | jenkins | v1.34.0 | 27 Sep 24 01:35 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p no-preload-521072                                   | no-preload-521072            | jenkins | v1.34.0 | 27 Sep 24 01:35 UTC | 27 Sep 24 01:46 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-245911                 | embed-certs-245911           | jenkins | v1.34.0 | 27 Sep 24 01:37 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-612261                              | old-k8s-version-612261       | jenkins | v1.34.0 | 27 Sep 24 01:37 UTC | 27 Sep 24 01:37 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-245911                                  | embed-certs-245911           | jenkins | v1.34.0 | 27 Sep 24 01:37 UTC | 27 Sep 24 01:46 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-612261             | old-k8s-version-612261       | jenkins | v1.34.0 | 27 Sep 24 01:37 UTC | 27 Sep 24 01:37 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-612261                              | old-k8s-version-612261       | jenkins | v1.34.0 | 27 Sep 24 01:37 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-368295       | default-k8s-diff-port-368295 | jenkins | v1.34.0 | 27 Sep 24 01:37 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-368295 | jenkins | v1.34.0 | 27 Sep 24 01:37 UTC | 27 Sep 24 01:46 UTC |
	|         | default-k8s-diff-port-368295                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 01:37:48
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 01:37:48.335921   69534 out.go:345] Setting OutFile to fd 1 ...
	I0927 01:37:48.336188   69534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 01:37:48.336196   69534 out.go:358] Setting ErrFile to fd 2...
	I0927 01:37:48.336201   69534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 01:37:48.336368   69534 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-14935/.minikube/bin
	I0927 01:37:48.336901   69534 out.go:352] Setting JSON to false
	I0927 01:37:48.337754   69534 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8413,"bootTime":1727392655,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 01:37:48.337841   69534 start.go:139] virtualization: kvm guest
	I0927 01:37:48.340035   69534 out.go:177] * [default-k8s-diff-port-368295] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0927 01:37:48.341151   69534 out.go:177]   - MINIKUBE_LOCATION=19711
	I0927 01:37:48.341211   69534 notify.go:220] Checking for updates...
	I0927 01:37:48.343607   69534 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 01:37:48.344933   69534 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 01:37:48.346113   69534 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-14935/.minikube
	I0927 01:37:48.347142   69534 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0927 01:37:48.348261   69534 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 01:37:48.349842   69534 config.go:182] Loaded profile config "default-k8s-diff-port-368295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 01:37:48.350212   69534 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:37:48.350278   69534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:37:48.365272   69534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44347
	I0927 01:37:48.365662   69534 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:37:48.366137   69534 main.go:141] libmachine: Using API Version  1
	I0927 01:37:48.366162   69534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:37:48.366548   69534 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:37:48.366713   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:37:48.366938   69534 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 01:37:48.367236   69534 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:37:48.367265   69534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:37:48.381678   69534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39857
	I0927 01:37:48.382169   69534 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:37:48.382627   69534 main.go:141] libmachine: Using API Version  1
	I0927 01:37:48.382650   69534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:37:48.382911   69534 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:37:48.383023   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:37:48.415092   69534 out.go:177] * Using the kvm2 driver based on existing profile
	I0927 01:37:48.416340   69534 start.go:297] selected driver: kvm2
	I0927 01:37:48.416354   69534 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-368295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-368295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.83 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 01:37:48.416459   69534 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 01:37:48.417093   69534 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 01:37:48.417164   69534 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19711-14935/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0927 01:37:48.432138   69534 install.go:137] /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I0927 01:37:48.432534   69534 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 01:37:48.432563   69534 cni.go:84] Creating CNI manager for ""
	I0927 01:37:48.432604   69534 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:37:48.432635   69534 start.go:340] cluster config:
	{Name:default-k8s-diff-port-368295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-368295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.83 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 01:37:48.432737   69534 iso.go:125] acquiring lock: {Name:mkc202a14fbe20838e31e7efc444c4f65351f9ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 01:37:48.435057   69534 out.go:177] * Starting "default-k8s-diff-port-368295" primary control-plane node in "default-k8s-diff-port-368295" cluster
	I0927 01:37:48.436502   69534 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 01:37:48.436543   69534 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0927 01:37:48.436557   69534 cache.go:56] Caching tarball of preloaded images
	I0927 01:37:48.436624   69534 preload.go:172] Found /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0927 01:37:48.436634   69534 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0927 01:37:48.436718   69534 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/config.json ...
	I0927 01:37:48.436885   69534 start.go:360] acquireMachinesLock for default-k8s-diff-port-368295: {Name:mkef866a3f2eb57063758ef06140cca3a2e40b18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 01:37:50.823565   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:37:53.895575   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:37:59.975554   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:03.047567   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:09.127558   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:12.199592   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:18.279516   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:21.351643   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:27.435515   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:30.503604   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:36.583590   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:39.655593   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:45.735581   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:48.807587   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:54.887542   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:57.959601   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:04.039570   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:07.111555   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:13.191559   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:16.263625   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:22.343607   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:25.415561   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:31.495531   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:34.567598   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:40.647577   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:43.719602   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:49.799620   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:52.871596   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:58.951600   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:40:02.023635   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:40:08.103596   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:40:11.175614   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:40:17.255583   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:40:20.327522   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:40:26.407598   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:40:29.479580   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:40:32.484148   69234 start.go:364] duration metric: took 3m6.827897292s to acquireMachinesLock for "embed-certs-245911"
	I0927 01:40:32.484202   69234 start.go:96] Skipping create...Using existing machine configuration
	I0927 01:40:32.484210   69234 fix.go:54] fixHost starting: 
	I0927 01:40:32.484708   69234 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:40:32.484758   69234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:40:32.500356   69234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41925
	I0927 01:40:32.500869   69234 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:40:32.501356   69234 main.go:141] libmachine: Using API Version  1
	I0927 01:40:32.501376   69234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:40:32.501678   69234 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:40:32.501872   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	I0927 01:40:32.502014   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetState
	I0927 01:40:32.503863   69234 fix.go:112] recreateIfNeeded on embed-certs-245911: state=Stopped err=<nil>
	I0927 01:40:32.503884   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	W0927 01:40:32.504047   69234 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 01:40:32.506829   69234 out.go:177] * Restarting existing kvm2 VM for "embed-certs-245911" ...
	I0927 01:40:32.481407   68676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 01:40:32.481445   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetMachineName
	I0927 01:40:32.481786   68676 buildroot.go:166] provisioning hostname "no-preload-521072"
	I0927 01:40:32.481815   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetMachineName
	I0927 01:40:32.482031   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:40:32.483999   68676 machine.go:96] duration metric: took 4m37.428764548s to provisionDockerMachine
	I0927 01:40:32.484048   68676 fix.go:56] duration metric: took 4m37.449461246s for fixHost
	I0927 01:40:32.484055   68676 start.go:83] releasing machines lock for "no-preload-521072", held for 4m37.449534693s
	W0927 01:40:32.484075   68676 start.go:714] error starting host: provision: host is not running
	W0927 01:40:32.484176   68676 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0927 01:40:32.484183   68676 start.go:729] Will try again in 5 seconds ...
	I0927 01:40:32.508417   69234 main.go:141] libmachine: (embed-certs-245911) Calling .Start
	I0927 01:40:32.508598   69234 main.go:141] libmachine: (embed-certs-245911) Ensuring networks are active...
	I0927 01:40:32.509477   69234 main.go:141] libmachine: (embed-certs-245911) Ensuring network default is active
	I0927 01:40:32.509830   69234 main.go:141] libmachine: (embed-certs-245911) Ensuring network mk-embed-certs-245911 is active
	I0927 01:40:32.510208   69234 main.go:141] libmachine: (embed-certs-245911) Getting domain xml...
	I0927 01:40:32.510838   69234 main.go:141] libmachine: (embed-certs-245911) Creating domain...
	I0927 01:40:33.718381   69234 main.go:141] libmachine: (embed-certs-245911) Waiting to get IP...
	I0927 01:40:33.719223   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:33.719554   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:33.719611   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:33.719550   70125 retry.go:31] will retry after 265.21442ms: waiting for machine to come up
	I0927 01:40:33.986199   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:33.986700   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:33.986728   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:33.986658   70125 retry.go:31] will retry after 308.926274ms: waiting for machine to come up
	I0927 01:40:34.297317   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:34.297734   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:34.297755   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:34.297697   70125 retry.go:31] will retry after 466.52815ms: waiting for machine to come up
	I0927 01:40:34.765171   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:34.765616   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:34.765643   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:34.765570   70125 retry.go:31] will retry after 510.417499ms: waiting for machine to come up
	I0927 01:40:35.277175   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:35.277547   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:35.277576   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:35.277488   70125 retry.go:31] will retry after 522.865286ms: waiting for machine to come up
	I0927 01:40:37.485696   68676 start.go:360] acquireMachinesLock for no-preload-521072: {Name:mkef866a3f2eb57063758ef06140cca3a2e40b18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 01:40:35.802177   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:35.802620   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:35.802646   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:35.802584   70125 retry.go:31] will retry after 611.490499ms: waiting for machine to come up
	I0927 01:40:36.415249   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:36.415733   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:36.415793   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:36.415709   70125 retry.go:31] will retry after 744.420766ms: waiting for machine to come up
	I0927 01:40:37.161647   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:37.162076   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:37.162112   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:37.162022   70125 retry.go:31] will retry after 1.464523837s: waiting for machine to come up
	I0927 01:40:38.627935   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:38.628275   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:38.628302   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:38.628237   70125 retry.go:31] will retry after 1.840524237s: waiting for machine to come up
	I0927 01:40:40.471433   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:40.471823   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:40.471851   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:40.471781   70125 retry.go:31] will retry after 1.9424331s: waiting for machine to come up
	I0927 01:40:42.416527   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:42.416978   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:42.417007   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:42.416935   70125 retry.go:31] will retry after 2.553410529s: waiting for machine to come up
	I0927 01:40:44.973083   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:44.973446   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:44.973465   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:44.973402   70125 retry.go:31] will retry after 3.286267983s: waiting for machine to come up
	I0927 01:40:48.260792   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:48.261216   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:48.261241   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:48.261179   70125 retry.go:31] will retry after 3.302667041s: waiting for machine to come up
	I0927 01:40:52.800240   69333 start.go:364] duration metric: took 3m25.347970249s to acquireMachinesLock for "old-k8s-version-612261"
	I0927 01:40:52.800310   69333 start.go:96] Skipping create...Using existing machine configuration
	I0927 01:40:52.800317   69333 fix.go:54] fixHost starting: 
	I0927 01:40:52.800742   69333 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:40:52.800800   69333 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:40:52.818217   69333 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45095
	I0927 01:40:52.818644   69333 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:40:52.819065   69333 main.go:141] libmachine: Using API Version  1
	I0927 01:40:52.819086   69333 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:40:52.819408   69333 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:40:52.819544   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	I0927 01:40:52.819646   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetState
	I0927 01:40:52.820921   69333 fix.go:112] recreateIfNeeded on old-k8s-version-612261: state=Stopped err=<nil>
	I0927 01:40:52.820956   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	W0927 01:40:52.821110   69333 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 01:40:52.823209   69333 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-612261" ...
	I0927 01:40:51.567691   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.568205   69234 main.go:141] libmachine: (embed-certs-245911) Found IP for machine: 192.168.39.158
	I0927 01:40:51.568241   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has current primary IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.568250   69234 main.go:141] libmachine: (embed-certs-245911) Reserving static IP address...
	I0927 01:40:51.568731   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "embed-certs-245911", mac: "52:54:00:bd:42:a3", ip: "192.168.39.158"} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:51.568764   69234 main.go:141] libmachine: (embed-certs-245911) DBG | skip adding static IP to network mk-embed-certs-245911 - found existing host DHCP lease matching {name: "embed-certs-245911", mac: "52:54:00:bd:42:a3", ip: "192.168.39.158"}
	I0927 01:40:51.568781   69234 main.go:141] libmachine: (embed-certs-245911) Reserved static IP address: 192.168.39.158
	I0927 01:40:51.568798   69234 main.go:141] libmachine: (embed-certs-245911) Waiting for SSH to be available...
	I0927 01:40:51.568806   69234 main.go:141] libmachine: (embed-certs-245911) DBG | Getting to WaitForSSH function...
	I0927 01:40:51.570819   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.571139   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:51.571167   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.571321   69234 main.go:141] libmachine: (embed-certs-245911) DBG | Using SSH client type: external
	I0927 01:40:51.571370   69234 main.go:141] libmachine: (embed-certs-245911) DBG | Using SSH private key: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/embed-certs-245911/id_rsa (-rw-------)
	I0927 01:40:51.571401   69234 main.go:141] libmachine: (embed-certs-245911) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.158 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19711-14935/.minikube/machines/embed-certs-245911/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 01:40:51.571414   69234 main.go:141] libmachine: (embed-certs-245911) DBG | About to run SSH command:
	I0927 01:40:51.571422   69234 main.go:141] libmachine: (embed-certs-245911) DBG | exit 0
	I0927 01:40:51.691525   69234 main.go:141] libmachine: (embed-certs-245911) DBG | SSH cmd err, output: <nil>: 
	I0927 01:40:51.691953   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetConfigRaw
	I0927 01:40:51.692573   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetIP
	I0927 01:40:51.695121   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.695541   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:51.695572   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.695871   69234 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/embed-certs-245911/config.json ...
	I0927 01:40:51.696087   69234 machine.go:93] provisionDockerMachine start ...
	I0927 01:40:51.696109   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	I0927 01:40:51.696312   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:40:51.698740   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.699086   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:51.699112   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.699229   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:40:51.699415   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:51.699552   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:51.699679   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:40:51.699810   69234 main.go:141] libmachine: Using SSH client type: native
	I0927 01:40:51.699998   69234 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0927 01:40:51.700011   69234 main.go:141] libmachine: About to run SSH command:
	hostname
	I0927 01:40:51.799534   69234 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0927 01:40:51.799559   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetMachineName
	I0927 01:40:51.799764   69234 buildroot.go:166] provisioning hostname "embed-certs-245911"
	I0927 01:40:51.799792   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetMachineName
	I0927 01:40:51.799987   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:40:51.802464   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.802819   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:51.802844   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.802960   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:40:51.803131   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:51.803290   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:51.803502   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:40:51.803672   69234 main.go:141] libmachine: Using SSH client type: native
	I0927 01:40:51.803868   69234 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0927 01:40:51.803888   69234 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-245911 && echo "embed-certs-245911" | sudo tee /etc/hostname
	I0927 01:40:51.917988   69234 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-245911
	
	I0927 01:40:51.918019   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:40:51.920484   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.920800   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:51.920831   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.921041   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:40:51.921224   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:51.921383   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:51.921511   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:40:51.921693   69234 main.go:141] libmachine: Using SSH client type: native
	I0927 01:40:51.921883   69234 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0927 01:40:51.921901   69234 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-245911' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-245911/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-245911' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 01:40:52.028582   69234 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 01:40:52.028609   69234 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19711-14935/.minikube CaCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19711-14935/.minikube}
	I0927 01:40:52.028672   69234 buildroot.go:174] setting up certificates
	I0927 01:40:52.028686   69234 provision.go:84] configureAuth start
	I0927 01:40:52.028704   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetMachineName
	I0927 01:40:52.029001   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetIP
	I0927 01:40:52.031742   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.032088   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:52.032117   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.032273   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:40:52.034392   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.034733   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:52.034754   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.034905   69234 provision.go:143] copyHostCerts
	I0927 01:40:52.034956   69234 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem, removing ...
	I0927 01:40:52.034969   69234 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem
	I0927 01:40:52.035042   69234 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem (1078 bytes)
	I0927 01:40:52.035172   69234 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem, removing ...
	I0927 01:40:52.035185   69234 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem
	I0927 01:40:52.035224   69234 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem (1123 bytes)
	I0927 01:40:52.035319   69234 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem, removing ...
	I0927 01:40:52.035329   69234 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem
	I0927 01:40:52.035363   69234 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem (1675 bytes)
	I0927 01:40:52.035433   69234 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem org=jenkins.embed-certs-245911 san=[127.0.0.1 192.168.39.158 embed-certs-245911 localhost minikube]
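(A hypothetical manual check, not part of this run: the server certificate generated above can be inspected on the Jenkins host to confirm that the SANs listed in the log, 127.0.0.1, 192.168.39.158, embed-certs-245911, localhost and minikube, were actually embedded:)

	openssl x509 -in /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'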
	I0927 01:40:52.206591   69234 provision.go:177] copyRemoteCerts
	I0927 01:40:52.206657   69234 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 01:40:52.206724   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:40:52.209445   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.209770   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:52.209792   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.209995   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:40:52.210234   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:52.210416   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:40:52.210578   69234 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/embed-certs-245911/id_rsa Username:docker}
	I0927 01:40:52.290176   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0927 01:40:52.313645   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0927 01:40:52.336446   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0927 01:40:52.359182   69234 provision.go:87] duration metric: took 330.481958ms to configureAuth
	I0927 01:40:52.359214   69234 buildroot.go:189] setting minikube options for container-runtime
	I0927 01:40:52.359464   69234 config.go:182] Loaded profile config "embed-certs-245911": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 01:40:52.359551   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:40:52.362163   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.362488   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:52.362513   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.362670   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:40:52.362826   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:52.362976   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:52.363133   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:40:52.363334   69234 main.go:141] libmachine: Using SSH client type: native
	I0927 01:40:52.363532   69234 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0927 01:40:52.363553   69234 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 01:40:52.574326   69234 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 01:40:52.574354   69234 machine.go:96] duration metric: took 878.253718ms to provisionDockerMachine
	I0927 01:40:52.574368   69234 start.go:293] postStartSetup for "embed-certs-245911" (driver="kvm2")
	I0927 01:40:52.574381   69234 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 01:40:52.574398   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	I0927 01:40:52.574688   69234 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 01:40:52.574714   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:40:52.577727   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.578035   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:52.578060   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.578227   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:40:52.578411   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:52.578555   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:40:52.578735   69234 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/embed-certs-245911/id_rsa Username:docker}
	I0927 01:40:52.658636   69234 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 01:40:52.663048   69234 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 01:40:52.663077   69234 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/addons for local assets ...
	I0927 01:40:52.663147   69234 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/files for local assets ...
	I0927 01:40:52.663223   69234 filesync.go:149] local asset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> 221382.pem in /etc/ssl/certs
	I0927 01:40:52.663322   69234 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 01:40:52.673347   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /etc/ssl/certs/221382.pem (1708 bytes)
	I0927 01:40:52.697092   69234 start.go:296] duration metric: took 122.71069ms for postStartSetup
	I0927 01:40:52.697126   69234 fix.go:56] duration metric: took 20.212915975s for fixHost
	I0927 01:40:52.697145   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:40:52.699817   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.700173   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:52.700202   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.700364   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:40:52.700558   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:52.700735   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:52.700921   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:40:52.701097   69234 main.go:141] libmachine: Using SSH client type: native
	I0927 01:40:52.701269   69234 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0927 01:40:52.701285   69234 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 01:40:52.800080   69234 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727401252.775762391
	
	I0927 01:40:52.800102   69234 fix.go:216] guest clock: 1727401252.775762391
	I0927 01:40:52.800111   69234 fix.go:229] Guest: 2024-09-27 01:40:52.775762391 +0000 UTC Remote: 2024-09-27 01:40:52.697129165 +0000 UTC m=+207.179045808 (delta=78.633226ms)
	I0927 01:40:52.800145   69234 fix.go:200] guest clock delta is within tolerance: 78.633226ms
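(The delta reported above is just the guest timestamp minus the host-side timestamp: 1727401252.775762391 - 1727401252.697129165 = 0.078633226 s, i.e. the 78.633226ms that fix.go logs as being within tolerance.)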
	I0927 01:40:52.800152   69234 start.go:83] releasing machines lock for "embed-certs-245911", held for 20.315972034s
	I0927 01:40:52.800183   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	I0927 01:40:52.800495   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetIP
	I0927 01:40:52.803196   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.803657   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:52.803700   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.803874   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	I0927 01:40:52.804419   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	I0927 01:40:52.804610   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	I0927 01:40:52.804731   69234 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 01:40:52.804771   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:40:52.804813   69234 ssh_runner.go:195] Run: cat /version.json
	I0927 01:40:52.804837   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:40:52.807320   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.807346   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.807680   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:52.807731   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:52.807759   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.807807   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.807916   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:40:52.808070   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:40:52.808150   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:52.808262   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:52.808331   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:40:52.808384   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:40:52.808468   69234 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/embed-certs-245911/id_rsa Username:docker}
	I0927 01:40:52.808522   69234 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/embed-certs-245911/id_rsa Username:docker}
	I0927 01:40:52.908963   69234 ssh_runner.go:195] Run: systemctl --version
	I0927 01:40:52.915158   69234 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 01:40:53.067605   69234 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 01:40:53.074171   69234 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 01:40:53.074241   69234 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 01:40:53.091718   69234 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0927 01:40:53.091742   69234 start.go:495] detecting cgroup driver to use...
	I0927 01:40:53.091813   69234 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 01:40:53.108730   69234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 01:40:53.122920   69234 docker.go:217] disabling cri-docker service (if available) ...
	I0927 01:40:53.122984   69234 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 01:40:53.137487   69234 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 01:40:53.152420   69234 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 01:40:53.269491   69234 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 01:40:53.417893   69234 docker.go:233] disabling docker service ...
	I0927 01:40:53.417951   69234 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 01:40:53.442201   69234 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 01:40:53.459920   69234 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 01:40:53.589768   69234 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 01:40:53.719203   69234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 01:40:53.733145   69234 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 01:40:53.751853   69234 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0927 01:40:53.751919   69234 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:40:53.763230   69234 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 01:40:53.763294   69234 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:40:53.774864   69234 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:40:53.786149   69234 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:40:53.797167   69234 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 01:40:53.808495   69234 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:40:53.819285   69234 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:40:53.838497   69234 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
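(Taken together, and assuming each sed expression above matched, the net effect on /etc/crio/crio.conf.d/02-crio.conf is roughly the following; this is a reconstruction from the commands in this log, not a capture of the file on the VM:)

	pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

(In addition, /etc/cni/net.mk is removed and any pre-existing net.ipv4.ip_unprivileged_port_start entry is dropped before the new one is inserted.)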
	I0927 01:40:53.850490   69234 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 01:40:53.860309   69234 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0927 01:40:53.860377   69234 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0927 01:40:53.875533   69234 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 01:40:53.885752   69234 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:40:54.014352   69234 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0927 01:40:54.107866   69234 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 01:40:54.107926   69234 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 01:40:54.113206   69234 start.go:563] Will wait 60s for crictl version
	I0927 01:40:54.113256   69234 ssh_runner.go:195] Run: which crictl
	I0927 01:40:54.117229   69234 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 01:40:54.156365   69234 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0927 01:40:54.156459   69234 ssh_runner.go:195] Run: crio --version
	I0927 01:40:54.183974   69234 ssh_runner.go:195] Run: crio --version
	I0927 01:40:54.214440   69234 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0927 01:40:54.215714   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetIP
	I0927 01:40:54.218624   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:54.218975   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:54.219013   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:54.219180   69234 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0927 01:40:54.223450   69234 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 01:40:54.236761   69234 kubeadm.go:883] updating cluster {Name:embed-certs-245911 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-245911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.158 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 01:40:54.236923   69234 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 01:40:54.236989   69234 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 01:40:54.276635   69234 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0927 01:40:54.276708   69234 ssh_runner.go:195] Run: which lz4
	I0927 01:40:54.281055   69234 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0927 01:40:54.285439   69234 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0927 01:40:54.285472   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0927 01:40:52.824650   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .Start
	I0927 01:40:52.824802   69333 main.go:141] libmachine: (old-k8s-version-612261) Ensuring networks are active...
	I0927 01:40:52.825590   69333 main.go:141] libmachine: (old-k8s-version-612261) Ensuring network default is active
	I0927 01:40:52.825908   69333 main.go:141] libmachine: (old-k8s-version-612261) Ensuring network mk-old-k8s-version-612261 is active
	I0927 01:40:52.826326   69333 main.go:141] libmachine: (old-k8s-version-612261) Getting domain xml...
	I0927 01:40:52.827108   69333 main.go:141] libmachine: (old-k8s-version-612261) Creating domain...
	I0927 01:40:54.071322   69333 main.go:141] libmachine: (old-k8s-version-612261) Waiting to get IP...
	I0927 01:40:54.072357   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:40:54.072756   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:40:54.072821   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:40:54.072738   70279 retry.go:31] will retry after 264.648837ms: waiting for machine to come up
	I0927 01:40:54.339366   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:40:54.339799   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:40:54.339827   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:40:54.339731   70279 retry.go:31] will retry after 343.432635ms: waiting for machine to come up
	I0927 01:40:54.685260   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:40:54.685746   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:40:54.685780   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:40:54.685714   70279 retry.go:31] will retry after 455.276623ms: waiting for machine to come up
	I0927 01:40:55.142206   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:40:55.142679   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:40:55.142708   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:40:55.142637   70279 retry.go:31] will retry after 419.074502ms: waiting for machine to come up
	I0927 01:40:55.563324   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:40:55.565342   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:40:55.565368   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:40:55.565287   70279 retry.go:31] will retry after 587.161471ms: waiting for machine to come up
	I0927 01:40:56.154584   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:40:56.155182   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:40:56.155220   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:40:56.155109   70279 retry.go:31] will retry after 782.426926ms: waiting for machine to come up
	I0927 01:40:56.938784   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:40:56.939201   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:40:56.939228   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:40:56.939132   70279 retry.go:31] will retry after 781.231902ms: waiting for machine to come up
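(Editor's note, for context on the "will retry after ..." lines above: libmachine keeps polling the libvirt network for a DHCP lease matching the domain's MAC address, sleeping a growing, jittered interval between attempts. The sketch below is illustrative only; lookupLeaseIP is a hypothetical stand-in and the backoff constants are assumptions, not minikube's actual retry implementation.)

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("no DHCP lease yet")

// lookupLeaseIP is a hypothetical stand-in for querying the libvirt network
// for a lease whose MAC matches the domain; it always fails in this sketch.
func lookupLeaseIP(mac string) (string, error) {
	return "", errNoLease
}

// waitForIP retries with a growing, jittered delay, in the spirit of the
// "will retry after 264ms / 343ms / 455ms ..." messages in the log.
func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupLeaseIP(mac); err == nil {
			return ip, nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2 // grow the base delay each attempt
	}
	return "", fmt.Errorf("timed out waiting for an IP for MAC %s", mac)
}

func main() {
	if _, err := waitForIP("52:54:00:f1:a6:2e", 3*time.Second); err != nil {
		fmt.Println(err)
	}
}
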
	I0927 01:40:55.723619   69234 crio.go:462] duration metric: took 1.442589436s to copy over tarball
	I0927 01:40:55.723705   69234 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0927 01:40:57.775673   69234 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.051936146s)
	I0927 01:40:57.775697   69234 crio.go:469] duration metric: took 2.052045538s to extract the tarball
	I0927 01:40:57.775704   69234 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0927 01:40:57.812769   69234 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 01:40:57.853219   69234 crio.go:514] all images are preloaded for cri-o runtime.
	I0927 01:40:57.853240   69234 cache_images.go:84] Images are preloaded, skipping loading
	I0927 01:40:57.853248   69234 kubeadm.go:934] updating node { 192.168.39.158 8443 v1.31.1 crio true true} ...
	I0927 01:40:57.853354   69234 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-245911 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.158
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-245911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 01:40:57.853495   69234 ssh_runner.go:195] Run: crio config
	I0927 01:40:57.908273   69234 cni.go:84] Creating CNI manager for ""
	I0927 01:40:57.908301   69234 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:40:57.908322   69234 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 01:40:57.908356   69234 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.158 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-245911 NodeName:embed-certs-245911 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.158"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.158 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0927 01:40:57.908542   69234 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.158
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-245911"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.158
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.158"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0927 01:40:57.908613   69234 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 01:40:57.918923   69234 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 01:40:57.919021   69234 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0927 01:40:57.928576   69234 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0927 01:40:57.945515   69234 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 01:40:57.962239   69234 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0927 01:40:57.979722   69234 ssh_runner.go:195] Run: grep 192.168.39.158	control-plane.minikube.internal$ /etc/hosts
	I0927 01:40:57.983709   69234 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.158	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 01:40:57.996181   69234 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:40:58.119502   69234 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 01:40:58.137022   69234 certs.go:68] Setting up /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/embed-certs-245911 for IP: 192.168.39.158
	I0927 01:40:58.137048   69234 certs.go:194] generating shared ca certs ...
	I0927 01:40:58.137068   69234 certs.go:226] acquiring lock for ca certs: {Name:mkdfc5b7e93f77f5ae72cc653545624244421aa5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:40:58.137250   69234 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key
	I0927 01:40:58.137312   69234 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key
	I0927 01:40:58.137324   69234 certs.go:256] generating profile certs ...
	I0927 01:40:58.137444   69234 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/embed-certs-245911/client.key
	I0927 01:40:58.137522   69234 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/embed-certs-245911/apiserver.key.e289c840
	I0927 01:40:58.137574   69234 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/embed-certs-245911/proxy-client.key
	I0927 01:40:58.137731   69234 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem (1338 bytes)
	W0927 01:40:58.137774   69234 certs.go:480] ignoring /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138_empty.pem, impossibly tiny 0 bytes
	I0927 01:40:58.137787   69234 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem (1671 bytes)
	I0927 01:40:58.137819   69234 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem (1078 bytes)
	I0927 01:40:58.137850   69234 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem (1123 bytes)
	I0927 01:40:58.137883   69234 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem (1675 bytes)
	I0927 01:40:58.137928   69234 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem (1708 bytes)
	I0927 01:40:58.138551   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 01:40:58.179399   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0927 01:40:58.211297   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 01:40:58.245549   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0927 01:40:58.276837   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/embed-certs-245911/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0927 01:40:58.313750   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/embed-certs-245911/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0927 01:40:58.338145   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/embed-certs-245911/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 01:40:58.361373   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/embed-certs-245911/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0927 01:40:58.384790   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 01:40:58.407617   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem --> /usr/share/ca-certificates/22138.pem (1338 bytes)
	I0927 01:40:58.430621   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /usr/share/ca-certificates/221382.pem (1708 bytes)
	I0927 01:40:58.453382   69234 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 01:40:58.470177   69234 ssh_runner.go:195] Run: openssl version
	I0927 01:40:58.476280   69234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221382.pem && ln -fs /usr/share/ca-certificates/221382.pem /etc/ssl/certs/221382.pem"
	I0927 01:40:58.489039   69234 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221382.pem
	I0927 01:40:58.493726   69234 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 00:32 /usr/share/ca-certificates/221382.pem
	I0927 01:40:58.493780   69234 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221382.pem
	I0927 01:40:58.499856   69234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221382.pem /etc/ssl/certs/3ec20f2e.0"
	I0927 01:40:58.511032   69234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 01:40:58.521694   69234 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:40:58.525991   69234 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:16 /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:40:58.526031   69234 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:40:58.531619   69234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 01:40:58.542017   69234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22138.pem && ln -fs /usr/share/ca-certificates/22138.pem /etc/ssl/certs/22138.pem"
	I0927 01:40:58.552591   69234 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22138.pem
	I0927 01:40:58.557047   69234 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 00:32 /usr/share/ca-certificates/22138.pem
	I0927 01:40:58.557086   69234 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22138.pem
	I0927 01:40:58.562874   69234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22138.pem /etc/ssl/certs/51391683.0"
	I0927 01:40:58.574052   69234 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 01:40:58.578537   69234 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0927 01:40:58.584323   69234 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0927 01:40:58.590033   69234 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0927 01:40:58.596013   69234 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0927 01:40:58.601572   69234 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0927 01:40:58.606980   69234 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0927 01:40:58.612554   69234 kubeadm.go:392] StartCluster: {Name:embed-certs-245911 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-245911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.158 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 01:40:58.612648   69234 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0927 01:40:58.612704   69234 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 01:40:58.649228   69234 cri.go:89] found id: ""
	I0927 01:40:58.649306   69234 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0927 01:40:58.661599   69234 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0927 01:40:58.661628   69234 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0927 01:40:58.661688   69234 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0927 01:40:58.671907   69234 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0927 01:40:58.672851   69234 kubeconfig.go:125] found "embed-certs-245911" server: "https://192.168.39.158:8443"
	I0927 01:40:58.674753   69234 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0927 01:40:58.684614   69234 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.158
	I0927 01:40:58.684643   69234 kubeadm.go:1160] stopping kube-system containers ...
	I0927 01:40:58.684652   69234 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0927 01:40:58.684715   69234 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 01:40:58.726714   69234 cri.go:89] found id: ""
	I0927 01:40:58.726816   69234 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0927 01:40:58.743675   69234 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 01:40:58.753456   69234 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 01:40:58.753485   69234 kubeadm.go:157] found existing configuration files:
	
	I0927 01:40:58.753535   69234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 01:40:58.762724   69234 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 01:40:58.762821   69234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 01:40:58.772558   69234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 01:40:58.781732   69234 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 01:40:58.781790   69234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 01:40:58.791109   69234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 01:40:58.800066   69234 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 01:40:58.800127   69234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 01:40:58.809338   69234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 01:40:58.818214   69234 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 01:40:58.818260   69234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 01:40:58.828049   69234 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 01:40:58.837606   69234 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:40:58.942395   69234 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:40:59.758951   69234 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:40:59.966377   69234 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:00.036702   69234 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
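(Editor's note: the five commands above re-run kubeadm's init phases one by one against the generated config, in the order certs, kubeconfig, kubelet-start, control-plane, etcd. Purely as an illustration of that sequence, and not minikube's bootstrapper code, the sketch below shells out to the same phases in order; the paths and PATH prefix are taken from the log and the error handling is simplified.)

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	const kubeadmYAML = "/var/tmp/minikube/kubeadm.yaml"
	// Same phase order as the log above.
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, phase := range phases {
		cmd := fmt.Sprintf(
			`sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase %s --config %s`,
			phase, kubeadmYAML)
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			log.Fatalf("phase %q failed: %v\n%s", phase, err, out)
		}
	}
}
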
	I0927 01:41:00.126663   69234 api_server.go:52] waiting for apiserver process to appear ...
	I0927 01:41:00.126743   69234 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:40:57.722147   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:40:57.722637   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:40:57.722657   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:40:57.722593   70279 retry.go:31] will retry after 1.223133601s: waiting for machine to come up
	I0927 01:40:58.947836   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:40:58.948362   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:40:58.948388   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:40:58.948326   70279 retry.go:31] will retry after 1.155368003s: waiting for machine to come up
	I0927 01:41:00.105812   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:00.106288   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:41:00.106356   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:41:00.106280   70279 retry.go:31] will retry after 2.324904017s: waiting for machine to come up
	I0927 01:41:00.627542   69234 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:01.126971   69234 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:01.626940   69234 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:02.127478   69234 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:02.176746   69234 api_server.go:72] duration metric: took 2.050081672s to wait for apiserver process to appear ...
	I0927 01:41:02.176775   69234 api_server.go:88] waiting for apiserver healthz status ...
	I0927 01:41:02.176798   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:41:02.177442   69234 api_server.go:269] stopped: https://192.168.39.158:8443/healthz: Get "https://192.168.39.158:8443/healthz": dial tcp 192.168.39.158:8443: connect: connection refused
	I0927 01:41:02.677488   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:41:04.824718   69234 api_server.go:279] https://192.168.39.158:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0927 01:41:04.824748   69234 api_server.go:103] status: https://192.168.39.158:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0927 01:41:04.824763   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:41:04.850790   69234 api_server.go:279] https://192.168.39.158:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0927 01:41:04.850820   69234 api_server.go:103] status: https://192.168.39.158:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0927 01:41:05.177167   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:41:05.201660   69234 api_server.go:279] https://192.168.39.158:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:41:05.201696   69234 api_server.go:103] status: https://192.168.39.158:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:41:02.432597   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:02.433066   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:41:02.433096   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:41:02.433026   70279 retry.go:31] will retry after 2.598889471s: waiting for machine to come up
	I0927 01:41:05.034614   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:05.035001   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:41:05.035023   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:41:05.034973   70279 retry.go:31] will retry after 3.064943329s: waiting for machine to come up
	I0927 01:41:05.677514   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:41:05.683506   69234 api_server.go:279] https://192.168.39.158:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:41:05.683543   69234 api_server.go:103] status: https://192.168.39.158:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:41:06.177064   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:41:06.181304   69234 api_server.go:279] https://192.168.39.158:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:41:06.181339   69234 api_server.go:103] status: https://192.168.39.158:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:41:06.676872   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:41:06.681269   69234 api_server.go:279] https://192.168.39.158:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:41:06.681297   69234 api_server.go:103] status: https://192.168.39.158:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:41:07.176902   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:41:07.181397   69234 api_server.go:279] https://192.168.39.158:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:41:07.181425   69234 api_server.go:103] status: https://192.168.39.158:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:41:07.677457   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:41:07.682057   69234 api_server.go:279] https://192.168.39.158:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:41:07.682087   69234 api_server.go:103] status: https://192.168.39.158:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:41:08.177696   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:41:08.181752   69234 api_server.go:279] https://192.168.39.158:8443/healthz returned 200:
	ok
	I0927 01:41:08.188257   69234 api_server.go:141] control plane version: v1.31.1
	I0927 01:41:08.188278   69234 api_server.go:131] duration metric: took 6.011495616s to wait for apiserver health ...
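(Editor's note: the wait above is a plain poll of the apiserver's /healthz endpoint; connection refused, 403, and 500 responses are treated as "not ready yet", and the loop stops at the first 200. The sketch below is a minimal, self-contained illustration of that pattern, not minikube's api_server.go; TLS verification is skipped purely for brevity.)

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the given healthz URL until it returns 200 OK or the
// timeout expires. Any error or non-200 status counts as "not ready yet".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			code := resp.StatusCode
			resp.Body.Close()
			if code == http.StatusOK {
				return nil // apiserver reports healthy
			}
			fmt.Printf("healthz returned %d, retrying...\n", code)
		} else {
			fmt.Printf("healthz not reachable yet: %v\n", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.158:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
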
	I0927 01:41:08.188285   69234 cni.go:84] Creating CNI manager for ""
	I0927 01:41:08.188291   69234 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:41:08.190206   69234 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0927 01:41:08.191584   69234 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0927 01:41:08.202370   69234 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0927 01:41:08.224843   69234 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 01:41:08.234247   69234 system_pods.go:59] 8 kube-system pods found
	I0927 01:41:08.234275   69234 system_pods.go:61] "coredns-7c65d6cfc9-f2vxv" [3eed941e-e943-490b-a0a8-d543cec18a89] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0927 01:41:08.234284   69234 system_pods.go:61] "etcd-embed-certs-245911" [f88581ff-3747-4fe5-a4a2-6259c3b4554e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0927 01:41:08.234291   69234 system_pods.go:61] "kube-apiserver-embed-certs-245911" [3f1efb25-6e30-4d5f-baba-3e98b6fe531e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0927 01:41:08.234298   69234 system_pods.go:61] "kube-controller-manager-embed-certs-245911" [a624fc8d-fbe3-4b63-8a88-5f8069b21095] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0927 01:41:08.234302   69234 system_pods.go:61] "kube-proxy-pjf8v" [a1b76e67-803a-43fe-bff6-a4b0ddc246a1] Running
	I0927 01:41:08.234309   69234 system_pods.go:61] "kube-scheduler-embed-certs-245911" [0f7c146b-e2b7-4110-b010-f4599d0da410] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0927 01:41:08.234313   69234 system_pods.go:61] "metrics-server-6867b74b74-k8mdf" [6d1e68fb-5187-4bc6-abdb-44f598e351c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 01:41:08.234317   69234 system_pods.go:61] "storage-provisioner" [dc0a7806-bee8-4127-8218-b2e48fa8500b] Running
	I0927 01:41:08.234323   69234 system_pods.go:74] duration metric: took 9.462578ms to wait for pod list to return data ...
	I0927 01:41:08.234333   69234 node_conditions.go:102] verifying NodePressure condition ...
	I0927 01:41:08.238433   69234 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 01:41:08.238455   69234 node_conditions.go:123] node cpu capacity is 2
	I0927 01:41:08.238468   69234 node_conditions.go:105] duration metric: took 4.128775ms to run NodePressure ...
	I0927 01:41:08.238483   69234 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:08.502161   69234 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0927 01:41:08.506267   69234 kubeadm.go:739] kubelet initialised
	I0927 01:41:08.506290   69234 kubeadm.go:740] duration metric: took 4.099692ms waiting for restarted kubelet to initialise ...
	I0927 01:41:08.506299   69234 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:41:08.510964   69234 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-f2vxv" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:08.515262   69234 pod_ready.go:98] node "embed-certs-245911" hosting pod "coredns-7c65d6cfc9-f2vxv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-245911" has status "Ready":"False"
	I0927 01:41:08.515279   69234 pod_ready.go:82] duration metric: took 4.294632ms for pod "coredns-7c65d6cfc9-f2vxv" in "kube-system" namespace to be "Ready" ...
	E0927 01:41:08.515286   69234 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-245911" hosting pod "coredns-7c65d6cfc9-f2vxv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-245911" has status "Ready":"False"
	I0927 01:41:08.515298   69234 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:08.519627   69234 pod_ready.go:98] node "embed-certs-245911" hosting pod "etcd-embed-certs-245911" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-245911" has status "Ready":"False"
	I0927 01:41:08.519641   69234 pod_ready.go:82] duration metric: took 4.313975ms for pod "etcd-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	E0927 01:41:08.519648   69234 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-245911" hosting pod "etcd-embed-certs-245911" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-245911" has status "Ready":"False"
	I0927 01:41:08.519653   69234 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:08.523152   69234 pod_ready.go:98] node "embed-certs-245911" hosting pod "kube-apiserver-embed-certs-245911" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-245911" has status "Ready":"False"
	I0927 01:41:08.523165   69234 pod_ready.go:82] duration metric: took 3.50412ms for pod "kube-apiserver-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	E0927 01:41:08.523177   69234 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-245911" hosting pod "kube-apiserver-embed-certs-245911" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-245911" has status "Ready":"False"
	I0927 01:41:08.523186   69234 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:08.628811   69234 pod_ready.go:98] node "embed-certs-245911" hosting pod "kube-controller-manager-embed-certs-245911" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-245911" has status "Ready":"False"
	I0927 01:41:08.628847   69234 pod_ready.go:82] duration metric: took 105.648464ms for pod "kube-controller-manager-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	E0927 01:41:08.628859   69234 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-245911" hosting pod "kube-controller-manager-embed-certs-245911" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-245911" has status "Ready":"False"
	I0927 01:41:08.628868   69234 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-pjf8v" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:09.027358   69234 pod_ready.go:93] pod "kube-proxy-pjf8v" in "kube-system" namespace has status "Ready":"True"
	I0927 01:41:09.027383   69234 pod_ready.go:82] duration metric: took 398.507928ms for pod "kube-proxy-pjf8v" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:09.027393   69234 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:08.101834   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:08.102324   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:41:08.102358   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:41:08.102283   70279 retry.go:31] will retry after 4.242138543s: waiting for machine to come up
	I0927 01:41:13.708458   69534 start.go:364] duration metric: took 3m25.271525685s to acquireMachinesLock for "default-k8s-diff-port-368295"
	I0927 01:41:13.708525   69534 start.go:96] Skipping create...Using existing machine configuration
	I0927 01:41:13.708533   69534 fix.go:54] fixHost starting: 
	I0927 01:41:13.708923   69534 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:41:13.708979   69534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:41:13.726306   69534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46399
	I0927 01:41:13.726732   69534 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:41:13.727228   69534 main.go:141] libmachine: Using API Version  1
	I0927 01:41:13.727252   69534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:41:13.727579   69534 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:41:13.727781   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:41:13.727975   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetState
	I0927 01:41:13.729621   69534 fix.go:112] recreateIfNeeded on default-k8s-diff-port-368295: state=Stopped err=<nil>
	I0927 01:41:13.729657   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	W0927 01:41:13.729826   69534 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 01:41:13.731730   69534 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-368295" ...
	I0927 01:41:12.347378   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.347831   69333 main.go:141] libmachine: (old-k8s-version-612261) Found IP for machine: 192.168.72.129
	I0927 01:41:12.347855   69333 main.go:141] libmachine: (old-k8s-version-612261) Reserving static IP address...
	I0927 01:41:12.347872   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has current primary IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.348468   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "old-k8s-version-612261", mac: "52:54:00:f1:a6:2e", ip: "192.168.72.129"} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:12.348494   69333 main.go:141] libmachine: (old-k8s-version-612261) Reserved static IP address: 192.168.72.129
	I0927 01:41:12.348507   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | skip adding static IP to network mk-old-k8s-version-612261 - found existing host DHCP lease matching {name: "old-k8s-version-612261", mac: "52:54:00:f1:a6:2e", ip: "192.168.72.129"}
	I0927 01:41:12.348518   69333 main.go:141] libmachine: (old-k8s-version-612261) Waiting for SSH to be available...
	I0927 01:41:12.348537   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | Getting to WaitForSSH function...
	I0927 01:41:12.350917   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.351287   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:12.351335   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.351464   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | Using SSH client type: external
	I0927 01:41:12.351485   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | Using SSH private key: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/old-k8s-version-612261/id_rsa (-rw-------)
	I0927 01:41:12.351516   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.129 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19711-14935/.minikube/machines/old-k8s-version-612261/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 01:41:12.351525   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | About to run SSH command:
	I0927 01:41:12.351533   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | exit 0
	I0927 01:41:12.471347   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | SSH cmd err, output: <nil>: 
	I0927 01:41:12.471724   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetConfigRaw
	I0927 01:41:12.472352   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetIP
	I0927 01:41:12.474886   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.475299   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:12.475340   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.475628   69333 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/config.json ...
	I0927 01:41:12.475857   69333 machine.go:93] provisionDockerMachine start ...
	I0927 01:41:12.475879   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	I0927 01:41:12.476115   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:12.478594   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.478918   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:12.478945   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.479126   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:41:12.479340   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:12.479536   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:12.479695   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:41:12.479859   69333 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:12.480093   69333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I0927 01:41:12.480116   69333 main.go:141] libmachine: About to run SSH command:
	hostname
	I0927 01:41:12.579536   69333 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0927 01:41:12.579562   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetMachineName
	I0927 01:41:12.579785   69333 buildroot.go:166] provisioning hostname "old-k8s-version-612261"
	I0927 01:41:12.579798   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetMachineName
	I0927 01:41:12.579965   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:12.582679   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.583001   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:12.583027   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.583166   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:41:12.583372   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:12.583562   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:12.583727   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:41:12.583924   69333 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:12.584169   69333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I0927 01:41:12.584187   69333 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-612261 && echo "old-k8s-version-612261" | sudo tee /etc/hostname
	I0927 01:41:12.702223   69333 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-612261
	
	I0927 01:41:12.702252   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:12.705201   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.705564   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:12.705601   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.705817   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:41:12.706012   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:12.706154   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:12.706344   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:41:12.706538   69333 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:12.706720   69333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I0927 01:41:12.706738   69333 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-612261' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-612261/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-612261' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 01:41:12.816316   69333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 01:41:12.816343   69333 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19711-14935/.minikube CaCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19711-14935/.minikube}
	I0927 01:41:12.816376   69333 buildroot.go:174] setting up certificates
	I0927 01:41:12.816386   69333 provision.go:84] configureAuth start
	I0927 01:41:12.816394   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetMachineName
	I0927 01:41:12.816678   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetIP
	I0927 01:41:12.819190   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.819487   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:12.819511   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.819696   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:12.821843   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.822166   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:12.822203   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.822382   69333 provision.go:143] copyHostCerts
	I0927 01:41:12.822453   69333 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem, removing ...
	I0927 01:41:12.822466   69333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem
	I0927 01:41:12.822533   69333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem (1675 bytes)
	I0927 01:41:12.822641   69333 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem, removing ...
	I0927 01:41:12.822650   69333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem
	I0927 01:41:12.822682   69333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem (1078 bytes)
	I0927 01:41:12.822756   69333 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem, removing ...
	I0927 01:41:12.822766   69333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem
	I0927 01:41:12.822792   69333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem (1123 bytes)
	I0927 01:41:12.822859   69333 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-612261 san=[127.0.0.1 192.168.72.129 localhost minikube old-k8s-version-612261]
	I0927 01:41:13.054632   69333 provision.go:177] copyRemoteCerts
	I0927 01:41:13.054706   69333 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 01:41:13.054740   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:13.057895   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.058296   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:13.058329   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.058478   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:41:13.058696   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:13.058907   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:41:13.059062   69333 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/old-k8s-version-612261/id_rsa Username:docker}
	I0927 01:41:13.146378   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0927 01:41:13.176435   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0927 01:41:13.208974   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0927 01:41:13.240179   69333 provision.go:87] duration metric: took 423.77487ms to configureAuth
	I0927 01:41:13.240211   69333 buildroot.go:189] setting minikube options for container-runtime
	I0927 01:41:13.240412   69333 config.go:182] Loaded profile config "old-k8s-version-612261": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0927 01:41:13.240498   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:13.243514   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.243963   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:13.243991   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.244174   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:41:13.244419   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:13.244641   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:13.244838   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:41:13.245039   69333 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:13.245263   69333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I0927 01:41:13.245284   69333 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 01:41:13.476519   69333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 01:41:13.476545   69333 machine.go:96] duration metric: took 1.000674334s to provisionDockerMachine
	I0927 01:41:13.476558   69333 start.go:293] postStartSetup for "old-k8s-version-612261" (driver="kvm2")
	I0927 01:41:13.476574   69333 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 01:41:13.476593   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	I0927 01:41:13.476914   69333 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 01:41:13.476942   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:13.479326   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.479662   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:13.479686   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.479835   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:41:13.480027   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:13.480182   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:41:13.480337   69333 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/old-k8s-version-612261/id_rsa Username:docker}
	I0927 01:41:13.563321   69333 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 01:41:13.567844   69333 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 01:41:13.567867   69333 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/addons for local assets ...
	I0927 01:41:13.567929   69333 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/files for local assets ...
	I0927 01:41:13.568012   69333 filesync.go:149] local asset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> 221382.pem in /etc/ssl/certs
	I0927 01:41:13.568109   69333 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 01:41:13.578453   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /etc/ssl/certs/221382.pem (1708 bytes)
	I0927 01:41:13.603888   69333 start.go:296] duration metric: took 127.316429ms for postStartSetup
	I0927 01:41:13.603924   69333 fix.go:56] duration metric: took 20.803606957s for fixHost
	I0927 01:41:13.603948   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:13.606500   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.606921   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:13.606949   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.607189   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:41:13.607419   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:13.607600   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:13.607726   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:41:13.608048   69333 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:13.608234   69333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I0927 01:41:13.608245   69333 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 01:41:13.708261   69333 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727401273.683707076
	
	I0927 01:41:13.708284   69333 fix.go:216] guest clock: 1727401273.683707076
	I0927 01:41:13.708293   69333 fix.go:229] Guest: 2024-09-27 01:41:13.683707076 +0000 UTC Remote: 2024-09-27 01:41:13.603929237 +0000 UTC m=+226.291347697 (delta=79.777839ms)
	I0927 01:41:13.708348   69333 fix.go:200] guest clock delta is within tolerance: 79.777839ms
	I0927 01:41:13.708357   69333 start.go:83] releasing machines lock for "old-k8s-version-612261", held for 20.90807118s
	I0927 01:41:13.708392   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	I0927 01:41:13.708665   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetIP
	I0927 01:41:13.711474   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.711873   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:13.711905   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.712035   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	I0927 01:41:13.712569   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	I0927 01:41:13.712748   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	I0927 01:41:13.712832   69333 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 01:41:13.712878   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:13.712949   69333 ssh_runner.go:195] Run: cat /version.json
	I0927 01:41:13.712971   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:13.715681   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.715820   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.716024   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:13.716043   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.716200   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:13.716225   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.716235   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:41:13.716370   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:41:13.716487   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:13.716548   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:13.716622   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:41:13.716728   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:41:13.716779   69333 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/old-k8s-version-612261/id_rsa Username:docker}
	I0927 01:41:13.716859   69333 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/old-k8s-version-612261/id_rsa Username:docker}
	I0927 01:41:13.826638   69333 ssh_runner.go:195] Run: systemctl --version
	I0927 01:41:13.832901   69333 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 01:41:13.986132   69333 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 01:41:13.992644   69333 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 01:41:13.992728   69333 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 01:41:14.008962   69333 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0927 01:41:14.008991   69333 start.go:495] detecting cgroup driver to use...
	I0927 01:41:14.009051   69333 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 01:41:14.025047   69333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 01:41:14.040807   69333 docker.go:217] disabling cri-docker service (if available) ...
	I0927 01:41:14.040857   69333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 01:41:14.055972   69333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 01:41:14.072654   69333 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 01:41:14.210869   69333 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 01:41:14.403536   69333 docker.go:233] disabling docker service ...
	I0927 01:41:14.403596   69333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 01:41:14.421549   69333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 01:41:14.436288   69333 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 01:41:14.569634   69333 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 01:41:14.701517   69333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 01:41:14.716794   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 01:41:14.740622   69333 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0927 01:41:14.740685   69333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:14.756563   69333 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 01:41:14.756626   69333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:14.768952   69333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:14.781314   69333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:14.793578   69333 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 01:41:14.806302   69333 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 01:41:14.822967   69333 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0927 01:41:14.823036   69333 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0927 01:41:14.837673   69333 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 01:41:14.848486   69333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:41:14.988181   69333 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0927 01:41:15.100581   69333 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 01:41:15.100664   69333 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 01:41:15.105816   69333 start.go:563] Will wait 60s for crictl version
	I0927 01:41:15.105883   69333 ssh_runner.go:195] Run: which crictl
	I0927 01:41:15.110375   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 01:41:15.154944   69333 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0927 01:41:15.155039   69333 ssh_runner.go:195] Run: crio --version
	I0927 01:41:15.188172   69333 ssh_runner.go:195] Run: crio --version
	I0927 01:41:15.220410   69333 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0927 01:41:11.033747   69234 pod_ready.go:103] pod "kube-scheduler-embed-certs-245911" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:13.038930   69234 pod_ready.go:103] pod "kube-scheduler-embed-certs-245911" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:15.035610   69234 pod_ready.go:93] pod "kube-scheduler-embed-certs-245911" in "kube-system" namespace has status "Ready":"True"
	I0927 01:41:15.035636   69234 pod_ready.go:82] duration metric: took 6.008237321s for pod "kube-scheduler-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:15.035645   69234 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:15.221508   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetIP
	I0927 01:41:15.224474   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:15.224855   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:15.224884   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:15.225126   69333 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0927 01:41:15.229555   69333 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 01:41:15.244862   69333 kubeadm.go:883] updating cluster {Name:old-k8s-version-612261 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-612261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.129 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 01:41:15.245007   69333 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0927 01:41:15.245070   69333 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 01:41:15.298422   69333 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0927 01:41:15.298501   69333 ssh_runner.go:195] Run: which lz4
	I0927 01:41:15.302771   69333 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0927 01:41:15.307360   69333 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0927 01:41:15.307398   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0927 01:41:17.053272   69333 crio.go:462] duration metric: took 1.750548806s to copy over tarball
	I0927 01:41:17.053354   69333 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0927 01:41:13.732810   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .Start
	I0927 01:41:13.732979   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Ensuring networks are active...
	I0927 01:41:13.733749   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Ensuring network default is active
	I0927 01:41:13.734076   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Ensuring network mk-default-k8s-diff-port-368295 is active
	I0927 01:41:13.734425   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Getting domain xml...
	I0927 01:41:13.734997   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Creating domain...
	I0927 01:41:15.073415   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting to get IP...
	I0927 01:41:15.074278   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:15.074774   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:15.074850   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:15.074757   70444 retry.go:31] will retry after 231.356774ms: waiting for machine to come up
	I0927 01:41:15.308474   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:15.309030   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:15.309058   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:15.308989   70444 retry.go:31] will retry after 252.762152ms: waiting for machine to come up
	I0927 01:41:15.563638   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:15.564173   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:15.564212   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:15.564130   70444 retry.go:31] will retry after 341.067908ms: waiting for machine to come up
	I0927 01:41:15.906735   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:15.907138   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:15.907168   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:15.907091   70444 retry.go:31] will retry after 385.816363ms: waiting for machine to come up
	I0927 01:41:16.294523   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:16.295246   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:16.295268   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:16.295192   70444 retry.go:31] will retry after 575.812339ms: waiting for machine to come up
	I0927 01:41:16.873050   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:16.873574   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:16.873601   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:16.873520   70444 retry.go:31] will retry after 661.914855ms: waiting for machine to come up
	I0927 01:41:17.537039   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:17.537516   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:17.537544   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:17.537467   70444 retry.go:31] will retry after 959.195147ms: waiting for machine to come up
	I0927 01:41:17.043983   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:19.543159   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:20.066231   69333 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.012846531s)
	I0927 01:41:20.066257   69333 crio.go:469] duration metric: took 3.012954388s to extract the tarball
	I0927 01:41:20.066265   69333 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0927 01:41:20.112486   69333 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 01:41:20.152620   69333 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0927 01:41:20.152647   69333 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0927 01:41:20.152723   69333 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:41:20.152754   69333 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0927 01:41:20.152789   69333 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 01:41:20.152813   69333 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0927 01:41:20.152816   69333 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0927 01:41:20.152763   69333 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0927 01:41:20.152938   69333 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0927 01:41:20.152940   69333 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0927 01:41:20.154747   69333 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0927 01:41:20.154752   69333 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 01:41:20.154886   69333 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:41:20.154914   69333 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0927 01:41:20.154914   69333 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0927 01:41:20.154925   69333 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0927 01:41:20.154930   69333 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0927 01:41:20.154934   69333 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0927 01:41:20.316172   69333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0927 01:41:20.316352   69333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0927 01:41:20.319986   69333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0927 01:41:20.331224   69333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 01:41:20.342010   69333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0927 01:41:20.355732   69333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0927 01:41:20.355739   69333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0927 01:41:20.446420   69333 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0927 01:41:20.446477   69333 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0927 01:41:20.446529   69333 ssh_runner.go:195] Run: which crictl
	I0927 01:41:20.469134   69333 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0927 01:41:20.469183   69333 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0927 01:41:20.469231   69333 ssh_runner.go:195] Run: which crictl
	I0927 01:41:20.470229   69333 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0927 01:41:20.470264   69333 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0927 01:41:20.470310   69333 ssh_runner.go:195] Run: which crictl
	I0927 01:41:20.477952   69333 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0927 01:41:20.477991   69333 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 01:41:20.478034   69333 ssh_runner.go:195] Run: which crictl
	I0927 01:41:20.519340   69333 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0927 01:41:20.519391   69333 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0927 01:41:20.519454   69333 ssh_runner.go:195] Run: which crictl
	I0927 01:41:20.538237   69333 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0927 01:41:20.538256   69333 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0927 01:41:20.538293   69333 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0927 01:41:20.538298   69333 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0927 01:41:20.538338   69333 ssh_runner.go:195] Run: which crictl
	I0927 01:41:20.538343   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0927 01:41:20.538338   69333 ssh_runner.go:195] Run: which crictl
	I0927 01:41:20.538343   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0927 01:41:20.538389   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0927 01:41:20.538438   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 01:41:20.538489   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0927 01:41:20.656448   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0927 01:41:20.656508   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0927 01:41:20.656542   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0927 01:41:20.656573   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0927 01:41:20.656635   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0927 01:41:20.656704   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0927 01:41:20.656740   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 01:41:20.818479   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0927 01:41:20.818494   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0927 01:41:20.818581   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 01:41:20.878325   69333 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0927 01:41:20.878480   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0927 01:41:20.878494   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0927 01:41:20.878585   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0927 01:41:20.885061   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0927 01:41:20.885168   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0927 01:41:20.898628   69333 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0927 01:41:20.994147   69333 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0927 01:41:20.994175   69333 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0927 01:41:20.994211   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0927 01:41:21.016210   69333 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0927 01:41:21.016289   69333 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0927 01:41:21.035051   69333 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0927 01:41:21.374949   69333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:41:21.520726   69333 cache_images.go:92] duration metric: took 1.368058485s to LoadCachedImages
	W0927 01:41:21.520817   69333 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0927 01:41:21.520833   69333 kubeadm.go:934] updating node { 192.168.72.129 8443 v1.20.0 crio true true} ...
	I0927 01:41:21.520951   69333 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-612261 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.129
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-612261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 01:41:21.521035   69333 ssh_runner.go:195] Run: crio config
	I0927 01:41:21.571651   69333 cni.go:84] Creating CNI manager for ""
	I0927 01:41:21.571677   69333 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:41:21.571688   69333 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 01:41:21.571712   69333 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.129 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-612261 NodeName:old-k8s-version-612261 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.129"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.129 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0927 01:41:21.571882   69333 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.129
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-612261"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.129
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.129"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0927 01:41:21.571958   69333 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0927 01:41:21.582735   69333 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 01:41:21.582802   69333 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0927 01:41:21.593329   69333 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0927 01:41:21.615040   69333 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 01:41:21.636564   69333 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0927 01:41:21.657275   69333 ssh_runner.go:195] Run: grep 192.168.72.129	control-plane.minikube.internal$ /etc/hosts
	I0927 01:41:21.661675   69333 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.129	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 01:41:21.674587   69333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:41:21.814300   69333 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 01:41:21.834133   69333 certs.go:68] Setting up /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261 for IP: 192.168.72.129
	I0927 01:41:21.834163   69333 certs.go:194] generating shared ca certs ...
	I0927 01:41:21.834182   69333 certs.go:226] acquiring lock for ca certs: {Name:mkdfc5b7e93f77f5ae72cc653545624244421aa5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:41:21.834380   69333 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key
	I0927 01:41:21.834437   69333 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key
	I0927 01:41:21.834450   69333 certs.go:256] generating profile certs ...
	I0927 01:41:21.834558   69333 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/client.key
	I0927 01:41:21.834630   69333 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/apiserver.key.a362196e
	I0927 01:41:21.834676   69333 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/proxy-client.key
	I0927 01:41:21.834819   69333 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem (1338 bytes)
	W0927 01:41:21.834859   69333 certs.go:480] ignoring /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138_empty.pem, impossibly tiny 0 bytes
	I0927 01:41:21.834873   69333 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem (1671 bytes)
	I0927 01:41:21.834904   69333 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem (1078 bytes)
	I0927 01:41:21.834937   69333 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem (1123 bytes)
	I0927 01:41:21.834973   69333 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem (1675 bytes)
	I0927 01:41:21.835023   69333 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem (1708 bytes)
	I0927 01:41:21.835864   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 01:41:21.866955   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0927 01:41:21.902991   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 01:41:21.928957   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0927 01:41:21.957505   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0927 01:41:21.984055   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0927 01:41:22.013191   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 01:41:22.041745   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0927 01:41:22.069680   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 01:41:22.104139   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem --> /usr/share/ca-certificates/22138.pem (1338 bytes)
	I0927 01:41:22.130348   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /usr/share/ca-certificates/221382.pem (1708 bytes)
	I0927 01:41:22.157976   69333 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 01:41:22.177818   69333 ssh_runner.go:195] Run: openssl version
	I0927 01:41:22.184389   69333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 01:41:22.196133   69333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:41:22.201047   69333 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:16 /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:41:22.201120   69333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:41:22.207245   69333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 01:41:22.219033   69333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22138.pem && ln -fs /usr/share/ca-certificates/22138.pem /etc/ssl/certs/22138.pem"
	I0927 01:41:22.230331   69333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22138.pem
	I0927 01:41:22.235000   69333 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 00:32 /usr/share/ca-certificates/22138.pem
	I0927 01:41:22.235054   69333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22138.pem
	I0927 01:41:22.240963   69333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22138.pem /etc/ssl/certs/51391683.0"
	I0927 01:41:22.252022   69333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221382.pem && ln -fs /usr/share/ca-certificates/221382.pem /etc/ssl/certs/221382.pem"
	I0927 01:41:22.263197   69333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221382.pem
	I0927 01:41:22.268023   69333 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 00:32 /usr/share/ca-certificates/221382.pem
	I0927 01:41:22.268100   69333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221382.pem
	I0927 01:41:22.274086   69333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221382.pem /etc/ssl/certs/3ec20f2e.0"
	I0927 01:41:22.285387   69333 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 01:41:22.290487   69333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0927 01:41:22.296953   69333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0927 01:41:22.303095   69333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0927 01:41:22.310001   69333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0927 01:41:22.316346   69333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0927 01:41:22.322559   69333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0927 01:41:22.328931   69333 kubeadm.go:392] StartCluster: {Name:old-k8s-version-612261 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-612261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.129 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 01:41:22.329015   69333 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0927 01:41:22.329081   69333 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 01:41:18.498695   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:18.499234   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:18.499261   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:18.499187   70444 retry.go:31] will retry after 932.004828ms: waiting for machine to come up
	I0927 01:41:19.432487   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:19.432885   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:19.432912   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:19.432844   70444 retry.go:31] will retry after 1.595543978s: waiting for machine to come up
	I0927 01:41:21.030048   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:21.030572   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:21.030598   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:21.030526   70444 retry.go:31] will retry after 1.93010855s: waiting for machine to come up
	I0927 01:41:22.963833   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:22.964303   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:22.964334   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:22.964254   70444 retry.go:31] will retry after 2.81720725s: waiting for machine to come up
	I0927 01:41:21.757497   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:24.043965   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:22.368989   69333 cri.go:89] found id: ""
	I0927 01:41:22.369059   69333 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0927 01:41:22.379818   69333 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0927 01:41:22.379841   69333 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0927 01:41:22.379897   69333 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0927 01:41:22.392278   69333 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0927 01:41:22.393236   69333 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-612261" does not appear in /home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 01:41:22.393856   69333 kubeconfig.go:62] /home/jenkins/minikube-integration/19711-14935/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-612261" cluster setting kubeconfig missing "old-k8s-version-612261" context setting]
	I0927 01:41:22.394733   69333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/kubeconfig: {Name:mke01ed683bdb96463571316956510763878395f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:41:22.404625   69333 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0927 01:41:22.415376   69333 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.129
	I0927 01:41:22.415414   69333 kubeadm.go:1160] stopping kube-system containers ...
	I0927 01:41:22.415427   69333 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0927 01:41:22.415487   69333 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 01:41:22.452749   69333 cri.go:89] found id: ""
	I0927 01:41:22.452829   69333 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0927 01:41:22.469164   69333 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 01:41:22.480018   69333 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 01:41:22.480038   69333 kubeadm.go:157] found existing configuration files:
	
	I0927 01:41:22.480092   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 01:41:22.490501   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 01:41:22.490562   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 01:41:22.500330   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 01:41:22.509612   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 01:41:22.509681   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 01:41:22.520064   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 01:41:22.529864   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 01:41:22.529921   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 01:41:22.540563   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 01:41:22.556739   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 01:41:22.556797   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 01:41:22.572858   69333 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 01:41:22.583366   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:22.709007   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:23.468461   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:23.714890   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:23.865174   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:23.959048   69333 api_server.go:52] waiting for apiserver process to appear ...
	I0927 01:41:23.959140   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:24.460104   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:24.959462   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:25.460143   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:25.959473   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:26.460051   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:26.960121   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:25.784030   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:25.784429   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:25.784456   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:25.784393   70444 retry.go:31] will retry after 2.844872797s: waiting for machine to come up
	I0927 01:41:26.544176   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:29.042297   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:27.459491   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:27.959944   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:28.459636   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:28.959766   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:29.459410   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:29.959439   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:30.460176   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:30.959810   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:31.459492   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:31.959966   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:28.632445   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:28.632905   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:28.632930   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:28.632866   70444 retry.go:31] will retry after 3.566248996s: waiting for machine to come up
	I0927 01:41:32.200424   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.200804   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Found IP for machine: 192.168.61.83
	I0927 01:41:32.200832   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has current primary IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.200841   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Reserving static IP address...
	I0927 01:41:32.201137   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-368295", mac: "52:54:00:a3:b6:7a", ip: "192.168.61.83"} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:32.201151   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Reserved static IP address: 192.168.61.83
	I0927 01:41:32.201164   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | skip adding static IP to network mk-default-k8s-diff-port-368295 - found existing host DHCP lease matching {name: "default-k8s-diff-port-368295", mac: "52:54:00:a3:b6:7a", ip: "192.168.61.83"}
	I0927 01:41:32.201177   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | Getting to WaitForSSH function...
	I0927 01:41:32.201185   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for SSH to be available...
	I0927 01:41:32.203258   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.203542   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:32.203571   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.203674   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | Using SSH client type: external
	I0927 01:41:32.203704   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | Using SSH private key: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/default-k8s-diff-port-368295/id_rsa (-rw-------)
	I0927 01:41:32.203743   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.83 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19711-14935/.minikube/machines/default-k8s-diff-port-368295/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 01:41:32.203763   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | About to run SSH command:
	I0927 01:41:32.203783   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | exit 0
	I0927 01:41:32.327131   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | SSH cmd err, output: <nil>: 
	I0927 01:41:32.327499   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetConfigRaw
	I0927 01:41:32.328140   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetIP
	I0927 01:41:32.330387   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.330769   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:32.330801   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.331054   69534 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/config.json ...
	I0927 01:41:32.331257   69534 machine.go:93] provisionDockerMachine start ...
	I0927 01:41:32.331279   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:41:32.331505   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:41:32.333514   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.333799   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:32.333825   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.333940   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:41:32.334101   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:32.334267   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:32.334359   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:41:32.334509   69534 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:32.334700   69534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.83 22 <nil> <nil>}
	I0927 01:41:32.334709   69534 main.go:141] libmachine: About to run SSH command:
	hostname
	I0927 01:41:32.439884   69534 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0927 01:41:32.439921   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetMachineName
	I0927 01:41:32.440126   69534 buildroot.go:166] provisioning hostname "default-k8s-diff-port-368295"
	I0927 01:41:32.440149   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetMachineName
	I0927 01:41:32.440346   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:41:32.443385   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.443707   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:32.443742   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.443917   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:41:32.444093   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:32.444266   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:32.444427   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:41:32.444606   69534 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:32.444793   69534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.83 22 <nil> <nil>}
	I0927 01:41:32.444809   69534 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-368295 && echo "default-k8s-diff-port-368295" | sudo tee /etc/hostname
	I0927 01:41:32.570447   69534 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-368295
	
	I0927 01:41:32.570479   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:41:32.573194   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.573472   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:32.573512   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.573699   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:41:32.573942   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:32.574097   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:32.574261   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:41:32.574430   69534 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:32.574623   69534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.83 22 <nil> <nil>}
	I0927 01:41:32.574647   69534 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-368295' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-368295/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-368295' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 01:41:32.693082   69534 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 01:41:32.693107   69534 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19711-14935/.minikube CaCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19711-14935/.minikube}
	I0927 01:41:32.693140   69534 buildroot.go:174] setting up certificates
	I0927 01:41:32.693149   69534 provision.go:84] configureAuth start
	I0927 01:41:32.693160   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetMachineName
	I0927 01:41:32.693407   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetIP
	I0927 01:41:32.696156   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.696498   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:32.696522   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.696693   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:41:32.698894   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.699229   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:32.699257   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.699399   69534 provision.go:143] copyHostCerts
	I0927 01:41:32.699451   69534 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem, removing ...
	I0927 01:41:32.699464   69534 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem
	I0927 01:41:32.699530   69534 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem (1078 bytes)
	I0927 01:41:32.699639   69534 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem, removing ...
	I0927 01:41:32.699653   69534 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem
	I0927 01:41:32.699681   69534 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem (1123 bytes)
	I0927 01:41:32.699751   69534 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem, removing ...
	I0927 01:41:32.699761   69534 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem
	I0927 01:41:32.699785   69534 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem (1675 bytes)
	I0927 01:41:32.699848   69534 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-368295 san=[127.0.0.1 192.168.61.83 default-k8s-diff-port-368295 localhost minikube]
	I0927 01:41:32.887727   69534 provision.go:177] copyRemoteCerts
	I0927 01:41:32.887792   69534 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 01:41:32.887825   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:41:32.890435   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.890768   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:32.890797   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.890956   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:41:32.891128   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:32.891252   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:41:32.891373   69534 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/default-k8s-diff-port-368295/id_rsa Username:docker}
	I0927 01:41:32.973705   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0927 01:41:32.998434   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0927 01:41:33.023552   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0927 01:41:33.048884   69534 provision.go:87] duration metric: took 355.724209ms to configureAuth
	I0927 01:41:33.048910   69534 buildroot.go:189] setting minikube options for container-runtime
	I0927 01:41:33.049080   69534 config.go:182] Loaded profile config "default-k8s-diff-port-368295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 01:41:33.049149   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:41:33.051738   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.052080   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:33.052133   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.052364   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:41:33.052578   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:33.052726   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:33.052844   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:41:33.053031   69534 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:33.053265   69534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.83 22 <nil> <nil>}
	I0927 01:41:33.053283   69534 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 01:41:33.292126   69534 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 01:41:33.292148   69534 machine.go:96] duration metric: took 960.878234ms to provisionDockerMachine
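Editor's note: the provisioning step above writes a CRIO_MINIKUBE_OPTIONS drop-in to /etc/sysconfig/crio.minikube over SSH and restarts CRI-O. Below is a minimal Go sketch of that pattern using golang.org/x/crypto/ssh; the key path, address, and user are placeholders, and this is not minikube's provisioner code.

	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Placeholder key path; the real test uses the per-machine id_rsa.
		key, err := os.ReadFile("/path/to/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		client, err := ssh.Dial("tcp", "192.168.61.83:22", &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs only
		})
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()

		sess, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer sess.Close()

		// Write the sysconfig drop-in and restart the runtime, mirroring the
		// command captured in the log above.
		cmd := `sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`
		out, err := sess.CombinedOutput(cmd)
		fmt.Printf("%s", out)
		if err != nil {
			log.Fatal(err)
		}
	}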
	I0927 01:41:33.292159   69534 start.go:293] postStartSetup for "default-k8s-diff-port-368295" (driver="kvm2")
	I0927 01:41:33.292171   69534 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 01:41:33.292188   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:41:33.292511   69534 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 01:41:33.292539   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:41:33.295356   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.295724   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:33.295759   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.295936   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:41:33.296100   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:33.296314   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:41:33.296498   69534 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/default-k8s-diff-port-368295/id_rsa Username:docker}
	I0927 01:41:33.528391   68676 start.go:364] duration metric: took 56.042651871s to acquireMachinesLock for "no-preload-521072"
	I0927 01:41:33.528435   68676 start.go:96] Skipping create...Using existing machine configuration
	I0927 01:41:33.528445   68676 fix.go:54] fixHost starting: 
	I0927 01:41:33.528858   68676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:41:33.528890   68676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:41:33.547391   68676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38947
	I0927 01:41:33.547852   68676 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:41:33.548343   68676 main.go:141] libmachine: Using API Version  1
	I0927 01:41:33.548371   68676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:41:33.548713   68676 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:41:33.548907   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	I0927 01:41:33.549064   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetState
	I0927 01:41:33.550898   68676 fix.go:112] recreateIfNeeded on no-preload-521072: state=Stopped err=<nil>
	I0927 01:41:33.550923   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	W0927 01:41:33.551084   68676 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 01:41:33.553090   68676 out.go:177] * Restarting existing kvm2 VM for "no-preload-521072" ...
	I0927 01:41:33.554429   68676 main.go:141] libmachine: (no-preload-521072) Calling .Start
	I0927 01:41:33.554613   68676 main.go:141] libmachine: (no-preload-521072) Ensuring networks are active...
	I0927 01:41:33.555401   68676 main.go:141] libmachine: (no-preload-521072) Ensuring network default is active
	I0927 01:41:33.555858   68676 main.go:141] libmachine: (no-preload-521072) Ensuring network mk-no-preload-521072 is active
	I0927 01:41:33.556350   68676 main.go:141] libmachine: (no-preload-521072) Getting domain xml...
	I0927 01:41:33.557057   68676 main.go:141] libmachine: (no-preload-521072) Creating domain...
	I0927 01:41:34.830052   68676 main.go:141] libmachine: (no-preload-521072) Waiting to get IP...
	I0927 01:41:34.830807   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:34.831255   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:34.831340   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:34.831244   70637 retry.go:31] will retry after 267.615794ms: waiting for machine to come up
	I0927 01:41:33.378613   69534 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 01:41:33.383491   69534 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 01:41:33.383517   69534 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/addons for local assets ...
	I0927 01:41:33.383590   69534 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/files for local assets ...
	I0927 01:41:33.383695   69534 filesync.go:149] local asset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> 221382.pem in /etc/ssl/certs
	I0927 01:41:33.383810   69534 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 01:41:33.395134   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /etc/ssl/certs/221382.pem (1708 bytes)
	I0927 01:41:33.420441   69534 start.go:296] duration metric: took 128.270045ms for postStartSetup
	I0927 01:41:33.420481   69534 fix.go:56] duration metric: took 19.711948387s for fixHost
	I0927 01:41:33.420505   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:41:33.422860   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.423170   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:33.423198   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.423333   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:41:33.423517   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:33.423676   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:33.423820   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:41:33.423987   69534 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:33.424139   69534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.83 22 <nil> <nil>}
	I0927 01:41:33.424153   69534 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 01:41:33.528250   69534 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727401293.484458762
	
	I0927 01:41:33.528271   69534 fix.go:216] guest clock: 1727401293.484458762
	I0927 01:41:33.528278   69534 fix.go:229] Guest: 2024-09-27 01:41:33.484458762 +0000 UTC Remote: 2024-09-27 01:41:33.420486926 +0000 UTC m=+225.118319167 (delta=63.971836ms)
	I0927 01:41:33.528297   69534 fix.go:200] guest clock delta is within tolerance: 63.971836ms
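Editor's note: the fix.go lines above compare the guest clock (read with `date +%s.%N`) against the host clock and accept the drift when it is within tolerance. A minimal sketch of that comparison follows; the 2-second tolerance is an assumption for illustration, not minikube's actual threshold.

	package main

	import (
		"fmt"
		"math"
		"strconv"
		"strings"
		"time"
	)

	// guestClockDelta parses the output of `date +%s.%N` captured from the guest
	// and returns how far it is from the local clock. Hypothetical helper, not
	// minikube's fix.go implementation.
	func guestClockDelta(guestOut string) (time.Duration, error) {
		secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
		if err != nil {
			return 0, fmt.Errorf("parsing guest clock %q: %w", guestOut, err)
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return time.Since(guest), nil
	}

	func main() {
		// Timestamp taken from the log line above; run "now", the delta will be large.
		delta, err := guestClockDelta("1727401293.484458762\n")
		if err != nil {
			panic(err)
		}
		const tolerance = 2 * time.Second // assumed threshold
		if math.Abs(float64(delta)) <= float64(tolerance) {
			fmt.Printf("guest clock delta %v is within tolerance\n", delta)
		} else {
			fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
		}
	}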
	I0927 01:41:33.528303   69534 start.go:83] releasing machines lock for "default-k8s-diff-port-368295", held for 19.819799777s
	I0927 01:41:33.528328   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:41:33.528623   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetIP
	I0927 01:41:33.531282   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.531692   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:33.531724   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.531914   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:41:33.532476   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:41:33.532651   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:41:33.532742   69534 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 01:41:33.532784   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:41:33.532868   69534 ssh_runner.go:195] Run: cat /version.json
	I0927 01:41:33.532890   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:41:33.535432   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.535710   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.535820   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:33.535843   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.536030   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:41:33.536128   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:33.536153   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.536195   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:33.536351   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:41:33.536367   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:41:33.536513   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:33.536508   69534 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/default-k8s-diff-port-368295/id_rsa Username:docker}
	I0927 01:41:33.536634   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:41:33.536815   69534 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/default-k8s-diff-port-368295/id_rsa Username:docker}
	I0927 01:41:33.644679   69534 ssh_runner.go:195] Run: systemctl --version
	I0927 01:41:33.652386   69534 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 01:41:33.803821   69534 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 01:41:33.810620   69534 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 01:41:33.810678   69534 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 01:41:33.826938   69534 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0927 01:41:33.826963   69534 start.go:495] detecting cgroup driver to use...
	I0927 01:41:33.827028   69534 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 01:41:33.844572   69534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 01:41:33.859851   69534 docker.go:217] disabling cri-docker service (if available) ...
	I0927 01:41:33.859916   69534 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 01:41:33.874262   69534 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 01:41:33.888460   69534 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 01:41:34.011008   69534 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 01:41:34.161761   69534 docker.go:233] disabling docker service ...
	I0927 01:41:34.161855   69534 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 01:41:34.180621   69534 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 01:41:34.198472   69534 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 01:41:34.340892   69534 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 01:41:34.483708   69534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 01:41:34.498745   69534 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 01:41:34.518957   69534 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0927 01:41:34.519026   69534 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:34.530123   69534 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 01:41:34.530172   69534 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:34.545035   69534 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:34.555944   69534 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:34.566852   69534 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 01:41:34.577676   69534 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:34.589078   69534 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:34.608131   69534 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:34.619482   69534 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 01:41:34.629119   69534 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0927 01:41:34.629180   69534 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0927 01:41:34.643997   69534 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
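Editor's note: the two commands above recover from the missing bridge-nf-call-iptables sysctl by loading br_netfilter and then enabling IPv4 forwarding. A hedged Go sketch of the same fallback follows (it must run as root inside the VM, and it is not minikube's actual code).

	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
	)

	// ensureBridgeNetfilter mirrors the fallback visible in the log: if the
	// bridge-nf-call-iptables sysctl is missing, load br_netfilter, then make
	// sure IPv4 forwarding is on.
	func ensureBridgeNetfilter() error {
		const sysctlPath = "/proc/sys/net/bridge/bridge-nf-call-iptables"
		if _, err := os.Stat(sysctlPath); os.IsNotExist(err) {
			// Same recovery as `sudo modprobe br_netfilter` in the log.
			if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
				return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
			}
		}
		// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
		return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644)
	}

	func main() {
		if err := ensureBridgeNetfilter(); err != nil {
			log.Fatal(err)
		}
		fmt.Println("bridge netfilter and ip_forward configured")
	}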
	I0927 01:41:34.656396   69534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:41:34.791856   69534 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0927 01:41:34.884774   69534 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 01:41:34.884831   69534 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 01:41:34.889590   69534 start.go:563] Will wait 60s for crictl version
	I0927 01:41:34.889633   69534 ssh_runner.go:195] Run: which crictl
	I0927 01:41:34.893330   69534 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 01:41:34.930031   69534 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0927 01:41:34.930141   69534 ssh_runner.go:195] Run: crio --version
	I0927 01:41:34.960912   69534 ssh_runner.go:195] Run: crio --version
	I0927 01:41:34.996060   69534 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0927 01:41:31.542525   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:33.546389   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:32.459727   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:32.959527   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:33.459351   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:33.959903   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:34.459444   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:34.959423   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:35.459435   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:35.959447   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:36.460148   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:36.959874   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:34.997457   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetIP
	I0927 01:41:35.000691   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:35.001081   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:35.001127   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:35.001322   69534 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0927 01:41:35.006115   69534 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 01:41:35.019817   69534 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-368295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-368295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.83 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 01:41:35.019983   69534 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 01:41:35.020045   69534 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 01:41:35.062533   69534 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0927 01:41:35.062595   69534 ssh_runner.go:195] Run: which lz4
	I0927 01:41:35.066897   69534 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0927 01:41:35.071178   69534 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0927 01:41:35.071216   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0927 01:41:36.563774   69534 crio.go:462] duration metric: took 1.496913722s to copy over tarball
	I0927 01:41:36.563866   69534 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0927 01:41:35.100818   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:35.101327   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:35.101354   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:35.101290   70637 retry.go:31] will retry after 244.193758ms: waiting for machine to come up
	I0927 01:41:35.347021   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:35.347674   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:35.347714   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:35.347650   70637 retry.go:31] will retry after 361.672884ms: waiting for machine to come up
	I0927 01:41:35.711206   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:35.711755   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:35.711788   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:35.711730   70637 retry.go:31] will retry after 406.084841ms: waiting for machine to come up
	I0927 01:41:36.119494   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:36.120026   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:36.120067   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:36.119978   70637 retry.go:31] will retry after 497.966133ms: waiting for machine to come up
	I0927 01:41:36.619859   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:36.620400   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:36.620428   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:36.620362   70637 retry.go:31] will retry after 765.975603ms: waiting for machine to come up
	I0927 01:41:37.387821   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:37.388502   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:37.388537   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:37.388453   70637 retry.go:31] will retry after 828.567445ms: waiting for machine to come up
	I0927 01:41:38.218462   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:38.218940   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:38.218974   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:38.218803   70637 retry.go:31] will retry after 1.269155563s: waiting for machine to come up
	I0927 01:41:39.489076   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:39.489557   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:39.489583   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:39.489514   70637 retry.go:31] will retry after 1.666481574s: waiting for machine to come up
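Editor's note: the retry.go lines above poll libvirt for the restarted VM's DHCP lease, sleeping a little longer after each miss. A rough sketch of that wait-for-IP loop follows; the growth factor, cap, and timeout are assumptions, not the values retry.go uses.

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// waitForIP polls lookup until it returns an address, backing off after each
	// failed attempt, roughly like the "will retry after ..." lines above.
	func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		wait := 250 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookup(); err == nil {
				return ip, nil
			}
			fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
			time.Sleep(wait)
			wait *= 2
			if wait > 5*time.Second {
				wait = 5 * time.Second // assumed cap on the backoff
			}
		}
		return "", errors.New("timed out waiting for machine IP")
	}

	func main() {
		attempts := 0
		ip, err := waitForIP(func() (string, error) {
			attempts++
			if attempts < 4 {
				return "", errors.New("no DHCP lease yet")
			}
			return "192.168.61.84", nil // fabricated address for the demo lookup
		}, 30*time.Second)
		fmt.Println(ip, err)
	}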
	I0927 01:41:35.554859   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:38.043285   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:40.542499   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:37.459766   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:37.959594   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:38.459971   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:38.960093   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:39.459983   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:39.959812   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:40.460220   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:40.959253   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:41.459829   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:41.959864   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:38.667451   69534 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.10354947s)
	I0927 01:41:38.667477   69534 crio.go:469] duration metric: took 2.103669113s to extract the tarball
	I0927 01:41:38.667487   69534 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0927 01:41:38.704217   69534 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 01:41:38.747162   69534 crio.go:514] all images are preloaded for cri-o runtime.
	I0927 01:41:38.747187   69534 cache_images.go:84] Images are preloaded, skipping loading
	I0927 01:41:38.747197   69534 kubeadm.go:934] updating node { 192.168.61.83 8444 v1.31.1 crio true true} ...
	I0927 01:41:38.747323   69534 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-368295 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.83
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-368295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 01:41:38.747406   69534 ssh_runner.go:195] Run: crio config
	I0927 01:41:38.796481   69534 cni.go:84] Creating CNI manager for ""
	I0927 01:41:38.796510   69534 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:41:38.796522   69534 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 01:41:38.796549   69534 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.83 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-368295 NodeName:default-k8s-diff-port-368295 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.83"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.83 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0927 01:41:38.796726   69534 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.83
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-368295"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.83
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.83"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0927 01:41:38.796806   69534 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 01:41:38.807445   69534 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 01:41:38.807513   69534 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0927 01:41:38.817368   69534 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0927 01:41:38.834181   69534 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 01:41:38.851650   69534 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0927 01:41:38.869822   69534 ssh_runner.go:195] Run: grep 192.168.61.83	control-plane.minikube.internal$ /etc/hosts
	I0927 01:41:38.873868   69534 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.83	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 01:41:38.886422   69534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:41:39.022075   69534 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 01:41:39.038948   69534 certs.go:68] Setting up /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295 for IP: 192.168.61.83
	I0927 01:41:39.038982   69534 certs.go:194] generating shared ca certs ...
	I0927 01:41:39.039004   69534 certs.go:226] acquiring lock for ca certs: {Name:mkdfc5b7e93f77f5ae72cc653545624244421aa5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:41:39.039174   69534 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key
	I0927 01:41:39.039241   69534 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key
	I0927 01:41:39.039253   69534 certs.go:256] generating profile certs ...
	I0927 01:41:39.039402   69534 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/client.key
	I0927 01:41:39.039490   69534 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/apiserver.key.2edc0267
	I0927 01:41:39.039549   69534 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/proxy-client.key
	I0927 01:41:39.039701   69534 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem (1338 bytes)
	W0927 01:41:39.039773   69534 certs.go:480] ignoring /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138_empty.pem, impossibly tiny 0 bytes
	I0927 01:41:39.039789   69534 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem (1671 bytes)
	I0927 01:41:39.039825   69534 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem (1078 bytes)
	I0927 01:41:39.039860   69534 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem (1123 bytes)
	I0927 01:41:39.039889   69534 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem (1675 bytes)
	I0927 01:41:39.039950   69534 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem (1708 bytes)
	I0927 01:41:39.040814   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 01:41:39.080130   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0927 01:41:39.133365   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 01:41:39.169238   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0927 01:41:39.196619   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0927 01:41:39.227667   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0927 01:41:39.255240   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 01:41:39.280602   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0927 01:41:39.305695   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 01:41:39.329559   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem --> /usr/share/ca-certificates/22138.pem (1338 bytes)
	I0927 01:41:39.358555   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /usr/share/ca-certificates/221382.pem (1708 bytes)
	I0927 01:41:39.387030   69534 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 01:41:39.404111   69534 ssh_runner.go:195] Run: openssl version
	I0927 01:41:39.409879   69534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 01:41:39.420542   69534 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:41:39.425094   69534 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:16 /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:41:39.425151   69534 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:41:39.431225   69534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 01:41:39.442237   69534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22138.pem && ln -fs /usr/share/ca-certificates/22138.pem /etc/ssl/certs/22138.pem"
	I0927 01:41:39.453229   69534 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22138.pem
	I0927 01:41:39.458040   69534 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 00:32 /usr/share/ca-certificates/22138.pem
	I0927 01:41:39.458110   69534 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22138.pem
	I0927 01:41:39.464177   69534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22138.pem /etc/ssl/certs/51391683.0"
	I0927 01:41:39.475582   69534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221382.pem && ln -fs /usr/share/ca-certificates/221382.pem /etc/ssl/certs/221382.pem"
	I0927 01:41:39.486911   69534 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221382.pem
	I0927 01:41:39.491843   69534 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 00:32 /usr/share/ca-certificates/221382.pem
	I0927 01:41:39.491898   69534 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221382.pem
	I0927 01:41:39.497653   69534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221382.pem /etc/ssl/certs/3ec20f2e.0"
	I0927 01:41:39.508039   69534 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 01:41:39.512597   69534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0927 01:41:39.518557   69534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0927 01:41:39.524475   69534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0927 01:41:39.530616   69534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0927 01:41:39.536820   69534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0927 01:41:39.543487   69534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
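Editor's note: the openssl invocations above ask whether each control-plane certificate expires within 86400 seconds (24 hours). The same check can be expressed with Go's crypto/x509, as sketched below; the certificate path is a placeholder.

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"errors"
		"fmt"
		"log"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in the PEM file expires
	// within d, the same question `openssl x509 -checkend` answers in the log.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, errors.New("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			log.Fatal(err)
		}
		if soon {
			fmt.Println("certificate expires within 24h, regeneration needed")
		} else {
			fmt.Println("certificate is valid for at least another 24h")
		}
	}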
	I0927 01:41:39.549791   69534 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-368295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-368295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.83 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 01:41:39.549880   69534 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0927 01:41:39.549945   69534 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 01:41:39.594178   69534 cri.go:89] found id: ""
	I0927 01:41:39.594256   69534 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0927 01:41:39.605173   69534 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0927 01:41:39.605195   69534 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0927 01:41:39.605261   69534 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0927 01:41:39.615543   69534 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0927 01:41:39.616639   69534 kubeconfig.go:125] found "default-k8s-diff-port-368295" server: "https://192.168.61.83:8444"
	I0927 01:41:39.618793   69534 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0927 01:41:39.628422   69534 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.83
	I0927 01:41:39.628454   69534 kubeadm.go:1160] stopping kube-system containers ...
	I0927 01:41:39.628465   69534 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0927 01:41:39.628566   69534 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 01:41:39.673513   69534 cri.go:89] found id: ""
	I0927 01:41:39.673592   69534 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0927 01:41:39.690296   69534 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 01:41:39.699800   69534 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 01:41:39.699821   69534 kubeadm.go:157] found existing configuration files:
	
	I0927 01:41:39.699876   69534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0927 01:41:39.709235   69534 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 01:41:39.709294   69534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 01:41:39.719012   69534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0927 01:41:39.728197   69534 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 01:41:39.728262   69534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 01:41:39.737520   69534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0927 01:41:39.746592   69534 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 01:41:39.746653   69534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 01:41:39.756251   69534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0927 01:41:39.765026   69534 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 01:41:39.765090   69534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 01:41:39.774937   69534 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 01:41:39.784588   69534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:39.893259   69534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:40.625162   69534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:40.954926   69534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:41.025693   69534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:41.101915   69534 api_server.go:52] waiting for apiserver process to appear ...
	I0927 01:41:41.102006   69534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:41.602856   69534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:42.102942   69534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:42.602371   69534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:42.620056   69534 api_server.go:72] duration metric: took 1.518136259s to wait for apiserver process to appear ...
	I0927 01:41:42.620085   69534 api_server.go:88] waiting for apiserver healthz status ...
	I0927 01:41:42.620107   69534 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8444/healthz ...
	I0927 01:41:41.157254   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:41.157789   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:41.157817   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:41.157738   70637 retry.go:31] will retry after 1.495421187s: waiting for machine to come up
	I0927 01:41:42.655326   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:42.655826   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:42.655853   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:42.655771   70637 retry.go:31] will retry after 2.80191937s: waiting for machine to come up
	I0927 01:41:42.543732   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:45.043009   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:45.040496   69534 api_server.go:279] https://192.168.61.83:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0927 01:41:45.040525   69534 api_server.go:103] status: https://192.168.61.83:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0927 01:41:45.040542   69534 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8444/healthz ...
	I0927 01:41:45.079569   69534 api_server.go:279] https://192.168.61.83:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0927 01:41:45.079602   69534 api_server.go:103] status: https://192.168.61.83:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0927 01:41:45.120702   69534 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8444/healthz ...
	I0927 01:41:45.126461   69534 api_server.go:279] https://192.168.61.83:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0927 01:41:45.126488   69534 api_server.go:103] status: https://192.168.61.83:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0927 01:41:45.621130   69534 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8444/healthz ...
	I0927 01:41:45.629533   69534 api_server.go:279] https://192.168.61.83:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:41:45.629569   69534 api_server.go:103] status: https://192.168.61.83:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:41:46.121189   69534 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8444/healthz ...
	I0927 01:41:46.130806   69534 api_server.go:279] https://192.168.61.83:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:41:46.130842   69534 api_server.go:103] status: https://192.168.61.83:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:41:46.620334   69534 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8444/healthz ...
	I0927 01:41:46.625456   69534 api_server.go:279] https://192.168.61.83:8444/healthz returned 200:
	ok
	I0927 01:41:46.636549   69534 api_server.go:141] control plane version: v1.31.1
	I0927 01:41:46.636581   69534 api_server.go:131] duration metric: took 4.016488114s to wait for apiserver health ...
	I0927 01:41:46.636591   69534 cni.go:84] Creating CNI manager for ""
	I0927 01:41:46.636599   69534 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:41:46.638016   69534 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0927 01:41:42.459806   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:42.960200   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:43.459511   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:43.959467   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:44.459352   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:44.960147   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:45.459637   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:45.959535   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:46.459585   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:46.959579   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:46.639222   69534 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0927 01:41:46.651680   69534 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0927 01:41:46.671366   69534 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 01:41:46.684702   69534 system_pods.go:59] 8 kube-system pods found
	I0927 01:41:46.684740   69534 system_pods.go:61] "coredns-7c65d6cfc9-xtgdx" [6a5f97bd-0fbb-4220-a763-bb8ca6fab439] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0927 01:41:46.684752   69534 system_pods.go:61] "etcd-default-k8s-diff-port-368295" [2dbd4866-89f2-4a0c-ab8a-671ff0237bf3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0927 01:41:46.684761   69534 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-368295" [62865280-e996-45a9-a872-766e09d5b91c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0927 01:41:46.684774   69534 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-368295" [b0d06bec-2f5a-46e4-9d2d-b2ea7cdc7968] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0927 01:41:46.684781   69534 system_pods.go:61] "kube-proxy-xm2p8" [449495d5-a476-4abf-b6be-301b9ead92e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0927 01:41:46.684793   69534 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-368295" [71dadb93-c535-4ce3-8dd7-ffd4496bf0e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0927 01:41:46.684801   69534 system_pods.go:61] "metrics-server-6867b74b74-n9nsg" [fefb6977-44af-41f8-8a82-1dcd76374ac0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 01:41:46.684811   69534 system_pods.go:61] "storage-provisioner" [78bd924c-1d70-4eb6-9e2c-0e21ebc523dc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0927 01:41:46.684818   69534 system_pods.go:74] duration metric: took 13.431978ms to wait for pod list to return data ...
	I0927 01:41:46.684830   69534 node_conditions.go:102] verifying NodePressure condition ...
	I0927 01:41:46.690309   69534 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 01:41:46.690343   69534 node_conditions.go:123] node cpu capacity is 2
	I0927 01:41:46.690358   69534 node_conditions.go:105] duration metric: took 5.522911ms to run NodePressure ...
	I0927 01:41:46.690379   69534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:46.964511   69534 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0927 01:41:46.971731   69534 kubeadm.go:739] kubelet initialised
	I0927 01:41:46.971751   69534 kubeadm.go:740] duration metric: took 7.215476ms waiting for restarted kubelet to initialise ...
	I0927 01:41:46.971760   69534 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:41:46.978192   69534 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-xtgdx" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:45.459706   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:45.460242   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:45.460265   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:45.460161   70637 retry.go:31] will retry after 3.051133432s: waiting for machine to come up
	I0927 01:41:48.512758   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:48.513180   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:48.513208   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:48.513118   70637 retry.go:31] will retry after 3.478053984s: waiting for machine to come up
	I0927 01:41:47.544064   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:50.042360   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:47.459645   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:47.959756   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:48.460088   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:48.959526   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:49.459321   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:49.960102   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:50.460203   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:50.960225   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:51.460182   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:51.959343   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:48.985840   69534 pod_ready.go:103] pod "coredns-7c65d6cfc9-xtgdx" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:51.506449   69534 pod_ready.go:103] pod "coredns-7c65d6cfc9-xtgdx" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:52.484646   69534 pod_ready.go:93] pod "coredns-7c65d6cfc9-xtgdx" in "kube-system" namespace has status "Ready":"True"
	I0927 01:41:52.484672   69534 pod_ready.go:82] duration metric: took 5.506454681s for pod "coredns-7c65d6cfc9-xtgdx" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:52.484685   69534 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:51.994746   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:51.995201   68676 main.go:141] libmachine: (no-preload-521072) Found IP for machine: 192.168.50.246
	I0927 01:41:51.995219   68676 main.go:141] libmachine: (no-preload-521072) Reserving static IP address...
	I0927 01:41:51.995230   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has current primary IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:51.995651   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "no-preload-521072", mac: "52:54:00:85:27:74", ip: "192.168.50.246"} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:51.995677   68676 main.go:141] libmachine: (no-preload-521072) Reserved static IP address: 192.168.50.246
	I0927 01:41:51.995695   68676 main.go:141] libmachine: (no-preload-521072) DBG | skip adding static IP to network mk-no-preload-521072 - found existing host DHCP lease matching {name: "no-preload-521072", mac: "52:54:00:85:27:74", ip: "192.168.50.246"}
	I0927 01:41:51.995713   68676 main.go:141] libmachine: (no-preload-521072) DBG | Getting to WaitForSSH function...
	I0927 01:41:51.995727   68676 main.go:141] libmachine: (no-preload-521072) Waiting for SSH to be available...
	I0927 01:41:51.998245   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:51.998590   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:51.998616   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:51.998748   68676 main.go:141] libmachine: (no-preload-521072) DBG | Using SSH client type: external
	I0927 01:41:51.998810   68676 main.go:141] libmachine: (no-preload-521072) DBG | Using SSH private key: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/no-preload-521072/id_rsa (-rw-------)
	I0927 01:41:51.998850   68676 main.go:141] libmachine: (no-preload-521072) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.246 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19711-14935/.minikube/machines/no-preload-521072/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 01:41:51.998866   68676 main.go:141] libmachine: (no-preload-521072) DBG | About to run SSH command:
	I0927 01:41:51.998877   68676 main.go:141] libmachine: (no-preload-521072) DBG | exit 0
	I0927 01:41:52.131754   68676 main.go:141] libmachine: (no-preload-521072) DBG | SSH cmd err, output: <nil>: 
	I0927 01:41:52.132117   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetConfigRaw
	I0927 01:41:52.132724   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetIP
	I0927 01:41:52.135236   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.135588   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:52.135615   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.135866   68676 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/no-preload-521072/config.json ...
	I0927 01:41:52.136059   68676 machine.go:93] provisionDockerMachine start ...
	I0927 01:41:52.136078   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	I0927 01:41:52.136300   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:41:52.138644   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.139009   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:52.139035   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.139215   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:41:52.139406   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:52.139602   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:52.139760   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:41:52.139931   68676 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:52.140139   68676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.246 22 <nil> <nil>}
	I0927 01:41:52.140151   68676 main.go:141] libmachine: About to run SSH command:
	hostname
	I0927 01:41:52.255655   68676 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0927 01:41:52.255690   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetMachineName
	I0927 01:41:52.255952   68676 buildroot.go:166] provisioning hostname "no-preload-521072"
	I0927 01:41:52.255968   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetMachineName
	I0927 01:41:52.256122   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:41:52.258599   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.258963   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:52.258994   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.259108   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:41:52.259322   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:52.259494   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:52.259676   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:41:52.259835   68676 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:52.260008   68676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.246 22 <nil> <nil>}
	I0927 01:41:52.260023   68676 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-521072 && echo "no-preload-521072" | sudo tee /etc/hostname
	I0927 01:41:52.405255   68676 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-521072
	
	I0927 01:41:52.405314   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:41:52.408593   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.408927   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:52.408973   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.409346   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:41:52.409591   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:52.409786   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:52.409940   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:41:52.410094   68676 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:52.410331   68676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.246 22 <nil> <nil>}
	I0927 01:41:52.410356   68676 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-521072' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-521072/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-521072' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 01:41:52.538244   68676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 01:41:52.538276   68676 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19711-14935/.minikube CaCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19711-14935/.minikube}
	I0927 01:41:52.538321   68676 buildroot.go:174] setting up certificates
	I0927 01:41:52.538335   68676 provision.go:84] configureAuth start
	I0927 01:41:52.538350   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetMachineName
	I0927 01:41:52.538644   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetIP
	I0927 01:41:52.541913   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.542334   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:52.542372   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.542540   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:41:52.544773   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.545127   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:52.545163   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.545357   68676 provision.go:143] copyHostCerts
	I0927 01:41:52.545415   68676 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem, removing ...
	I0927 01:41:52.545427   68676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem
	I0927 01:41:52.545496   68676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem (1078 bytes)
	I0927 01:41:52.545614   68676 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem, removing ...
	I0927 01:41:52.545624   68676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem
	I0927 01:41:52.545655   68676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem (1123 bytes)
	I0927 01:41:52.545732   68676 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem, removing ...
	I0927 01:41:52.545742   68676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem
	I0927 01:41:52.545768   68676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem (1675 bytes)
	I0927 01:41:52.545834   68676 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem org=jenkins.no-preload-521072 san=[127.0.0.1 192.168.50.246 localhost minikube no-preload-521072]
	I0927 01:41:52.738375   68676 provision.go:177] copyRemoteCerts
	I0927 01:41:52.738434   68676 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 01:41:52.738459   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:41:52.741146   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.741439   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:52.741456   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.741630   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:41:52.741828   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:52.741961   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:41:52.742086   68676 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/no-preload-521072/id_rsa Username:docker}
	I0927 01:41:52.830330   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0927 01:41:52.854664   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0927 01:41:52.879246   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0927 01:41:52.902734   68676 provision.go:87] duration metric: took 364.385528ms to configureAuth
	I0927 01:41:52.902782   68676 buildroot.go:189] setting minikube options for container-runtime
	I0927 01:41:52.903017   68676 config.go:182] Loaded profile config "no-preload-521072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 01:41:52.903109   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:41:52.906143   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.906495   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:52.906526   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.906699   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:41:52.906917   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:52.907086   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:52.907211   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:41:52.907426   68676 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:52.907625   68676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.246 22 <nil> <nil>}
	I0927 01:41:52.907640   68676 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 01:41:53.162936   68676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 01:41:53.162960   68676 machine.go:96] duration metric: took 1.026891152s to provisionDockerMachine
	I0927 01:41:53.162971   68676 start.go:293] postStartSetup for "no-preload-521072" (driver="kvm2")
	I0927 01:41:53.162980   68676 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 01:41:53.162994   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	I0927 01:41:53.163325   68676 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 01:41:53.163360   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:41:53.166007   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:53.166478   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:53.166516   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:53.166726   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:41:53.166919   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:53.167103   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:41:53.167253   68676 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/no-preload-521072/id_rsa Username:docker}
	I0927 01:41:53.254620   68676 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 01:41:53.259139   68676 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 01:41:53.259160   68676 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/addons for local assets ...
	I0927 01:41:53.259236   68676 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/files for local assets ...
	I0927 01:41:53.259341   68676 filesync.go:149] local asset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> 221382.pem in /etc/ssl/certs
	I0927 01:41:53.259465   68676 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 01:41:53.269711   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /etc/ssl/certs/221382.pem (1708 bytes)
	I0927 01:41:53.294563   68676 start.go:296] duration metric: took 131.58032ms for postStartSetup
	I0927 01:41:53.294602   68676 fix.go:56] duration metric: took 19.766156729s for fixHost
	I0927 01:41:53.294626   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:41:53.297597   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:53.297897   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:53.297928   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:53.298092   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:41:53.298275   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:53.298460   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:53.298632   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:41:53.298821   68676 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:53.298997   68676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.246 22 <nil> <nil>}
	I0927 01:41:53.299010   68676 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 01:41:53.416459   68676 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727401313.370238189
	
	I0927 01:41:53.416488   68676 fix.go:216] guest clock: 1727401313.370238189
	I0927 01:41:53.416497   68676 fix.go:229] Guest: 2024-09-27 01:41:53.370238189 +0000 UTC Remote: 2024-09-27 01:41:53.294607439 +0000 UTC m=+358.400757430 (delta=75.63075ms)
	I0927 01:41:53.416521   68676 fix.go:200] guest clock delta is within tolerance: 75.63075ms
	I0927 01:41:53.416542   68676 start.go:83] releasing machines lock for "no-preload-521072", held for 19.888127741s
	I0927 01:41:53.416581   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	I0927 01:41:53.416835   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetIP
	I0927 01:41:53.419800   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:53.420124   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:53.420153   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:53.420309   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	I0927 01:41:53.420730   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	I0927 01:41:53.420905   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	I0927 01:41:53.420988   68676 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 01:41:53.421036   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:41:53.421126   68676 ssh_runner.go:195] Run: cat /version.json
	I0927 01:41:53.421148   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:41:53.423529   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:53.423882   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:53.423916   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:53.423937   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:53.424023   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:41:53.424180   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:53.424308   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:41:53.424365   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:53.424412   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:53.424464   68676 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/no-preload-521072/id_rsa Username:docker}
	I0927 01:41:53.424567   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:41:53.424701   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:53.424838   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:41:53.424990   68676 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/no-preload-521072/id_rsa Username:docker}
	I0927 01:41:53.527586   68676 ssh_runner.go:195] Run: systemctl --version
	I0927 01:41:53.533685   68676 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 01:41:53.680850   68676 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 01:41:53.686769   68676 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 01:41:53.686831   68676 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 01:41:53.702686   68676 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0927 01:41:53.702709   68676 start.go:495] detecting cgroup driver to use...
	I0927 01:41:53.702787   68676 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 01:41:53.720756   68676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 01:41:53.736843   68676 docker.go:217] disabling cri-docker service (if available) ...
	I0927 01:41:53.736920   68676 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 01:41:53.752063   68676 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 01:41:53.768140   68676 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 01:41:53.890040   68676 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 01:41:54.044033   68676 docker.go:233] disabling docker service ...
	I0927 01:41:54.044100   68676 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 01:41:54.060061   68676 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 01:41:54.073201   68676 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 01:41:54.225559   68676 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 01:41:54.367269   68676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 01:41:54.381517   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 01:41:54.401099   68676 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0927 01:41:54.401164   68676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:54.412620   68676 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 01:41:54.412687   68676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:54.425942   68676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:54.437451   68676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:54.449115   68676 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 01:41:54.460383   68676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:54.471393   68676 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:54.489649   68676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:54.500699   68676 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 01:41:54.511012   68676 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0927 01:41:54.511061   68676 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0927 01:41:54.524738   68676 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 01:41:54.535353   68676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:41:54.672416   68676 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0927 01:41:54.763423   68676 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 01:41:54.763506   68676 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 01:41:54.768758   68676 start.go:563] Will wait 60s for crictl version
	I0927 01:41:54.768823   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:41:54.772980   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 01:41:54.814375   68676 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0927 01:41:54.814460   68676 ssh_runner.go:195] Run: crio --version
	I0927 01:41:54.844002   68676 ssh_runner.go:195] Run: crio --version
	I0927 01:41:54.876692   68676 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0927 01:41:54.877765   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetIP
	I0927 01:41:54.880320   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:54.880817   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:54.880852   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:54.881008   68676 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0927 01:41:54.885225   68676 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 01:41:54.897661   68676 kubeadm.go:883] updating cluster {Name:no-preload-521072 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-521072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 01:41:54.897768   68676 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 01:41:54.897810   68676 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 01:41:52.542326   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:54.543472   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:52.459589   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:52.960231   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:53.459448   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:53.960120   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:54.460016   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:54.959681   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:55.459321   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:55.959819   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:56.459221   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:56.959296   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:54.491390   69534 pod_ready.go:103] pod "etcd-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:56.997932   69534 pod_ready.go:103] pod "etcd-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:54.937979   68676 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0927 01:41:54.938000   68676 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0927 01:41:54.938055   68676 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0927 01:41:54.938088   68676 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:41:54.938103   68676 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0927 01:41:54.938124   68676 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0927 01:41:54.938101   68676 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0927 01:41:54.938180   68676 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0927 01:41:54.938069   68676 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0927 01:41:54.938088   68676 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0927 01:41:54.939611   68676 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0927 01:41:54.939853   68676 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:41:54.939867   68676 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0927 01:41:54.939872   68676 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0927 01:41:54.939875   68676 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0927 01:41:54.939868   68676 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0927 01:41:54.939932   68676 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0927 01:41:54.939954   68676 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
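	Because no preload tarball matched this Kubernetes/runtime combination ("assuming images are not preloaded" above), the image.go lines show the per-image fallback: each required image is first looked up in a local Docker daemon (which fails on this builder), then inspected inside the guest with podman, and finally transferred from the .minikube/cache/images tree and loaded with podman load, as the lines that follow record. A rough manual replay of that check-then-load step for one image, assuming the same profile name, image tag, and tarball path as in this log, might look like:

	  # hypothetical replay; names and paths are copied from the log, not guaranteed on your host
	  minikube ssh -p no-preload-521072 -- sudo podman image inspect --format '{{.Id}}' registry.k8s.io/kube-scheduler:v1.31.1 \
	    || minikube ssh -p no-preload-521072 -- sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1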
	I0927 01:41:55.100149   68676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0927 01:41:55.104432   68676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0927 01:41:55.122220   68676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0927 01:41:55.146745   68676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0927 01:41:55.148808   68676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0927 01:41:55.159749   68676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0927 01:41:55.194662   68676 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0927 01:41:55.194710   68676 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0927 01:41:55.194764   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:41:55.218262   68676 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0927 01:41:55.218302   68676 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0927 01:41:55.218348   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:41:55.275530   68676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0927 01:41:55.339428   68676 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0927 01:41:55.339476   68676 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0927 01:41:55.339488   68676 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0927 01:41:55.339526   68676 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0927 01:41:55.339554   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:41:55.339558   68676 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0927 01:41:55.339569   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:41:55.339573   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0927 01:41:55.339584   68676 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0927 01:41:55.339619   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:41:55.339625   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0927 01:41:55.339689   68676 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0927 01:41:55.339733   68676 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0927 01:41:55.339772   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:41:55.392986   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0927 01:41:55.393033   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0927 01:41:55.403596   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0927 01:41:55.403658   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0927 01:41:55.403601   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0927 01:41:55.404180   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0927 01:41:55.528983   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0927 01:41:55.529008   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0927 01:41:55.529013   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0927 01:41:55.556122   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0927 01:41:55.556146   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0927 01:41:55.559222   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0927 01:41:55.668914   68676 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0927 01:41:55.669041   68676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0927 01:41:55.671951   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0927 01:41:55.672026   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0927 01:41:55.675810   68676 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0927 01:41:55.675854   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0927 01:41:55.675883   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0927 01:41:55.675910   68676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0927 01:41:55.687199   68676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0927 01:41:55.687234   68676 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0927 01:41:55.687294   68676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0927 01:41:55.766777   68676 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0927 01:41:55.766775   68676 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0927 01:41:55.766894   68676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0927 01:41:55.766901   68676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0927 01:41:55.776811   68676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0927 01:41:55.776824   68676 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0927 01:41:55.776933   68676 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0927 01:41:55.777033   68676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0927 01:41:55.776938   68676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0927 01:41:56.125882   68676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:41:57.825382   68676 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1: (2.048325373s)
	I0927 01:41:57.825460   68676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0927 01:41:57.825396   68676 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: (2.048309349s)
	I0927 01:41:57.825483   68676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0927 01:41:57.825401   68676 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.699485021s)
	I0927 01:41:57.825517   68676 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0927 01:41:57.825520   68676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.138185505s)
	I0927 01:41:57.825540   68676 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0927 01:41:57.825548   68676 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:41:57.825411   68676 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.058505151s)
	I0927 01:41:57.825566   68676 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0927 01:41:57.825573   68676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0927 01:41:57.825414   68676 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.058497946s)
	I0927 01:41:57.825584   68676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0927 01:41:57.825596   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:41:57.825613   68676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0927 01:41:59.788391   68676 ssh_runner.go:235] Completed: which crictl: (1.962775321s)
	I0927 01:41:59.788412   68676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.962779963s)
	I0927 01:41:59.788429   68676 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0927 01:41:59.788457   68676 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0927 01:41:59.788462   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:41:59.788499   68676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0927 01:41:57.043267   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:59.542589   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:57.459172   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:57.960231   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:58.459323   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:58.960219   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:59.459916   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:59.959858   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:00.460249   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:00.959246   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:01.459839   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:01.959224   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:59.490443   69534 pod_ready.go:103] pod "etcd-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:59.992727   69534 pod_ready.go:93] pod "etcd-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"True"
	I0927 01:41:59.992753   69534 pod_ready.go:82] duration metric: took 7.508057707s for pod "etcd-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:59.992766   69534 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:59.998326   69534 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"True"
	I0927 01:41:59.998357   69534 pod_ready.go:82] duration metric: took 5.584215ms for pod "kube-apiserver-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:59.998372   69534 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:00.003176   69534 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"True"
	I0927 01:42:00.003197   69534 pod_ready.go:82] duration metric: took 4.816939ms for pod "kube-controller-manager-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:00.003209   69534 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-xm2p8" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:00.009089   69534 pod_ready.go:93] pod "kube-proxy-xm2p8" in "kube-system" namespace has status "Ready":"True"
	I0927 01:42:00.009110   69534 pod_ready.go:82] duration metric: took 5.893939ms for pod "kube-proxy-xm2p8" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:00.009119   69534 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:00.014172   69534 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"True"
	I0927 01:42:00.014197   69534 pod_ready.go:82] duration metric: took 5.072107ms for pod "kube-scheduler-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:00.014209   69534 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:02.021372   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:01.758278   68676 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.969794291s)
	I0927 01:42:01.758369   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:42:01.758392   68676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.969869427s)
	I0927 01:42:01.758415   68676 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0927 01:42:01.758445   68676 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0927 01:42:01.758494   68676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0927 01:42:01.796910   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:42:03.934871   68676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (2.176354046s)
	I0927 01:42:03.934903   68676 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0927 01:42:03.934921   68676 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0927 01:42:03.934927   68676 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.137986898s)
	I0927 01:42:03.934972   68676 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0927 01:42:03.934994   68676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0927 01:42:03.935050   68676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0927 01:42:03.939942   68676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0927 01:42:02.042617   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:04.042848   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:02.460232   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:02.959635   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:03.459610   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:03.959412   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:04.459857   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:04.959495   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:05.459972   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:05.959931   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:06.459460   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:06.959627   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:04.021759   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:06.521921   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:07.308972   68676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.373952677s)
	I0927 01:42:07.308999   68676 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0927 01:42:07.309024   68676 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0927 01:42:07.309070   68676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0927 01:42:09.378517   68676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.06942074s)
	I0927 01:42:09.378550   68676 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0927 01:42:09.378579   68676 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0927 01:42:09.378629   68676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0927 01:42:06.546731   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:09.044481   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:07.459395   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:07.959574   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:08.460234   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:08.959281   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:09.459240   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:09.959429   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:10.459865   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:10.959431   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:11.459459   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:11.959447   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:09.020456   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:11.021689   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:10.030049   68676 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0927 01:42:10.030100   68676 cache_images.go:123] Successfully loaded all cached images
	I0927 01:42:10.030106   68676 cache_images.go:92] duration metric: took 15.09209404s to LoadCachedImages
	I0927 01:42:10.030118   68676 kubeadm.go:934] updating node { 192.168.50.246 8443 v1.31.1 crio true true} ...
	I0927 01:42:10.030211   68676 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-521072 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.246
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-521072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
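	The unit text above is installed as a systemd drop-in (the scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf appears a few lines below); the empty ExecStart= line is the standard systemd idiom for clearing the packaged command before substituting the minikube-specific one. As a sketch, from a shell inside the guest the merged result could be inspected with:

	  systemctl cat kubelet                 # unit file plus all drop-ins
	  systemctl show kubelet -p ExecStart   # the effective command line after the override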
	I0927 01:42:10.030273   68676 ssh_runner.go:195] Run: crio config
	I0927 01:42:10.078318   68676 cni.go:84] Creating CNI manager for ""
	I0927 01:42:10.078342   68676 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:42:10.078351   68676 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 01:42:10.078370   68676 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.246 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-521072 NodeName:no-preload-521072 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.246"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.246 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0927 01:42:10.078506   68676 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.246
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-521072"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.246
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.246"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
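	This generated kubeadm.yaml (written below to /var/tmp/minikube/kubeadm.yaml.new) combines InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration documents. As a sketch, assuming a kubeadm binary from the matching v1.31 series is on PATH (kubeadm config validate has shipped since roughly v1.26), a config like this can be sanity-checked independently of minikube:

	  sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new   # report schema/validation problems
	  kubeadm config print init-defaults                                         # defaults to diff against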
	I0927 01:42:10.078580   68676 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 01:42:10.089137   68676 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 01:42:10.089212   68676 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0927 01:42:10.098310   68676 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0927 01:42:10.116172   68676 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 01:42:10.134642   68676 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0927 01:42:10.152442   68676 ssh_runner.go:195] Run: grep 192.168.50.246	control-plane.minikube.internal$ /etc/hosts
	I0927 01:42:10.156477   68676 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.246	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 01:42:10.169007   68676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:42:10.288382   68676 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 01:42:10.306047   68676 certs.go:68] Setting up /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/no-preload-521072 for IP: 192.168.50.246
	I0927 01:42:10.306077   68676 certs.go:194] generating shared ca certs ...
	I0927 01:42:10.306096   68676 certs.go:226] acquiring lock for ca certs: {Name:mkdfc5b7e93f77f5ae72cc653545624244421aa5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:42:10.306276   68676 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key
	I0927 01:42:10.306331   68676 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key
	I0927 01:42:10.306350   68676 certs.go:256] generating profile certs ...
	I0927 01:42:10.306453   68676 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/no-preload-521072/client.key
	I0927 01:42:10.306553   68676 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/no-preload-521072/apiserver.key.735097eb
	I0927 01:42:10.306613   68676 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/no-preload-521072/proxy-client.key
	I0927 01:42:10.306761   68676 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem (1338 bytes)
	W0927 01:42:10.306797   68676 certs.go:480] ignoring /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138_empty.pem, impossibly tiny 0 bytes
	I0927 01:42:10.306808   68676 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem (1671 bytes)
	I0927 01:42:10.306833   68676 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem (1078 bytes)
	I0927 01:42:10.306854   68676 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem (1123 bytes)
	I0927 01:42:10.306878   68676 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem (1675 bytes)
	I0927 01:42:10.306916   68676 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem (1708 bytes)
	I0927 01:42:10.307598   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 01:42:10.344570   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0927 01:42:10.386834   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 01:42:10.432022   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0927 01:42:10.462348   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/no-preload-521072/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0927 01:42:10.490015   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/no-preload-521072/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0927 01:42:10.518144   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/no-preload-521072/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 01:42:10.545290   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/no-preload-521072/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0927 01:42:10.572460   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 01:42:10.597526   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem --> /usr/share/ca-certificates/22138.pem (1338 bytes)
	I0927 01:42:10.622287   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /usr/share/ca-certificates/221382.pem (1708 bytes)
	I0927 01:42:10.646020   68676 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 01:42:10.662972   68676 ssh_runner.go:195] Run: openssl version
	I0927 01:42:10.668844   68676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22138.pem && ln -fs /usr/share/ca-certificates/22138.pem /etc/ssl/certs/22138.pem"
	I0927 01:42:10.680020   68676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22138.pem
	I0927 01:42:10.684620   68676 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 00:32 /usr/share/ca-certificates/22138.pem
	I0927 01:42:10.684678   68676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22138.pem
	I0927 01:42:10.690694   68676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22138.pem /etc/ssl/certs/51391683.0"
	I0927 01:42:10.702115   68676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221382.pem && ln -fs /usr/share/ca-certificates/221382.pem /etc/ssl/certs/221382.pem"
	I0927 01:42:10.713424   68676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221382.pem
	I0927 01:42:10.717918   68676 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 00:32 /usr/share/ca-certificates/221382.pem
	I0927 01:42:10.717971   68676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221382.pem
	I0927 01:42:10.723601   68676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221382.pem /etc/ssl/certs/3ec20f2e.0"
	I0927 01:42:10.734870   68676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 01:42:10.747370   68676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:42:10.752016   68676 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:16 /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:42:10.752072   68676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:42:10.757964   68676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 01:42:10.769560   68676 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 01:42:10.774457   68676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0927 01:42:10.780719   68676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0927 01:42:10.786653   68676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0927 01:42:10.792671   68676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0927 01:42:10.798674   68676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0927 01:42:10.804910   68676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
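	The six openssl invocations above use -checkend 86400 to assert that each control-plane certificate remains valid for at least another 24 hours (86,400 seconds); the command exits non-zero if the certificate would expire within that window, presumably so stale certificates can be regenerated before the restart proceeds. The same check can be run by hand, assuming the certificate paths shown in the log:

	  sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver-kubelet-client.crt
	  sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	    && echo 'valid for at least 24h' || echo 'expires within 24h'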
	I0927 01:42:10.811007   68676 kubeadm.go:392] StartCluster: {Name:no-preload-521072 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-521072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 01:42:10.811114   68676 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0927 01:42:10.811178   68676 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 01:42:10.851017   68676 cri.go:89] found id: ""
	I0927 01:42:10.851084   68676 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0927 01:42:10.864997   68676 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0927 01:42:10.865016   68676 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0927 01:42:10.865062   68676 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0927 01:42:10.877088   68676 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0927 01:42:10.878133   68676 kubeconfig.go:125] found "no-preload-521072" server: "https://192.168.50.246:8443"
	I0927 01:42:10.880637   68676 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0927 01:42:10.893554   68676 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.246
	I0927 01:42:10.893578   68676 kubeadm.go:1160] stopping kube-system containers ...
	I0927 01:42:10.893592   68676 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0927 01:42:10.893629   68676 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 01:42:10.935734   68676 cri.go:89] found id: ""
	I0927 01:42:10.935794   68676 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0927 01:42:10.954141   68676 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 01:42:10.965345   68676 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 01:42:10.965363   68676 kubeadm.go:157] found existing configuration files:
	
	I0927 01:42:10.965413   68676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 01:42:10.975561   68676 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 01:42:10.975628   68676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 01:42:10.985747   68676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 01:42:10.995026   68676 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 01:42:10.995089   68676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 01:42:11.006650   68676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 01:42:11.016964   68676 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 01:42:11.017034   68676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 01:42:11.028756   68676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 01:42:11.039002   68676 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 01:42:11.039072   68676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 01:42:11.050382   68676 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 01:42:11.060839   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:42:11.177447   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:42:12.481118   68676 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.303633907s)
	I0927 01:42:12.481149   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:42:12.706344   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:42:12.774938   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:42:12.866467   68676 api_server.go:52] waiting for apiserver process to appear ...
	I0927 01:42:12.866552   68676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:13.366860   68676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:13.866951   68676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:13.882411   68676 api_server.go:72] duration metric: took 1.015943274s to wait for apiserver process to appear ...
	I0927 01:42:13.882435   68676 api_server.go:88] waiting for apiserver healthz status ...
	I0927 01:42:13.882457   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:42:13.882963   68676 api_server.go:269] stopped: https://192.168.50.246:8443/healthz: Get "https://192.168.50.246:8443/healthz": dial tcp 192.168.50.246:8443: connect: connection refused
	I0927 01:42:14.382489   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:42:11.543818   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:14.042536   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:12.459771   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:12.959727   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:13.459428   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:13.959255   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:14.460003   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:14.959853   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:15.460237   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:15.959974   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:16.459420   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:16.959321   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:13.527793   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:16.023080   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:17.124839   68676 api_server.go:279] https://192.168.50.246:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0927 01:42:17.124867   68676 api_server.go:103] status: https://192.168.50.246:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0927 01:42:17.124885   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:42:17.174869   68676 api_server.go:279] https://192.168.50.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:42:17.174905   68676 api_server.go:103] status: https://192.168.50.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:42:17.383128   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:42:17.389594   68676 api_server.go:279] https://192.168.50.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:42:17.389629   68676 api_server.go:103] status: https://192.168.50.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:42:17.883197   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:42:17.888706   68676 api_server.go:279] https://192.168.50.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:42:17.888734   68676 api_server.go:103] status: https://192.168.50.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:42:18.382982   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:42:18.387847   68676 api_server.go:279] https://192.168.50.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:42:18.387877   68676 api_server.go:103] status: https://192.168.50.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:42:18.882844   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:42:18.887144   68676 api_server.go:279] https://192.168.50.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:42:18.887178   68676 api_server.go:103] status: https://192.168.50.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:42:19.382711   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:42:19.388007   68676 api_server.go:279] https://192.168.50.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:42:19.388037   68676 api_server.go:103] status: https://192.168.50.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:42:19.882613   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:42:19.886781   68676 api_server.go:279] https://192.168.50.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:42:19.886801   68676 api_server.go:103] status: https://192.168.50.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:42:20.382907   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:42:20.387083   68676 api_server.go:279] https://192.168.50.246:8443/healthz returned 200:
	ok
	I0927 01:42:20.393697   68676 api_server.go:141] control plane version: v1.31.1
	I0927 01:42:20.393725   68676 api_server.go:131] duration metric: took 6.511280572s to wait for apiserver health ...
	I0927 01:42:20.393735   68676 cni.go:84] Creating CNI manager for ""
	I0927 01:42:20.393743   68676 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:42:20.395270   68676 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
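The repeated 500 responses above are minikube polling https://192.168.50.246:8443/healthz roughly every 500ms until every post-start hook reports ok (the wait completes at 01:42:20 after ~6.5s). As an illustrative Go sketch only, a polling loop of this kind could look like the following; the function name, timeout, and skipped TLS verification are placeholders and this is not minikube's api_server.go:

    // waitForHealthz polls an apiserver /healthz endpoint until it returns 200 OK
    // or the deadline expires. Illustrative sketch, not minikube's implementation.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // The test cluster uses a self-signed certificate, so verification is
            // skipped here; real code would load the cluster CA instead.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // every post-start hook reported ok
                }
            }
            time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence seen in the log
        }
        return fmt.Errorf("apiserver %s not healthy within %s", url, timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.50.246:8443/healthz", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }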
	I0927 01:42:16.543525   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:19.041726   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:20.396770   68676 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0927 01:42:20.407891   68676 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0927 01:42:20.427815   68676 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 01:42:20.436940   68676 system_pods.go:59] 8 kube-system pods found
	I0927 01:42:20.436980   68676 system_pods.go:61] "coredns-7c65d6cfc9-7q54t" [f320e945-a1d6-4109-a0cc-5bd4e3c1bfba] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0927 01:42:20.436989   68676 system_pods.go:61] "etcd-no-preload-521072" [6c63ce89-47bf-4d67-b5db-273a046c4b51] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0927 01:42:20.436997   68676 system_pods.go:61] "kube-apiserver-no-preload-521072" [e4804d4b-0532-46c7-8579-a829a6c5254c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0927 01:42:20.437005   68676 system_pods.go:61] "kube-controller-manager-no-preload-521072" [5029e53b-ae24-41fb-aa58-14faf0440adb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0927 01:42:20.437012   68676 system_pods.go:61] "kube-proxy-wkcb8" [ea79339c-b2f0-4cb8-ab57-4f13f689f504] Running
	I0927 01:42:20.437020   68676 system_pods.go:61] "kube-scheduler-no-preload-521072" [b70fd9f0-c131-4c13-b53f-46c650a5dcf8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0927 01:42:20.437032   68676 system_pods.go:61] "metrics-server-6867b74b74-cc9pp" [a840ca52-d2b8-47a5-b379-30504658e0d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 01:42:20.437038   68676 system_pods.go:61] "storage-provisioner" [b4595dc3-c439-4615-95b7-2009476c779c] Running
	I0927 01:42:20.437049   68676 system_pods.go:74] duration metric: took 9.213874ms to wait for pod list to return data ...
	I0927 01:42:20.437057   68676 node_conditions.go:102] verifying NodePressure condition ...
	I0927 01:42:20.440323   68676 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 01:42:20.440345   68676 node_conditions.go:123] node cpu capacity is 2
	I0927 01:42:20.440356   68676 node_conditions.go:105] duration metric: took 3.294768ms to run NodePressure ...
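The node_conditions.go lines above record the node's ephemeral-storage and CPU capacity and the NodePressure verification. A minimal client-go sketch of that kind of check is shown below; the kubeconfig path is a placeholder and this is not minikube's node_conditions.go:

    // nodeconditions: list node capacity and flag any pressure conditions.
    // Illustrative sketch only.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Placeholder kubeconfig path.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
            for _, c := range n.Status.Conditions {
                // A pressure condition that is True means the node is under resource pressure.
                if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure || c.Type == corev1.NodePIDPressure) && c.Status == corev1.ConditionTrue {
                    fmt.Printf("  node %s has %s\n", n.Name, c.Type)
                }
            }
        }
    }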
	I0927 01:42:20.440372   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:42:20.710186   68676 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0927 01:42:20.713940   68676 kubeadm.go:739] kubelet initialised
	I0927 01:42:20.713958   68676 kubeadm.go:740] duration metric: took 3.749241ms waiting for restarted kubelet to initialise ...
	I0927 01:42:20.713965   68676 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:42:20.718807   68676 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-7q54t" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:20.722955   68676 pod_ready.go:98] node "no-preload-521072" hosting pod "coredns-7c65d6cfc9-7q54t" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:20.722976   68676 pod_ready.go:82] duration metric: took 4.147896ms for pod "coredns-7c65d6cfc9-7q54t" in "kube-system" namespace to be "Ready" ...
	E0927 01:42:20.722984   68676 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-521072" hosting pod "coredns-7c65d6cfc9-7q54t" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:20.722991   68676 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:20.727569   68676 pod_ready.go:98] node "no-preload-521072" hosting pod "etcd-no-preload-521072" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:20.727596   68676 pod_ready.go:82] duration metric: took 4.598426ms for pod "etcd-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	E0927 01:42:20.727604   68676 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-521072" hosting pod "etcd-no-preload-521072" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:20.727611   68676 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:20.731845   68676 pod_ready.go:98] node "no-preload-521072" hosting pod "kube-apiserver-no-preload-521072" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:20.731871   68676 pod_ready.go:82] duration metric: took 4.25326ms for pod "kube-apiserver-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	E0927 01:42:20.731881   68676 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-521072" hosting pod "kube-apiserver-no-preload-521072" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:20.731889   68676 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:20.830881   68676 pod_ready.go:98] node "no-preload-521072" hosting pod "kube-controller-manager-no-preload-521072" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:20.830909   68676 pod_ready.go:82] duration metric: took 99.009569ms for pod "kube-controller-manager-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	E0927 01:42:20.830918   68676 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-521072" hosting pod "kube-controller-manager-no-preload-521072" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:20.830923   68676 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-wkcb8" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:21.232434   68676 pod_ready.go:98] node "no-preload-521072" hosting pod "kube-proxy-wkcb8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:21.232463   68676 pod_ready.go:82] duration metric: took 401.530413ms for pod "kube-proxy-wkcb8" in "kube-system" namespace to be "Ready" ...
	E0927 01:42:21.232473   68676 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-521072" hosting pod "kube-proxy-wkcb8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:21.232485   68676 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:21.630791   68676 pod_ready.go:98] node "no-preload-521072" hosting pod "kube-scheduler-no-preload-521072" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:21.630818   68676 pod_ready.go:82] duration metric: took 398.325039ms for pod "kube-scheduler-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	E0927 01:42:21.630829   68676 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-521072" hosting pod "kube-scheduler-no-preload-521072" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:21.630836   68676 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:22.032173   68676 pod_ready.go:98] node "no-preload-521072" hosting pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:22.032200   68676 pod_ready.go:82] duration metric: took 401.353533ms for pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace to be "Ready" ...
	E0927 01:42:22.032208   68676 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-521072" hosting pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:22.032215   68676 pod_ready.go:39] duration metric: took 1.318241972s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
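The pod_ready.go lines above wait for each system-critical pod's Ready condition and skip pods whose node is not yet Ready. The sketch below shows the general shape of such a Ready check with client-go; the kubeconfig path is a placeholder and this is not minikube's pod_ready.go:

    // podready: poll a pod until its PodReady condition is True.
    // Illustrative sketch only.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Placeholder kubeconfig path.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        for i := 0; i < 60; i++ {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7c65d6cfc9-7q54t", metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("pod never became Ready")
    }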
	I0927 01:42:22.032233   68676 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0927 01:42:22.046872   68676 ops.go:34] apiserver oom_adj: -16
	I0927 01:42:22.046898   68676 kubeadm.go:597] duration metric: took 11.181875532s to restartPrimaryControlPlane
	I0927 01:42:22.046908   68676 kubeadm.go:394] duration metric: took 11.235909243s to StartCluster
	I0927 01:42:22.046923   68676 settings.go:142] acquiring lock: {Name:mk5dca3ab86dd3a71947d9d84c3d32131258c6f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:42:22.046984   68676 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 01:42:22.048611   68676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/kubeconfig: {Name:mke01ed683bdb96463571316956510763878395f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:42:22.048864   68676 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 01:42:22.048932   68676 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0927 01:42:22.049029   68676 addons.go:69] Setting storage-provisioner=true in profile "no-preload-521072"
	I0927 01:42:22.049050   68676 addons.go:234] Setting addon storage-provisioner=true in "no-preload-521072"
	W0927 01:42:22.049060   68676 addons.go:243] addon storage-provisioner should already be in state true
	I0927 01:42:22.049066   68676 addons.go:69] Setting default-storageclass=true in profile "no-preload-521072"
	I0927 01:42:22.049088   68676 host.go:66] Checking if "no-preload-521072" exists ...
	I0927 01:42:22.049092   68676 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-521072"
	I0927 01:42:22.049096   68676 addons.go:69] Setting metrics-server=true in profile "no-preload-521072"
	I0927 01:42:22.049117   68676 addons.go:234] Setting addon metrics-server=true in "no-preload-521072"
	I0927 01:42:22.049123   68676 config.go:182] Loaded profile config "no-preload-521072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	W0927 01:42:22.049134   68676 addons.go:243] addon metrics-server should already be in state true
	I0927 01:42:22.049167   68676 host.go:66] Checking if "no-preload-521072" exists ...
	I0927 01:42:22.049423   68676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:42:22.049455   68676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:42:22.049478   68676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:42:22.049507   68676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:42:22.049535   68676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:42:22.049555   68676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:42:22.050564   68676 out.go:177] * Verifying Kubernetes components...
	I0927 01:42:22.051717   68676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:42:22.088020   68676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34035
	I0927 01:42:22.088454   68676 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:42:22.088964   68676 main.go:141] libmachine: Using API Version  1
	I0927 01:42:22.088985   68676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:42:22.089333   68676 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:42:22.089793   68676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:42:22.089825   68676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:42:22.091735   68676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40053
	I0927 01:42:22.091853   68676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45581
	I0927 01:42:22.092236   68676 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:42:22.092295   68676 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:42:22.092659   68676 main.go:141] libmachine: Using API Version  1
	I0927 01:42:22.092677   68676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:42:22.092817   68676 main.go:141] libmachine: Using API Version  1
	I0927 01:42:22.092840   68676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:42:22.093170   68676 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:42:22.093344   68676 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:42:22.093387   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetState
	I0927 01:42:22.093922   68676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:42:22.093949   68676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:42:22.097310   68676 addons.go:234] Setting addon default-storageclass=true in "no-preload-521072"
	W0927 01:42:22.097333   68676 addons.go:243] addon default-storageclass should already be in state true
	I0927 01:42:22.097368   68676 host.go:66] Checking if "no-preload-521072" exists ...
	I0927 01:42:22.097705   68676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:42:22.097747   68676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:42:22.110628   68676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34585
	I0927 01:42:22.111053   68676 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:42:22.111604   68676 main.go:141] libmachine: Using API Version  1
	I0927 01:42:22.111629   68676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:42:22.112113   68676 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:42:22.112329   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetState
	I0927 01:42:22.113354   68676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43947
	I0927 01:42:22.114009   68676 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:42:22.114749   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	I0927 01:42:22.115666   68676 main.go:141] libmachine: Using API Version  1
	I0927 01:42:22.115690   68676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:42:22.116105   68676 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:42:22.116374   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetState
	I0927 01:42:22.116862   68676 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0927 01:42:22.118124   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	I0927 01:42:22.118135   68676 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0927 01:42:22.118162   68676 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0927 01:42:22.118180   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:42:22.119866   68676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38775
	I0927 01:42:22.120317   68676 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:42:22.120908   68676 main.go:141] libmachine: Using API Version  1
	I0927 01:42:22.120931   68676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:42:22.121113   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:42:22.121319   68676 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:42:22.121556   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:42:22.121576   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:42:22.122025   68676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:42:22.122051   68676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:42:22.122280   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:42:22.122487   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:42:22.122652   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:42:22.122781   68676 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/no-preload-521072/id_rsa Username:docker}
	I0927 01:42:22.126076   68676 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:42:17.459443   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:17.959426   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:18.460250   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:18.959989   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:19.459981   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:19.959969   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:20.459758   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:20.959440   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:21.460115   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:21.959238   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:18.521751   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:21.020226   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:23.021393   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:22.127430   68676 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 01:42:22.127446   68676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0927 01:42:22.127460   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:42:22.130498   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:42:22.131040   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:42:22.131061   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:42:22.131357   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:42:22.131544   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:42:22.131670   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:42:22.131997   68676 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/no-preload-521072/id_rsa Username:docker}
	I0927 01:42:22.138657   68676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44875
	I0927 01:42:22.138987   68676 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:42:22.139420   68676 main.go:141] libmachine: Using API Version  1
	I0927 01:42:22.139438   68676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:42:22.139824   68676 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:42:22.139998   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetState
	I0927 01:42:22.141454   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	I0927 01:42:22.141664   68676 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0927 01:42:22.141673   68676 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0927 01:42:22.141683   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:42:22.144221   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:42:22.144651   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:42:22.144670   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:42:22.144765   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:42:22.144931   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:42:22.145071   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:42:22.145208   68676 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/no-preload-521072/id_rsa Username:docker}
	I0927 01:42:22.244289   68676 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 01:42:22.261345   68676 node_ready.go:35] waiting up to 6m0s for node "no-preload-521072" to be "Ready" ...
	I0927 01:42:22.365923   68676 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0927 01:42:22.365953   68676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0927 01:42:22.387392   68676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 01:42:22.389353   68676 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0927 01:42:22.389379   68676 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0927 01:42:22.406994   68676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0927 01:42:22.491559   68676 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 01:42:22.491581   68676 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0927 01:42:22.586476   68676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 01:42:23.660676   68676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.273241029s)
	I0927 01:42:23.660733   68676 main.go:141] libmachine: Making call to close driver server
	I0927 01:42:23.660750   68676 main.go:141] libmachine: (no-preload-521072) Calling .Close
	I0927 01:42:23.660732   68676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.253706672s)
	I0927 01:42:23.660831   68676 main.go:141] libmachine: Making call to close driver server
	I0927 01:42:23.660841   68676 main.go:141] libmachine: (no-preload-521072) Calling .Close
	I0927 01:42:23.660851   68676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.074315804s)
	I0927 01:42:23.661081   68676 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:42:23.661098   68676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:42:23.661109   68676 main.go:141] libmachine: Making call to close driver server
	I0927 01:42:23.661108   68676 main.go:141] libmachine: Making call to close driver server
	I0927 01:42:23.661118   68676 main.go:141] libmachine: (no-preload-521072) Calling .Close
	I0927 01:42:23.661153   68676 main.go:141] libmachine: (no-preload-521072) DBG | Closing plugin on server side
	I0927 01:42:23.661205   68676 main.go:141] libmachine: (no-preload-521072) DBG | Closing plugin on server side
	I0927 01:42:23.661161   68676 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:42:23.661223   68676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:42:23.661230   68676 main.go:141] libmachine: Making call to close driver server
	I0927 01:42:23.661238   68676 main.go:141] libmachine: (no-preload-521072) Calling .Close
	I0927 01:42:23.661125   68676 main.go:141] libmachine: (no-preload-521072) Calling .Close
	I0927 01:42:23.661607   68676 main.go:141] libmachine: (no-preload-521072) DBG | Closing plugin on server side
	I0927 01:42:23.661608   68676 main.go:141] libmachine: (no-preload-521072) DBG | Closing plugin on server side
	I0927 01:42:23.661621   68676 main.go:141] libmachine: (no-preload-521072) DBG | Closing plugin on server side
	I0927 01:42:23.661631   68676 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:42:23.661632   68676 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:42:23.661637   68676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:42:23.661641   68676 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:42:23.661645   68676 main.go:141] libmachine: Making call to close driver server
	I0927 01:42:23.661649   68676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:42:23.661650   68676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:42:23.661653   68676 main.go:141] libmachine: (no-preload-521072) Calling .Close
	I0927 01:42:23.661852   68676 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:42:23.661866   68676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:42:23.661874   68676 addons.go:475] Verifying addon metrics-server=true in "no-preload-521072"
	I0927 01:42:23.661917   68676 main.go:141] libmachine: (no-preload-521072) DBG | Closing plugin on server side
	I0927 01:42:23.668484   68676 main.go:141] libmachine: Making call to close driver server
	I0927 01:42:23.668499   68676 main.go:141] libmachine: (no-preload-521072) Calling .Close
	I0927 01:42:23.668711   68676 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:42:23.668726   68676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:42:23.668743   68676 main.go:141] libmachine: (no-preload-521072) DBG | Closing plugin on server side
	I0927 01:42:23.670758   68676 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0927 01:42:23.672072   68676 addons.go:510] duration metric: took 1.62313879s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I0927 01:42:24.265426   68676 node_ready.go:53] node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:21.042193   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:23.043831   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:25.546335   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:22.460161   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:22.959177   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:23.459481   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:23.959221   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:23.959322   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:24.004970   69333 cri.go:89] found id: ""
	I0927 01:42:24.004999   69333 logs.go:276] 0 containers: []
	W0927 01:42:24.005010   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:24.005017   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:24.005076   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:24.041880   69333 cri.go:89] found id: ""
	I0927 01:42:24.041908   69333 logs.go:276] 0 containers: []
	W0927 01:42:24.041919   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:24.041926   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:24.041991   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:24.082295   69333 cri.go:89] found id: ""
	I0927 01:42:24.082318   69333 logs.go:276] 0 containers: []
	W0927 01:42:24.082325   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:24.082331   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:24.082385   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:24.119663   69333 cri.go:89] found id: ""
	I0927 01:42:24.119692   69333 logs.go:276] 0 containers: []
	W0927 01:42:24.119707   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:24.119714   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:24.119771   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:24.163893   69333 cri.go:89] found id: ""
	I0927 01:42:24.163920   69333 logs.go:276] 0 containers: []
	W0927 01:42:24.163932   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:24.163940   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:24.163999   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:24.200277   69333 cri.go:89] found id: ""
	I0927 01:42:24.200299   69333 logs.go:276] 0 containers: []
	W0927 01:42:24.200307   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:24.200312   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:24.200365   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:24.235039   69333 cri.go:89] found id: ""
	I0927 01:42:24.235059   69333 logs.go:276] 0 containers: []
	W0927 01:42:24.235066   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:24.235072   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:24.235132   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:24.275160   69333 cri.go:89] found id: ""
	I0927 01:42:24.275181   69333 logs.go:276] 0 containers: []
	W0927 01:42:24.275188   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:24.275196   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:24.275206   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:24.327432   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:24.327465   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:24.341113   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:24.341139   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:24.473741   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:24.473764   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:24.473779   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:24.545888   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:24.545923   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:27.086673   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:27.100552   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:27.100623   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:27.136182   69333 cri.go:89] found id: ""
	I0927 01:42:27.136207   69333 logs.go:276] 0 containers: []
	W0927 01:42:27.136215   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:27.136221   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:27.136289   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:27.173258   69333 cri.go:89] found id: ""
	I0927 01:42:27.173285   69333 logs.go:276] 0 containers: []
	W0927 01:42:27.173296   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:27.173303   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:27.173373   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:27.210481   69333 cri.go:89] found id: ""
	I0927 01:42:27.210514   69333 logs.go:276] 0 containers: []
	W0927 01:42:27.210526   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:27.210533   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:27.210586   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:27.245168   69333 cri.go:89] found id: ""
	I0927 01:42:27.245192   69333 logs.go:276] 0 containers: []
	W0927 01:42:27.245200   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:27.245206   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:27.245252   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:27.280494   69333 cri.go:89] found id: ""
	I0927 01:42:27.280522   69333 logs.go:276] 0 containers: []
	W0927 01:42:27.280531   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:27.280538   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:27.280596   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:27.314281   69333 cri.go:89] found id: ""
	I0927 01:42:27.314307   69333 logs.go:276] 0 containers: []
	W0927 01:42:27.314316   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:27.314322   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:27.314392   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:25.521413   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:28.019989   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:26.764721   68676 node_ready.go:53] node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:27.765574   68676 node_ready.go:49] node "no-preload-521072" has status "Ready":"True"
	I0927 01:42:27.765597   68676 node_ready.go:38] duration metric: took 5.504217374s for node "no-preload-521072" to be "Ready" ...
	I0927 01:42:27.765609   68676 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:42:27.772263   68676 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7q54t" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:27.777521   68676 pod_ready.go:93] pod "coredns-7c65d6cfc9-7q54t" in "kube-system" namespace has status "Ready":"True"
	I0927 01:42:27.777544   68676 pod_ready.go:82] duration metric: took 5.252259ms for pod "coredns-7c65d6cfc9-7q54t" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:27.777552   68676 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:27.781511   68676 pod_ready.go:93] pod "etcd-no-preload-521072" in "kube-system" namespace has status "Ready":"True"
	I0927 01:42:27.781528   68676 pod_ready.go:82] duration metric: took 3.970559ms for pod "etcd-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:27.781535   68676 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:27.785556   68676 pod_ready.go:93] pod "kube-apiserver-no-preload-521072" in "kube-system" namespace has status "Ready":"True"
	I0927 01:42:27.785572   68676 pod_ready.go:82] duration metric: took 4.032023ms for pod "kube-apiserver-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:27.785579   68676 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:29.792899   68676 pod_ready.go:103] pod "kube-controller-manager-no-preload-521072" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:28.041166   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:30.041766   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:27.350838   69333 cri.go:89] found id: ""
	I0927 01:42:27.350861   69333 logs.go:276] 0 containers: []
	W0927 01:42:27.350869   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:27.350874   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:27.350921   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:27.390146   69333 cri.go:89] found id: ""
	I0927 01:42:27.390175   69333 logs.go:276] 0 containers: []
	W0927 01:42:27.390186   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:27.390196   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:27.390206   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:27.446727   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:27.446756   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:27.461337   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:27.461365   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:27.533818   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:27.533839   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:27.533874   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:27.614325   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:27.614357   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:30.161303   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:30.179521   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:30.179590   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:30.221738   69333 cri.go:89] found id: ""
	I0927 01:42:30.221764   69333 logs.go:276] 0 containers: []
	W0927 01:42:30.221772   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:30.221778   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:30.221841   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:30.258316   69333 cri.go:89] found id: ""
	I0927 01:42:30.258349   69333 logs.go:276] 0 containers: []
	W0927 01:42:30.258359   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:30.258369   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:30.258427   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:30.297079   69333 cri.go:89] found id: ""
	I0927 01:42:30.297102   69333 logs.go:276] 0 containers: []
	W0927 01:42:30.297109   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:30.297114   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:30.297159   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:30.337969   69333 cri.go:89] found id: ""
	I0927 01:42:30.337995   69333 logs.go:276] 0 containers: []
	W0927 01:42:30.338007   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:30.338014   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:30.338075   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:30.375946   69333 cri.go:89] found id: ""
	I0927 01:42:30.375975   69333 logs.go:276] 0 containers: []
	W0927 01:42:30.375986   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:30.375993   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:30.376054   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:30.411673   69333 cri.go:89] found id: ""
	I0927 01:42:30.411700   69333 logs.go:276] 0 containers: []
	W0927 01:42:30.411710   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:30.411718   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:30.411765   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:30.447784   69333 cri.go:89] found id: ""
	I0927 01:42:30.447812   69333 logs.go:276] 0 containers: []
	W0927 01:42:30.447822   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:30.447830   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:30.447890   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:30.483164   69333 cri.go:89] found id: ""
	I0927 01:42:30.483191   69333 logs.go:276] 0 containers: []
	W0927 01:42:30.483202   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:30.483213   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:30.483229   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:30.533490   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:30.533522   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:30.547688   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:30.547722   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:30.626696   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:30.626720   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:30.626733   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:30.708767   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:30.708809   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:30.020786   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:32.021243   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:32.292370   68676 pod_ready.go:103] pod "kube-controller-manager-no-preload-521072" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:32.791420   68676 pod_ready.go:93] pod "kube-controller-manager-no-preload-521072" in "kube-system" namespace has status "Ready":"True"
	I0927 01:42:32.791444   68676 pod_ready.go:82] duration metric: took 5.00585892s for pod "kube-controller-manager-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:32.791454   68676 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wkcb8" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:32.796509   68676 pod_ready.go:93] pod "kube-proxy-wkcb8" in "kube-system" namespace has status "Ready":"True"
	I0927 01:42:32.796528   68676 pod_ready.go:82] duration metric: took 5.067798ms for pod "kube-proxy-wkcb8" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:32.796536   68676 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:32.801041   68676 pod_ready.go:93] pod "kube-scheduler-no-preload-521072" in "kube-system" namespace has status "Ready":"True"
	I0927 01:42:32.801066   68676 pod_ready.go:82] duration metric: took 4.523416ms for pod "kube-scheduler-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:32.801087   68676 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:34.807359   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:32.042216   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:34.541390   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:33.250034   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:33.263733   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:33.263805   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:33.298038   69333 cri.go:89] found id: ""
	I0927 01:42:33.298063   69333 logs.go:276] 0 containers: []
	W0927 01:42:33.298071   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:33.298077   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:33.298139   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:33.338027   69333 cri.go:89] found id: ""
	I0927 01:42:33.338050   69333 logs.go:276] 0 containers: []
	W0927 01:42:33.338058   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:33.338064   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:33.338118   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:33.376470   69333 cri.go:89] found id: ""
	I0927 01:42:33.376496   69333 logs.go:276] 0 containers: []
	W0927 01:42:33.376504   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:33.376509   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:33.376568   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:33.419831   69333 cri.go:89] found id: ""
	I0927 01:42:33.419859   69333 logs.go:276] 0 containers: []
	W0927 01:42:33.419868   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:33.419874   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:33.419929   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:33.461029   69333 cri.go:89] found id: ""
	I0927 01:42:33.461057   69333 logs.go:276] 0 containers: []
	W0927 01:42:33.461076   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:33.461085   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:33.461158   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:33.499968   69333 cri.go:89] found id: ""
	I0927 01:42:33.499996   69333 logs.go:276] 0 containers: []
	W0927 01:42:33.500007   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:33.500015   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:33.500073   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:33.552601   69333 cri.go:89] found id: ""
	I0927 01:42:33.552625   69333 logs.go:276] 0 containers: []
	W0927 01:42:33.552633   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:33.552640   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:33.552702   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:33.589491   69333 cri.go:89] found id: ""
	I0927 01:42:33.589520   69333 logs.go:276] 0 containers: []
	W0927 01:42:33.589529   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:33.589540   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:33.589554   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:33.643437   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:33.643470   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:33.657819   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:33.657846   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:33.728369   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:33.728393   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:33.728407   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:33.803661   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:33.803691   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:36.343598   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:36.357879   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:36.357937   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:36.398936   69333 cri.go:89] found id: ""
	I0927 01:42:36.398958   69333 logs.go:276] 0 containers: []
	W0927 01:42:36.398966   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:36.398971   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:36.399016   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:36.438897   69333 cri.go:89] found id: ""
	I0927 01:42:36.438921   69333 logs.go:276] 0 containers: []
	W0927 01:42:36.438928   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:36.438935   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:36.438979   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:36.476779   69333 cri.go:89] found id: ""
	I0927 01:42:36.476807   69333 logs.go:276] 0 containers: []
	W0927 01:42:36.476817   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:36.476824   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:36.476882   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:36.514216   69333 cri.go:89] found id: ""
	I0927 01:42:36.514238   69333 logs.go:276] 0 containers: []
	W0927 01:42:36.514245   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:36.514251   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:36.514306   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:36.551800   69333 cri.go:89] found id: ""
	I0927 01:42:36.551827   69333 logs.go:276] 0 containers: []
	W0927 01:42:36.551835   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:36.551841   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:36.551900   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:36.592060   69333 cri.go:89] found id: ""
	I0927 01:42:36.592086   69333 logs.go:276] 0 containers: []
	W0927 01:42:36.592096   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:36.592101   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:36.592172   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:36.633485   69333 cri.go:89] found id: ""
	I0927 01:42:36.633507   69333 logs.go:276] 0 containers: []
	W0927 01:42:36.633514   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:36.633519   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:36.633571   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:36.667288   69333 cri.go:89] found id: ""
	I0927 01:42:36.667355   69333 logs.go:276] 0 containers: []
	W0927 01:42:36.667366   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:36.667377   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:36.667391   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:36.722230   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:36.722263   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:36.735927   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:36.735952   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:36.808852   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:36.808872   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:36.808887   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:36.889259   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:36.889299   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:34.520143   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:36.521254   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:36.808388   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:39.308743   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:36.542085   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:39.042119   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:39.438818   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:39.459082   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:39.459150   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:39.499966   69333 cri.go:89] found id: ""
	I0927 01:42:39.499991   69333 logs.go:276] 0 containers: []
	W0927 01:42:39.499999   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:39.500004   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:39.500050   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:39.540828   69333 cri.go:89] found id: ""
	I0927 01:42:39.540850   69333 logs.go:276] 0 containers: []
	W0927 01:42:39.540857   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:39.540864   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:39.540972   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:39.575841   69333 cri.go:89] found id: ""
	I0927 01:42:39.575868   69333 logs.go:276] 0 containers: []
	W0927 01:42:39.575879   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:39.575886   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:39.575958   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:39.611105   69333 cri.go:89] found id: ""
	I0927 01:42:39.611184   69333 logs.go:276] 0 containers: []
	W0927 01:42:39.611202   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:39.611212   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:39.611268   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:39.644772   69333 cri.go:89] found id: ""
	I0927 01:42:39.644800   69333 logs.go:276] 0 containers: []
	W0927 01:42:39.644808   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:39.644813   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:39.644868   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:39.679875   69333 cri.go:89] found id: ""
	I0927 01:42:39.679901   69333 logs.go:276] 0 containers: []
	W0927 01:42:39.679912   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:39.679919   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:39.679987   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:39.716410   69333 cri.go:89] found id: ""
	I0927 01:42:39.716440   69333 logs.go:276] 0 containers: []
	W0927 01:42:39.716450   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:39.716457   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:39.716525   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:39.750391   69333 cri.go:89] found id: ""
	I0927 01:42:39.750418   69333 logs.go:276] 0 containers: []
	W0927 01:42:39.750428   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:39.750439   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:39.750455   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:39.822365   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:39.822401   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:39.822416   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:39.905982   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:39.906017   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:39.952310   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:39.952339   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:40.000523   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:40.000554   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:39.021945   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:41.519787   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:41.807532   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:44.307548   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:41.042260   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:43.042762   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:45.542112   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:42.514379   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:42.528312   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:42.528377   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:42.562427   69333 cri.go:89] found id: ""
	I0927 01:42:42.562455   69333 logs.go:276] 0 containers: []
	W0927 01:42:42.562463   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:42.562469   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:42.562526   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:42.599969   69333 cri.go:89] found id: ""
	I0927 01:42:42.599993   69333 logs.go:276] 0 containers: []
	W0927 01:42:42.600002   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:42.600007   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:42.600053   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:42.636338   69333 cri.go:89] found id: ""
	I0927 01:42:42.636364   69333 logs.go:276] 0 containers: []
	W0927 01:42:42.636371   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:42.636376   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:42.636431   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:42.670781   69333 cri.go:89] found id: ""
	I0927 01:42:42.670809   69333 logs.go:276] 0 containers: []
	W0927 01:42:42.670818   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:42.670823   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:42.670880   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:42.707334   69333 cri.go:89] found id: ""
	I0927 01:42:42.707364   69333 logs.go:276] 0 containers: []
	W0927 01:42:42.707375   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:42.707431   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:42.707503   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:42.743063   69333 cri.go:89] found id: ""
	I0927 01:42:42.743092   69333 logs.go:276] 0 containers: []
	W0927 01:42:42.743103   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:42.743139   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:42.743192   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:42.778593   69333 cri.go:89] found id: ""
	I0927 01:42:42.778617   69333 logs.go:276] 0 containers: []
	W0927 01:42:42.778628   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:42.778634   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:42.778700   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:42.814261   69333 cri.go:89] found id: ""
	I0927 01:42:42.814286   69333 logs.go:276] 0 containers: []
	W0927 01:42:42.814293   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:42.814300   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:42.814310   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:42.863982   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:42.864011   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:42.877151   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:42.877175   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:42.959233   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:42.959251   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:42.959262   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:43.038773   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:43.038805   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:45.581272   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:45.596103   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:45.596167   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:45.639507   69333 cri.go:89] found id: ""
	I0927 01:42:45.639531   69333 logs.go:276] 0 containers: []
	W0927 01:42:45.639539   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:45.639545   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:45.639611   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:45.678455   69333 cri.go:89] found id: ""
	I0927 01:42:45.678482   69333 logs.go:276] 0 containers: []
	W0927 01:42:45.678489   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:45.678495   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:45.678539   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:45.722094   69333 cri.go:89] found id: ""
	I0927 01:42:45.722123   69333 logs.go:276] 0 containers: []
	W0927 01:42:45.722135   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:45.722142   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:45.722211   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:45.758091   69333 cri.go:89] found id: ""
	I0927 01:42:45.758118   69333 logs.go:276] 0 containers: []
	W0927 01:42:45.758127   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:45.758133   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:45.758183   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:45.792976   69333 cri.go:89] found id: ""
	I0927 01:42:45.793010   69333 logs.go:276] 0 containers: []
	W0927 01:42:45.793021   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:45.793028   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:45.793089   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:45.830235   69333 cri.go:89] found id: ""
	I0927 01:42:45.830262   69333 logs.go:276] 0 containers: []
	W0927 01:42:45.830273   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:45.830280   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:45.830324   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:45.865896   69333 cri.go:89] found id: ""
	I0927 01:42:45.865928   69333 logs.go:276] 0 containers: []
	W0927 01:42:45.865938   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:45.865946   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:45.866000   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:45.900058   69333 cri.go:89] found id: ""
	I0927 01:42:45.900088   69333 logs.go:276] 0 containers: []
	W0927 01:42:45.900099   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:45.900108   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:45.900119   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:45.972986   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:45.973015   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:45.973030   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:46.048703   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:46.048732   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:46.087483   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:46.087515   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:46.136833   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:46.136866   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:43.520998   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:45.522532   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:48.020912   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:46.307637   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:48.808963   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:48.041757   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:50.042259   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:48.650738   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:48.665847   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:48.665930   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:48.704304   69333 cri.go:89] found id: ""
	I0927 01:42:48.704328   69333 logs.go:276] 0 containers: []
	W0927 01:42:48.704337   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:48.704342   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:48.704402   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:48.742469   69333 cri.go:89] found id: ""
	I0927 01:42:48.742499   69333 logs.go:276] 0 containers: []
	W0927 01:42:48.742510   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:48.742517   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:48.742579   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:48.782154   69333 cri.go:89] found id: ""
	I0927 01:42:48.782183   69333 logs.go:276] 0 containers: []
	W0927 01:42:48.782194   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:48.782201   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:48.782261   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:48.821686   69333 cri.go:89] found id: ""
	I0927 01:42:48.821709   69333 logs.go:276] 0 containers: []
	W0927 01:42:48.821717   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:48.821723   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:48.821781   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:48.867072   69333 cri.go:89] found id: ""
	I0927 01:42:48.867099   69333 logs.go:276] 0 containers: []
	W0927 01:42:48.867109   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:48.867123   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:48.867191   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:48.908215   69333 cri.go:89] found id: ""
	I0927 01:42:48.908241   69333 logs.go:276] 0 containers: []
	W0927 01:42:48.908249   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:48.908255   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:48.908312   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:48.945260   69333 cri.go:89] found id: ""
	I0927 01:42:48.945291   69333 logs.go:276] 0 containers: []
	W0927 01:42:48.945303   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:48.945310   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:48.945375   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:48.983285   69333 cri.go:89] found id: ""
	I0927 01:42:48.983325   69333 logs.go:276] 0 containers: []
	W0927 01:42:48.983333   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:48.983343   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:48.983354   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:49.039437   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:49.039472   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:49.053546   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:49.053571   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:49.129264   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:49.129286   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:49.129299   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:49.216967   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:49.216999   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:51.758143   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:51.771417   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:51.771485   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:51.806120   69333 cri.go:89] found id: ""
	I0927 01:42:51.806144   69333 logs.go:276] 0 containers: []
	W0927 01:42:51.806154   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:51.806161   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:51.806219   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:51.840301   69333 cri.go:89] found id: ""
	I0927 01:42:51.840330   69333 logs.go:276] 0 containers: []
	W0927 01:42:51.840340   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:51.840348   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:51.840410   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:51.874908   69333 cri.go:89] found id: ""
	I0927 01:42:51.874934   69333 logs.go:276] 0 containers: []
	W0927 01:42:51.874944   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:51.874952   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:51.875018   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:51.910960   69333 cri.go:89] found id: ""
	I0927 01:42:51.910988   69333 logs.go:276] 0 containers: []
	W0927 01:42:51.910999   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:51.911006   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:51.911064   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:51.945206   69333 cri.go:89] found id: ""
	I0927 01:42:51.945228   69333 logs.go:276] 0 containers: []
	W0927 01:42:51.945236   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:51.945241   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:51.945289   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:51.979262   69333 cri.go:89] found id: ""
	I0927 01:42:51.979296   69333 logs.go:276] 0 containers: []
	W0927 01:42:51.979322   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:51.979328   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:51.979384   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:52.013407   69333 cri.go:89] found id: ""
	I0927 01:42:52.013438   69333 logs.go:276] 0 containers: []
	W0927 01:42:52.013449   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:52.013456   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:52.013510   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:52.048928   69333 cri.go:89] found id: ""
	I0927 01:42:52.048951   69333 logs.go:276] 0 containers: []
	W0927 01:42:52.048961   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:52.048970   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:52.048984   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:52.101043   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:52.101083   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:52.115903   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:52.115938   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:52.197147   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:52.197168   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:52.197184   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:52.276352   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:52.276393   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:50.021730   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:52.520362   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:51.306847   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:53.307714   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:52.042729   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:54.544118   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:54.819649   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:54.832262   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:54.832344   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:54.867495   69333 cri.go:89] found id: ""
	I0927 01:42:54.867523   69333 logs.go:276] 0 containers: []
	W0927 01:42:54.867533   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:54.867539   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:54.867585   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:54.899705   69333 cri.go:89] found id: ""
	I0927 01:42:54.899732   69333 logs.go:276] 0 containers: []
	W0927 01:42:54.899742   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:54.899749   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:54.899817   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:54.939216   69333 cri.go:89] found id: ""
	I0927 01:42:54.939235   69333 logs.go:276] 0 containers: []
	W0927 01:42:54.939244   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:54.939249   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:54.939293   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:54.976603   69333 cri.go:89] found id: ""
	I0927 01:42:54.976632   69333 logs.go:276] 0 containers: []
	W0927 01:42:54.976643   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:54.976651   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:54.976718   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:55.011617   69333 cri.go:89] found id: ""
	I0927 01:42:55.011649   69333 logs.go:276] 0 containers: []
	W0927 01:42:55.011660   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:55.011667   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:55.011729   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:55.048836   69333 cri.go:89] found id: ""
	I0927 01:42:55.048861   69333 logs.go:276] 0 containers: []
	W0927 01:42:55.048869   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:55.048885   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:55.048955   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:55.085105   69333 cri.go:89] found id: ""
	I0927 01:42:55.085133   69333 logs.go:276] 0 containers: []
	W0927 01:42:55.085144   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:55.085151   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:55.085205   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:55.122536   69333 cri.go:89] found id: ""
	I0927 01:42:55.122564   69333 logs.go:276] 0 containers: []
	W0927 01:42:55.122575   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:55.122585   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:55.122600   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:55.197191   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:55.197216   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:55.197230   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:55.275914   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:55.275950   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:55.315043   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:55.315071   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:55.365808   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:55.365846   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:55.025083   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:57.520041   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:55.807377   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:57.807419   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:59.808202   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:57.042511   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:59.541628   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:57.880934   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:57.894276   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:57.894337   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:57.933299   69333 cri.go:89] found id: ""
	I0927 01:42:57.933326   69333 logs.go:276] 0 containers: []
	W0927 01:42:57.933336   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:57.933343   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:57.933403   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:57.969070   69333 cri.go:89] found id: ""
	I0927 01:42:57.969094   69333 logs.go:276] 0 containers: []
	W0927 01:42:57.969102   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:57.969107   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:57.969151   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:58.009432   69333 cri.go:89] found id: ""
	I0927 01:42:58.009453   69333 logs.go:276] 0 containers: []
	W0927 01:42:58.009462   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:58.009468   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:58.009524   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:58.046507   69333 cri.go:89] found id: ""
	I0927 01:42:58.046526   69333 logs.go:276] 0 containers: []
	W0927 01:42:58.046533   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:58.046539   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:58.046603   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:58.079910   69333 cri.go:89] found id: ""
	I0927 01:42:58.079936   69333 logs.go:276] 0 containers: []
	W0927 01:42:58.079947   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:58.079954   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:58.080015   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:58.115971   69333 cri.go:89] found id: ""
	I0927 01:42:58.115994   69333 logs.go:276] 0 containers: []
	W0927 01:42:58.116001   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:58.116007   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:58.116065   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:58.150512   69333 cri.go:89] found id: ""
	I0927 01:42:58.150536   69333 logs.go:276] 0 containers: []
	W0927 01:42:58.150544   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:58.150549   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:58.150608   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:58.183458   69333 cri.go:89] found id: ""
	I0927 01:42:58.183487   69333 logs.go:276] 0 containers: []
	W0927 01:42:58.183498   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:58.183506   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:58.183520   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:58.234404   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:58.234434   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:58.248387   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:58.248411   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:58.320751   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:58.320772   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:58.320783   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:58.401163   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:58.401212   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:00.943677   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:00.956739   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:00.956815   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:00.991020   69333 cri.go:89] found id: ""
	I0927 01:43:00.991042   69333 logs.go:276] 0 containers: []
	W0927 01:43:00.991051   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:00.991056   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:00.991113   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:01.031686   69333 cri.go:89] found id: ""
	I0927 01:43:01.031711   69333 logs.go:276] 0 containers: []
	W0927 01:43:01.031720   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:01.031726   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:01.031786   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:01.068783   69333 cri.go:89] found id: ""
	I0927 01:43:01.068813   69333 logs.go:276] 0 containers: []
	W0927 01:43:01.068824   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:01.068831   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:01.068890   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:01.108434   69333 cri.go:89] found id: ""
	I0927 01:43:01.108456   69333 logs.go:276] 0 containers: []
	W0927 01:43:01.108464   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:01.108469   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:01.108513   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:01.147574   69333 cri.go:89] found id: ""
	I0927 01:43:01.147596   69333 logs.go:276] 0 containers: []
	W0927 01:43:01.147604   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:01.147610   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:01.147660   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:01.188251   69333 cri.go:89] found id: ""
	I0927 01:43:01.188279   69333 logs.go:276] 0 containers: []
	W0927 01:43:01.188290   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:01.188297   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:01.188359   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:01.224901   69333 cri.go:89] found id: ""
	I0927 01:43:01.224944   69333 logs.go:276] 0 containers: []
	W0927 01:43:01.224964   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:01.224974   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:01.225052   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:01.262701   69333 cri.go:89] found id: ""
	I0927 01:43:01.262728   69333 logs.go:276] 0 containers: []
	W0927 01:43:01.262738   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:01.262749   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:01.262762   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:01.313872   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:01.313900   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:01.327809   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:01.327835   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:01.400864   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:01.400895   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:01.400909   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:01.478012   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:01.478045   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:59.520973   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:01.522457   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:02.308215   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:04.309111   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:01.543151   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:04.043201   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:04.018634   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:04.032732   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:04.032803   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:04.075258   69333 cri.go:89] found id: ""
	I0927 01:43:04.075285   69333 logs.go:276] 0 containers: []
	W0927 01:43:04.075293   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:04.075299   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:04.075381   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:04.108738   69333 cri.go:89] found id: ""
	I0927 01:43:04.108764   69333 logs.go:276] 0 containers: []
	W0927 01:43:04.108773   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:04.108779   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:04.108835   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:04.142115   69333 cri.go:89] found id: ""
	I0927 01:43:04.142145   69333 logs.go:276] 0 containers: []
	W0927 01:43:04.142155   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:04.142174   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:04.142249   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:04.184606   69333 cri.go:89] found id: ""
	I0927 01:43:04.184626   69333 logs.go:276] 0 containers: []
	W0927 01:43:04.184634   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:04.184639   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:04.184684   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:04.218391   69333 cri.go:89] found id: ""
	I0927 01:43:04.218420   69333 logs.go:276] 0 containers: []
	W0927 01:43:04.218428   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:04.218434   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:04.218482   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:04.253796   69333 cri.go:89] found id: ""
	I0927 01:43:04.253816   69333 logs.go:276] 0 containers: []
	W0927 01:43:04.253824   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:04.253829   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:04.253884   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:04.289147   69333 cri.go:89] found id: ""
	I0927 01:43:04.289170   69333 logs.go:276] 0 containers: []
	W0927 01:43:04.289179   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:04.289184   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:04.289245   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:04.329000   69333 cri.go:89] found id: ""
	I0927 01:43:04.329026   69333 logs.go:276] 0 containers: []
	W0927 01:43:04.329034   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:04.329042   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:04.329053   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:04.424255   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:04.424290   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:04.470746   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:04.470775   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:04.524208   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:04.524237   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:04.538338   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:04.538365   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:04.608713   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:07.109492   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:07.124253   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:07.124332   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:07.160443   69333 cri.go:89] found id: ""
	I0927 01:43:07.160470   69333 logs.go:276] 0 containers: []
	W0927 01:43:07.160481   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:07.160488   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:07.160554   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:07.195492   69333 cri.go:89] found id: ""
	I0927 01:43:07.195515   69333 logs.go:276] 0 containers: []
	W0927 01:43:07.195522   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:07.195527   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:07.195572   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:07.237678   69333 cri.go:89] found id: ""
	I0927 01:43:07.237708   69333 logs.go:276] 0 containers: []
	W0927 01:43:07.237718   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:07.237725   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:07.237792   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:07.274239   69333 cri.go:89] found id: ""
	I0927 01:43:07.274268   69333 logs.go:276] 0 containers: []
	W0927 01:43:07.274279   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:07.274286   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:07.274352   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:07.315099   69333 cri.go:89] found id: ""
	I0927 01:43:07.315124   69333 logs.go:276] 0 containers: []
	W0927 01:43:07.315131   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:07.315137   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:07.315190   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:04.020911   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:06.520371   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:06.807124   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:09.306568   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:06.543210   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:09.042166   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:07.356301   69333 cri.go:89] found id: ""
	I0927 01:43:07.356328   69333 logs.go:276] 0 containers: []
	W0927 01:43:07.356339   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:07.356347   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:07.356416   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:07.392204   69333 cri.go:89] found id: ""
	I0927 01:43:07.392232   69333 logs.go:276] 0 containers: []
	W0927 01:43:07.392242   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:07.392255   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:07.392312   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:07.428924   69333 cri.go:89] found id: ""
	I0927 01:43:07.428952   69333 logs.go:276] 0 containers: []
	W0927 01:43:07.428961   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:07.428969   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:07.428981   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:07.502507   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:07.502531   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:07.502545   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:07.584169   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:07.584201   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:07.623413   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:07.623446   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:07.675444   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:07.675480   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:10.190164   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:10.205315   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:10.205395   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:10.244030   69333 cri.go:89] found id: ""
	I0927 01:43:10.244053   69333 logs.go:276] 0 containers: []
	W0927 01:43:10.244063   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:10.244071   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:10.244134   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:10.280081   69333 cri.go:89] found id: ""
	I0927 01:43:10.280108   69333 logs.go:276] 0 containers: []
	W0927 01:43:10.280118   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:10.280125   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:10.280184   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:10.315428   69333 cri.go:89] found id: ""
	I0927 01:43:10.315454   69333 logs.go:276] 0 containers: []
	W0927 01:43:10.315464   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:10.315471   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:10.315531   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:10.352536   69333 cri.go:89] found id: ""
	I0927 01:43:10.352560   69333 logs.go:276] 0 containers: []
	W0927 01:43:10.352567   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:10.352574   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:10.352634   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:10.388846   69333 cri.go:89] found id: ""
	I0927 01:43:10.388870   69333 logs.go:276] 0 containers: []
	W0927 01:43:10.388880   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:10.388887   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:10.388951   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:10.427746   69333 cri.go:89] found id: ""
	I0927 01:43:10.427771   69333 logs.go:276] 0 containers: []
	W0927 01:43:10.427779   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:10.427784   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:10.427839   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:10.473126   69333 cri.go:89] found id: ""
	I0927 01:43:10.473155   69333 logs.go:276] 0 containers: []
	W0927 01:43:10.473166   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:10.473172   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:10.473234   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:10.511925   69333 cri.go:89] found id: ""
	I0927 01:43:10.511954   69333 logs.go:276] 0 containers: []
	W0927 01:43:10.511962   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:10.511971   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:10.511984   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:10.551428   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:10.551459   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:10.603655   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:10.603691   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:10.617232   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:10.617266   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:10.696559   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:10.696585   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:10.696599   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:09.020784   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:11.521429   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:11.307081   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:13.307876   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:11.043819   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:13.543289   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:13.273888   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:13.288271   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:13.288349   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:13.325796   69333 cri.go:89] found id: ""
	I0927 01:43:13.325823   69333 logs.go:276] 0 containers: []
	W0927 01:43:13.325831   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:13.325837   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:13.325893   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:13.360721   69333 cri.go:89] found id: ""
	I0927 01:43:13.360748   69333 logs.go:276] 0 containers: []
	W0927 01:43:13.360756   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:13.360762   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:13.360821   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:13.399722   69333 cri.go:89] found id: ""
	I0927 01:43:13.399749   69333 logs.go:276] 0 containers: []
	W0927 01:43:13.399756   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:13.399762   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:13.399826   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:13.437161   69333 cri.go:89] found id: ""
	I0927 01:43:13.437187   69333 logs.go:276] 0 containers: []
	W0927 01:43:13.437194   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:13.437200   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:13.437260   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:13.474735   69333 cri.go:89] found id: ""
	I0927 01:43:13.474758   69333 logs.go:276] 0 containers: []
	W0927 01:43:13.474766   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:13.474771   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:13.474822   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:13.528726   69333 cri.go:89] found id: ""
	I0927 01:43:13.528754   69333 logs.go:276] 0 containers: []
	W0927 01:43:13.528764   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:13.528771   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:13.528837   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:13.568617   69333 cri.go:89] found id: ""
	I0927 01:43:13.568642   69333 logs.go:276] 0 containers: []
	W0927 01:43:13.568651   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:13.568658   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:13.568726   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:13.605820   69333 cri.go:89] found id: ""
	I0927 01:43:13.605846   69333 logs.go:276] 0 containers: []
	W0927 01:43:13.605857   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:13.605868   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:13.605883   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:13.682586   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:13.682609   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:13.682624   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:13.764487   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:13.764522   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:13.809248   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:13.809280   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:13.861331   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:13.861371   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:16.376981   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:16.391787   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:16.391842   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:16.432731   69333 cri.go:89] found id: ""
	I0927 01:43:16.432758   69333 logs.go:276] 0 containers: []
	W0927 01:43:16.432767   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:16.432775   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:16.432836   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:16.466769   69333 cri.go:89] found id: ""
	I0927 01:43:16.466798   69333 logs.go:276] 0 containers: []
	W0927 01:43:16.466806   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:16.466812   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:16.466860   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:16.501899   69333 cri.go:89] found id: ""
	I0927 01:43:16.501927   69333 logs.go:276] 0 containers: []
	W0927 01:43:16.501940   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:16.501947   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:16.502000   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:16.537356   69333 cri.go:89] found id: ""
	I0927 01:43:16.537383   69333 logs.go:276] 0 containers: []
	W0927 01:43:16.537393   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:16.537401   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:16.537460   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:16.573910   69333 cri.go:89] found id: ""
	I0927 01:43:16.573937   69333 logs.go:276] 0 containers: []
	W0927 01:43:16.573946   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:16.573951   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:16.574003   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:16.617780   69333 cri.go:89] found id: ""
	I0927 01:43:16.617808   69333 logs.go:276] 0 containers: []
	W0927 01:43:16.617818   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:16.617826   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:16.617884   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:16.653262   69333 cri.go:89] found id: ""
	I0927 01:43:16.653311   69333 logs.go:276] 0 containers: []
	W0927 01:43:16.653323   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:16.653331   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:16.653394   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:16.689861   69333 cri.go:89] found id: ""
	I0927 01:43:16.689889   69333 logs.go:276] 0 containers: []
	W0927 01:43:16.689898   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:16.689909   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:16.689922   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:16.765961   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:16.765986   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:16.766001   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:16.845195   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:16.845227   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:16.889159   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:16.889188   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:16.945523   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:16.945558   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:13.522444   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:16.021202   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:15.808665   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:18.307884   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:16.043071   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:18.541709   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:19.461132   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:19.475148   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:19.475234   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:19.511487   69333 cri.go:89] found id: ""
	I0927 01:43:19.511509   69333 logs.go:276] 0 containers: []
	W0927 01:43:19.511517   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:19.511522   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:19.511580   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:19.545726   69333 cri.go:89] found id: ""
	I0927 01:43:19.545750   69333 logs.go:276] 0 containers: []
	W0927 01:43:19.545756   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:19.545763   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:19.545830   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:19.581287   69333 cri.go:89] found id: ""
	I0927 01:43:19.581310   69333 logs.go:276] 0 containers: []
	W0927 01:43:19.581318   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:19.581323   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:19.581376   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:19.614179   69333 cri.go:89] found id: ""
	I0927 01:43:19.614205   69333 logs.go:276] 0 containers: []
	W0927 01:43:19.614215   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:19.614223   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:19.614286   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:19.648276   69333 cri.go:89] found id: ""
	I0927 01:43:19.648307   69333 logs.go:276] 0 containers: []
	W0927 01:43:19.648318   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:19.648330   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:19.648390   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:19.683051   69333 cri.go:89] found id: ""
	I0927 01:43:19.683083   69333 logs.go:276] 0 containers: []
	W0927 01:43:19.683094   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:19.683114   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:19.683166   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:19.716664   69333 cri.go:89] found id: ""
	I0927 01:43:19.716686   69333 logs.go:276] 0 containers: []
	W0927 01:43:19.716694   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:19.716700   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:19.716745   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:19.758948   69333 cri.go:89] found id: ""
	I0927 01:43:19.758969   69333 logs.go:276] 0 containers: []
	W0927 01:43:19.758976   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:19.758984   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:19.758996   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:19.797751   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:19.797777   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:19.853605   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:19.853635   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:19.867785   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:19.867815   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:19.950323   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:19.950350   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:19.950363   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:18.520291   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:20.520845   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:22.520886   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:20.808171   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:22.812047   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:21.043160   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:23.546462   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:22.555421   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:22.570013   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:22.570077   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:22.605007   69333 cri.go:89] found id: ""
	I0927 01:43:22.605034   69333 logs.go:276] 0 containers: []
	W0927 01:43:22.605055   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:22.605062   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:22.605122   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:22.640350   69333 cri.go:89] found id: ""
	I0927 01:43:22.640381   69333 logs.go:276] 0 containers: []
	W0927 01:43:22.640391   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:22.640406   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:22.640482   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:22.677464   69333 cri.go:89] found id: ""
	I0927 01:43:22.677489   69333 logs.go:276] 0 containers: []
	W0927 01:43:22.677499   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:22.677506   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:22.677567   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:22.721978   69333 cri.go:89] found id: ""
	I0927 01:43:22.722017   69333 logs.go:276] 0 containers: []
	W0927 01:43:22.722025   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:22.722032   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:22.722093   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:22.757694   69333 cri.go:89] found id: ""
	I0927 01:43:22.757720   69333 logs.go:276] 0 containers: []
	W0927 01:43:22.757729   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:22.757733   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:22.757781   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:22.793872   69333 cri.go:89] found id: ""
	I0927 01:43:22.793903   69333 logs.go:276] 0 containers: []
	W0927 01:43:22.793912   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:22.793920   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:22.793971   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:22.830620   69333 cri.go:89] found id: ""
	I0927 01:43:22.830652   69333 logs.go:276] 0 containers: []
	W0927 01:43:22.830662   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:22.830669   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:22.830732   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:22.867341   69333 cri.go:89] found id: ""
	I0927 01:43:22.867370   69333 logs.go:276] 0 containers: []
	W0927 01:43:22.867381   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:22.867392   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:22.867405   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:22.939592   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:22.939630   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:22.939654   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:23.016407   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:23.016447   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:23.058490   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:23.058522   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:23.109527   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:23.109560   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:25.626109   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:25.645254   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:25.645343   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:25.707951   69333 cri.go:89] found id: ""
	I0927 01:43:25.707979   69333 logs.go:276] 0 containers: []
	W0927 01:43:25.707989   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:25.707997   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:25.708057   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:25.771210   69333 cri.go:89] found id: ""
	I0927 01:43:25.771234   69333 logs.go:276] 0 containers: []
	W0927 01:43:25.771242   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:25.771248   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:25.771295   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:25.808206   69333 cri.go:89] found id: ""
	I0927 01:43:25.808235   69333 logs.go:276] 0 containers: []
	W0927 01:43:25.808245   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:25.808252   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:25.808311   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:25.842236   69333 cri.go:89] found id: ""
	I0927 01:43:25.842265   69333 logs.go:276] 0 containers: []
	W0927 01:43:25.842275   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:25.842283   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:25.842328   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:25.879220   69333 cri.go:89] found id: ""
	I0927 01:43:25.879248   69333 logs.go:276] 0 containers: []
	W0927 01:43:25.879256   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:25.879262   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:25.879333   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:25.913491   69333 cri.go:89] found id: ""
	I0927 01:43:25.913522   69333 logs.go:276] 0 containers: []
	W0927 01:43:25.913532   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:25.913537   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:25.913595   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:25.946867   69333 cri.go:89] found id: ""
	I0927 01:43:25.946887   69333 logs.go:276] 0 containers: []
	W0927 01:43:25.946894   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:25.946899   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:25.946943   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:25.983792   69333 cri.go:89] found id: ""
	I0927 01:43:25.983813   69333 logs.go:276] 0 containers: []
	W0927 01:43:25.983820   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:25.983828   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:25.983838   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:26.030169   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:26.030195   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:26.083242   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:26.083276   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:26.097109   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:26.097136   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:26.168675   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:26.168703   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:26.168715   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:24.521923   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:27.020053   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:25.308150   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:27.308307   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:29.308818   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:26.042436   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:28.541895   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:30.542444   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:28.750349   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:28.765211   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:28.765269   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:28.804760   69333 cri.go:89] found id: ""
	I0927 01:43:28.804784   69333 logs.go:276] 0 containers: []
	W0927 01:43:28.804792   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:28.804798   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:28.804865   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:28.842576   69333 cri.go:89] found id: ""
	I0927 01:43:28.842597   69333 logs.go:276] 0 containers: []
	W0927 01:43:28.842604   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:28.842612   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:28.842674   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:28.877498   69333 cri.go:89] found id: ""
	I0927 01:43:28.877529   69333 logs.go:276] 0 containers: []
	W0927 01:43:28.877541   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:28.877553   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:28.877615   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:28.912583   69333 cri.go:89] found id: ""
	I0927 01:43:28.912609   69333 logs.go:276] 0 containers: []
	W0927 01:43:28.912620   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:28.912627   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:28.912689   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:28.947995   69333 cri.go:89] found id: ""
	I0927 01:43:28.948019   69333 logs.go:276] 0 containers: []
	W0927 01:43:28.948030   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:28.948037   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:28.948135   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:28.984445   69333 cri.go:89] found id: ""
	I0927 01:43:28.984470   69333 logs.go:276] 0 containers: []
	W0927 01:43:28.984480   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:28.984488   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:28.984551   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:29.020345   69333 cri.go:89] found id: ""
	I0927 01:43:29.020374   69333 logs.go:276] 0 containers: []
	W0927 01:43:29.020385   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:29.020392   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:29.020451   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:29.056204   69333 cri.go:89] found id: ""
	I0927 01:43:29.056234   69333 logs.go:276] 0 containers: []
	W0927 01:43:29.056245   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:29.056257   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:29.056270   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:29.127936   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:29.127963   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:29.127980   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:29.205933   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:29.205981   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:29.248745   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:29.248777   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:29.302316   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:29.302348   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:31.817566   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:31.831179   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:31.831253   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:31.868480   69333 cri.go:89] found id: ""
	I0927 01:43:31.868507   69333 logs.go:276] 0 containers: []
	W0927 01:43:31.868517   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:31.868528   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:31.868588   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:31.901656   69333 cri.go:89] found id: ""
	I0927 01:43:31.901684   69333 logs.go:276] 0 containers: []
	W0927 01:43:31.901694   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:31.901701   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:31.901761   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:31.937101   69333 cri.go:89] found id: ""
	I0927 01:43:31.937133   69333 logs.go:276] 0 containers: []
	W0927 01:43:31.937145   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:31.937153   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:31.937210   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:31.970724   69333 cri.go:89] found id: ""
	I0927 01:43:31.970750   69333 logs.go:276] 0 containers: []
	W0927 01:43:31.970761   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:31.970768   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:31.970835   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:32.003704   69333 cri.go:89] found id: ""
	I0927 01:43:32.003736   69333 logs.go:276] 0 containers: []
	W0927 01:43:32.003747   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:32.003754   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:32.003813   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:32.038840   69333 cri.go:89] found id: ""
	I0927 01:43:32.038869   69333 logs.go:276] 0 containers: []
	W0927 01:43:32.038879   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:32.038886   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:32.038946   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:32.075506   69333 cri.go:89] found id: ""
	I0927 01:43:32.075534   69333 logs.go:276] 0 containers: []
	W0927 01:43:32.075545   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:32.075552   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:32.075603   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:32.112983   69333 cri.go:89] found id: ""
	I0927 01:43:32.113009   69333 logs.go:276] 0 containers: []
	W0927 01:43:32.113020   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:32.113031   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:32.113046   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:32.168192   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:32.168227   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:32.182702   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:32.182727   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:32.255797   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:32.255824   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:32.255835   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:32.336083   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:32.336115   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:29.022764   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:31.520495   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:31.308851   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:33.807870   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:33.041600   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:35.042193   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:34.880981   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:34.894904   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:34.894976   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:34.933459   69333 cri.go:89] found id: ""
	I0927 01:43:34.933482   69333 logs.go:276] 0 containers: []
	W0927 01:43:34.933490   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:34.933498   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:34.933555   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:34.966893   69333 cri.go:89] found id: ""
	I0927 01:43:34.966917   69333 logs.go:276] 0 containers: []
	W0927 01:43:34.966926   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:34.966933   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:34.966992   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:35.002878   69333 cri.go:89] found id: ""
	I0927 01:43:35.002899   69333 logs.go:276] 0 containers: []
	W0927 01:43:35.002907   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:35.002912   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:35.002970   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:35.039871   69333 cri.go:89] found id: ""
	I0927 01:43:35.039898   69333 logs.go:276] 0 containers: []
	W0927 01:43:35.039908   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:35.039915   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:35.039977   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:35.078229   69333 cri.go:89] found id: ""
	I0927 01:43:35.078255   69333 logs.go:276] 0 containers: []
	W0927 01:43:35.078267   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:35.078274   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:35.078342   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:35.114369   69333 cri.go:89] found id: ""
	I0927 01:43:35.114397   69333 logs.go:276] 0 containers: []
	W0927 01:43:35.114408   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:35.114415   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:35.114475   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:35.148072   69333 cri.go:89] found id: ""
	I0927 01:43:35.148100   69333 logs.go:276] 0 containers: []
	W0927 01:43:35.148110   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:35.148117   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:35.148188   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:35.184020   69333 cri.go:89] found id: ""
	I0927 01:43:35.184051   69333 logs.go:276] 0 containers: []
	W0927 01:43:35.184062   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:35.184073   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:35.184086   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:35.197332   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:35.197355   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:35.273860   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:35.273889   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:35.273904   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:35.354647   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:35.354680   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:35.392622   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:35.392651   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:33.521889   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:36.020067   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:38.021354   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:35.808365   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:38.307251   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:37.541793   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:40.043418   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:37.943024   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:37.957265   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:37.957329   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:37.991294   69333 cri.go:89] found id: ""
	I0927 01:43:37.991348   69333 logs.go:276] 0 containers: []
	W0927 01:43:37.991362   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:37.991368   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:37.991421   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:38.026960   69333 cri.go:89] found id: ""
	I0927 01:43:38.026981   69333 logs.go:276] 0 containers: []
	W0927 01:43:38.026990   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:38.026998   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:38.027057   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:38.063540   69333 cri.go:89] found id: ""
	I0927 01:43:38.063563   69333 logs.go:276] 0 containers: []
	W0927 01:43:38.063571   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:38.063576   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:38.063627   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:38.099554   69333 cri.go:89] found id: ""
	I0927 01:43:38.099602   69333 logs.go:276] 0 containers: []
	W0927 01:43:38.099613   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:38.099621   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:38.099689   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:38.136576   69333 cri.go:89] found id: ""
	I0927 01:43:38.136604   69333 logs.go:276] 0 containers: []
	W0927 01:43:38.136615   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:38.136623   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:38.136676   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:38.170411   69333 cri.go:89] found id: ""
	I0927 01:43:38.170441   69333 logs.go:276] 0 containers: []
	W0927 01:43:38.170452   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:38.170458   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:38.170512   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:38.211902   69333 cri.go:89] found id: ""
	I0927 01:43:38.211934   69333 logs.go:276] 0 containers: []
	W0927 01:43:38.211945   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:38.211951   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:38.212007   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:38.247850   69333 cri.go:89] found id: ""
	I0927 01:43:38.247875   69333 logs.go:276] 0 containers: []
	W0927 01:43:38.247885   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:38.247895   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:38.247913   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:38.329353   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:38.329384   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:38.369114   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:38.369148   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:38.420578   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:38.420613   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:38.434019   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:38.434050   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:38.517921   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:41.018609   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:41.032308   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:41.032370   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:41.068491   69333 cri.go:89] found id: ""
	I0927 01:43:41.068518   69333 logs.go:276] 0 containers: []
	W0927 01:43:41.068529   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:41.068536   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:41.068597   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:41.106527   69333 cri.go:89] found id: ""
	I0927 01:43:41.106555   69333 logs.go:276] 0 containers: []
	W0927 01:43:41.106565   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:41.106571   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:41.106634   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:41.142846   69333 cri.go:89] found id: ""
	I0927 01:43:41.142870   69333 logs.go:276] 0 containers: []
	W0927 01:43:41.142880   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:41.142887   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:41.142949   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:41.187499   69333 cri.go:89] found id: ""
	I0927 01:43:41.187525   69333 logs.go:276] 0 containers: []
	W0927 01:43:41.187536   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:41.187544   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:41.187606   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:41.226040   69333 cri.go:89] found id: ""
	I0927 01:43:41.226063   69333 logs.go:276] 0 containers: []
	W0927 01:43:41.226070   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:41.226076   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:41.226153   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:41.261399   69333 cri.go:89] found id: ""
	I0927 01:43:41.261429   69333 logs.go:276] 0 containers: []
	W0927 01:43:41.261440   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:41.261446   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:41.261493   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:41.300709   69333 cri.go:89] found id: ""
	I0927 01:43:41.300730   69333 logs.go:276] 0 containers: []
	W0927 01:43:41.300737   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:41.300743   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:41.300799   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:41.335725   69333 cri.go:89] found id: ""
	I0927 01:43:41.335751   69333 logs.go:276] 0 containers: []
	W0927 01:43:41.335759   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:41.335767   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:41.335776   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:41.387756   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:41.387788   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:41.401717   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:41.401743   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:41.479524   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:41.479548   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:41.479562   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:41.559926   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:41.559959   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:40.520642   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:42.521344   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:40.307769   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:42.807328   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:42.541384   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:44.548925   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:44.107615   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:44.122628   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:44.122690   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:44.163496   69333 cri.go:89] found id: ""
	I0927 01:43:44.163521   69333 logs.go:276] 0 containers: []
	W0927 01:43:44.163529   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:44.163541   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:44.163588   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:44.203488   69333 cri.go:89] found id: ""
	I0927 01:43:44.203519   69333 logs.go:276] 0 containers: []
	W0927 01:43:44.203529   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:44.203535   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:44.203600   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:44.238111   69333 cri.go:89] found id: ""
	I0927 01:43:44.238141   69333 logs.go:276] 0 containers: []
	W0927 01:43:44.238148   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:44.238154   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:44.238221   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:44.272954   69333 cri.go:89] found id: ""
	I0927 01:43:44.272981   69333 logs.go:276] 0 containers: []
	W0927 01:43:44.272991   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:44.272998   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:44.273057   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:44.309700   69333 cri.go:89] found id: ""
	I0927 01:43:44.309719   69333 logs.go:276] 0 containers: []
	W0927 01:43:44.309726   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:44.309731   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:44.309776   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:44.344532   69333 cri.go:89] found id: ""
	I0927 01:43:44.344563   69333 logs.go:276] 0 containers: []
	W0927 01:43:44.344573   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:44.344580   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:44.344641   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:44.379354   69333 cri.go:89] found id: ""
	I0927 01:43:44.379380   69333 logs.go:276] 0 containers: []
	W0927 01:43:44.379391   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:44.379399   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:44.379461   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:44.415297   69333 cri.go:89] found id: ""
	I0927 01:43:44.415344   69333 logs.go:276] 0 containers: []
	W0927 01:43:44.415356   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:44.415366   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:44.415381   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:44.468570   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:44.468602   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:44.483419   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:44.483453   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:44.560718   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:44.560737   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:44.560753   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:44.641130   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:44.641173   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:47.188520   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:47.202189   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:47.202262   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:47.243051   69333 cri.go:89] found id: ""
	I0927 01:43:47.243075   69333 logs.go:276] 0 containers: []
	W0927 01:43:47.243083   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:47.243089   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:47.243155   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:47.280071   69333 cri.go:89] found id: ""
	I0927 01:43:47.280094   69333 logs.go:276] 0 containers: []
	W0927 01:43:47.280104   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:47.280111   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:47.280170   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:47.318458   69333 cri.go:89] found id: ""
	I0927 01:43:47.318482   69333 logs.go:276] 0 containers: []
	W0927 01:43:47.318492   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:47.318499   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:47.318551   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:45.023799   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:47.522945   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:45.307910   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:47.309781   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:49.807329   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:47.041371   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:49.042307   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:47.352891   69333 cri.go:89] found id: ""
	I0927 01:43:47.352916   69333 logs.go:276] 0 containers: []
	W0927 01:43:47.352926   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:47.352933   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:47.352997   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:47.387534   69333 cri.go:89] found id: ""
	I0927 01:43:47.387560   69333 logs.go:276] 0 containers: []
	W0927 01:43:47.387569   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:47.387578   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:47.387646   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:47.422221   69333 cri.go:89] found id: ""
	I0927 01:43:47.422254   69333 logs.go:276] 0 containers: []
	W0927 01:43:47.422265   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:47.422273   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:47.422330   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:47.459624   69333 cri.go:89] found id: ""
	I0927 01:43:47.459645   69333 logs.go:276] 0 containers: []
	W0927 01:43:47.459653   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:47.459659   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:47.459706   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:47.494322   69333 cri.go:89] found id: ""
	I0927 01:43:47.494347   69333 logs.go:276] 0 containers: []
	W0927 01:43:47.494355   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:47.494363   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:47.494375   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:47.508031   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:47.508056   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:47.583920   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:47.583952   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:47.583968   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:47.665533   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:47.665568   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:47.708423   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:47.708455   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:50.261602   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:50.275548   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:50.275607   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:50.311583   69333 cri.go:89] found id: ""
	I0927 01:43:50.311610   69333 logs.go:276] 0 containers: []
	W0927 01:43:50.311620   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:50.311627   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:50.311687   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:50.347686   69333 cri.go:89] found id: ""
	I0927 01:43:50.347709   69333 logs.go:276] 0 containers: []
	W0927 01:43:50.347721   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:50.347729   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:50.347778   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:50.386627   69333 cri.go:89] found id: ""
	I0927 01:43:50.386654   69333 logs.go:276] 0 containers: []
	W0927 01:43:50.386663   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:50.386669   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:50.386719   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:50.421512   69333 cri.go:89] found id: ""
	I0927 01:43:50.421538   69333 logs.go:276] 0 containers: []
	W0927 01:43:50.421547   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:50.421552   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:50.421603   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:50.461849   69333 cri.go:89] found id: ""
	I0927 01:43:50.461872   69333 logs.go:276] 0 containers: []
	W0927 01:43:50.461880   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:50.461885   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:50.461941   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:50.496517   69333 cri.go:89] found id: ""
	I0927 01:43:50.496540   69333 logs.go:276] 0 containers: []
	W0927 01:43:50.496548   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:50.496554   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:50.496600   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:50.532595   69333 cri.go:89] found id: ""
	I0927 01:43:50.532619   69333 logs.go:276] 0 containers: []
	W0927 01:43:50.532630   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:50.532638   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:50.532687   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:50.573213   69333 cri.go:89] found id: ""
	I0927 01:43:50.573241   69333 logs.go:276] 0 containers: []
	W0927 01:43:50.573252   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:50.573262   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:50.573275   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:50.625600   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:50.625633   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:50.639512   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:50.639535   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:50.708393   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:50.708415   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:50.708436   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:50.789812   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:50.789845   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:50.020837   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:52.021314   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:51.807713   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:54.308918   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:51.541348   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:53.542994   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:53.335858   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:53.349369   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:53.349441   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:53.386922   69333 cri.go:89] found id: ""
	I0927 01:43:53.386947   69333 logs.go:276] 0 containers: []
	W0927 01:43:53.386955   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:53.386961   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:53.387007   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:53.423614   69333 cri.go:89] found id: ""
	I0927 01:43:53.423640   69333 logs.go:276] 0 containers: []
	W0927 01:43:53.423651   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:53.423658   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:53.423721   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:53.463245   69333 cri.go:89] found id: ""
	I0927 01:43:53.463265   69333 logs.go:276] 0 containers: []
	W0927 01:43:53.463273   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:53.463280   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:53.463344   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:53.502093   69333 cri.go:89] found id: ""
	I0927 01:43:53.502123   69333 logs.go:276] 0 containers: []
	W0927 01:43:53.502133   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:53.502140   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:53.502196   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:53.538616   69333 cri.go:89] found id: ""
	I0927 01:43:53.538641   69333 logs.go:276] 0 containers: []
	W0927 01:43:53.538652   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:53.538659   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:53.538716   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:53.578580   69333 cri.go:89] found id: ""
	I0927 01:43:53.578609   69333 logs.go:276] 0 containers: []
	W0927 01:43:53.578617   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:53.578623   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:53.578685   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:53.615240   69333 cri.go:89] found id: ""
	I0927 01:43:53.615266   69333 logs.go:276] 0 containers: []
	W0927 01:43:53.615275   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:53.615282   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:53.615356   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:53.650987   69333 cri.go:89] found id: ""
	I0927 01:43:53.651011   69333 logs.go:276] 0 containers: []
	W0927 01:43:53.651019   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:53.651028   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:53.651038   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:53.664817   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:53.664841   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:53.737875   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:53.737894   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:53.737909   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:53.827293   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:53.827345   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:53.867157   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:53.867188   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:56.423435   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:56.437837   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:56.437912   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:56.480328   69333 cri.go:89] found id: ""
	I0927 01:43:56.480349   69333 logs.go:276] 0 containers: []
	W0927 01:43:56.480357   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:56.480364   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:56.480427   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:56.520627   69333 cri.go:89] found id: ""
	I0927 01:43:56.520651   69333 logs.go:276] 0 containers: []
	W0927 01:43:56.520660   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:56.520667   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:56.520726   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:56.561527   69333 cri.go:89] found id: ""
	I0927 01:43:56.561555   69333 logs.go:276] 0 containers: []
	W0927 01:43:56.561567   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:56.561574   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:56.561634   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:56.598751   69333 cri.go:89] found id: ""
	I0927 01:43:56.598783   69333 logs.go:276] 0 containers: []
	W0927 01:43:56.598794   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:56.598801   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:56.598861   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:56.634378   69333 cri.go:89] found id: ""
	I0927 01:43:56.634410   69333 logs.go:276] 0 containers: []
	W0927 01:43:56.634422   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:56.634429   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:56.634489   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:56.669819   69333 cri.go:89] found id: ""
	I0927 01:43:56.669852   69333 logs.go:276] 0 containers: []
	W0927 01:43:56.669863   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:56.669877   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:56.669929   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:56.703715   69333 cri.go:89] found id: ""
	I0927 01:43:56.703740   69333 logs.go:276] 0 containers: []
	W0927 01:43:56.703750   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:56.703757   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:56.703820   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:56.737208   69333 cri.go:89] found id: ""
	I0927 01:43:56.737234   69333 logs.go:276] 0 containers: []
	W0927 01:43:56.737245   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:56.737255   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:56.737269   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:56.749933   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:56.749960   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:56.822331   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:56.822353   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:56.822369   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:56.904415   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:56.904454   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:56.947108   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:56.947136   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:54.521004   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:56.521281   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:56.807935   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:58.808046   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:56.041831   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:58.042496   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:00.542924   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:59.500580   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:59.523807   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:59.523888   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:59.562931   69333 cri.go:89] found id: ""
	I0927 01:43:59.562955   69333 logs.go:276] 0 containers: []
	W0927 01:43:59.562963   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:59.562968   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:59.563013   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:59.599321   69333 cri.go:89] found id: ""
	I0927 01:43:59.599348   69333 logs.go:276] 0 containers: []
	W0927 01:43:59.599358   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:59.599363   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:59.599418   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:59.634404   69333 cri.go:89] found id: ""
	I0927 01:43:59.634431   69333 logs.go:276] 0 containers: []
	W0927 01:43:59.634441   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:59.634448   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:59.634498   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:59.672022   69333 cri.go:89] found id: ""
	I0927 01:43:59.672052   69333 logs.go:276] 0 containers: []
	W0927 01:43:59.672066   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:59.672074   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:59.672134   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:59.704617   69333 cri.go:89] found id: ""
	I0927 01:43:59.704647   69333 logs.go:276] 0 containers: []
	W0927 01:43:59.704657   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:59.704664   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:59.704712   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:59.740479   69333 cri.go:89] found id: ""
	I0927 01:43:59.740504   69333 logs.go:276] 0 containers: []
	W0927 01:43:59.740512   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:59.740517   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:59.740579   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:59.777123   69333 cri.go:89] found id: ""
	I0927 01:43:59.777155   69333 logs.go:276] 0 containers: []
	W0927 01:43:59.777166   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:59.777174   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:59.777234   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:59.817780   69333 cri.go:89] found id: ""
	I0927 01:43:59.817803   69333 logs.go:276] 0 containers: []
	W0927 01:43:59.817825   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:59.817841   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:59.817856   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:59.831252   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:59.831278   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:59.901912   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:59.901936   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:59.901949   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:59.983001   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:59.983034   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:00.030989   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:00.031020   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:59.020139   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:01.020925   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:01.306853   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:03.308075   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:03.042494   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:05.043814   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:02.583949   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:02.596723   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:02.596798   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:02.630927   69333 cri.go:89] found id: ""
	I0927 01:44:02.630953   69333 logs.go:276] 0 containers: []
	W0927 01:44:02.630962   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:02.630967   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:02.631012   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:02.664156   69333 cri.go:89] found id: ""
	I0927 01:44:02.664186   69333 logs.go:276] 0 containers: []
	W0927 01:44:02.664198   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:02.664205   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:02.664259   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:02.698823   69333 cri.go:89] found id: ""
	I0927 01:44:02.698847   69333 logs.go:276] 0 containers: []
	W0927 01:44:02.698860   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:02.698865   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:02.698913   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:02.736114   69333 cri.go:89] found id: ""
	I0927 01:44:02.736142   69333 logs.go:276] 0 containers: []
	W0927 01:44:02.736154   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:02.736161   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:02.736221   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:02.769739   69333 cri.go:89] found id: ""
	I0927 01:44:02.769763   69333 logs.go:276] 0 containers: []
	W0927 01:44:02.769771   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:02.769785   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:02.769844   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:02.804798   69333 cri.go:89] found id: ""
	I0927 01:44:02.804871   69333 logs.go:276] 0 containers: []
	W0927 01:44:02.804887   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:02.804898   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:02.804958   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:02.841197   69333 cri.go:89] found id: ""
	I0927 01:44:02.841224   69333 logs.go:276] 0 containers: []
	W0927 01:44:02.841236   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:02.841243   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:02.841301   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:02.881278   69333 cri.go:89] found id: ""
	I0927 01:44:02.881310   69333 logs.go:276] 0 containers: []
	W0927 01:44:02.881321   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:02.881331   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:02.881345   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:02.935149   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:02.935183   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:02.950245   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:02.950273   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:03.020241   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:03.020263   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:03.020277   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:03.104467   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:03.104503   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:05.643070   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:05.656656   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:05.656716   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:05.694022   69333 cri.go:89] found id: ""
	I0927 01:44:05.694045   69333 logs.go:276] 0 containers: []
	W0927 01:44:05.694053   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:05.694059   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:05.694123   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:05.728575   69333 cri.go:89] found id: ""
	I0927 01:44:05.728600   69333 logs.go:276] 0 containers: []
	W0927 01:44:05.728607   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:05.728613   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:05.728667   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:05.768546   69333 cri.go:89] found id: ""
	I0927 01:44:05.768572   69333 logs.go:276] 0 containers: []
	W0927 01:44:05.768583   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:05.768590   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:05.768652   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:05.809504   69333 cri.go:89] found id: ""
	I0927 01:44:05.809527   69333 logs.go:276] 0 containers: []
	W0927 01:44:05.809536   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:05.809543   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:05.809600   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:05.846387   69333 cri.go:89] found id: ""
	I0927 01:44:05.846415   69333 logs.go:276] 0 containers: []
	W0927 01:44:05.846422   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:05.846428   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:05.846479   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:05.879579   69333 cri.go:89] found id: ""
	I0927 01:44:05.879608   69333 logs.go:276] 0 containers: []
	W0927 01:44:05.879619   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:05.879626   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:05.879684   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:05.928932   69333 cri.go:89] found id: ""
	I0927 01:44:05.928961   69333 logs.go:276] 0 containers: []
	W0927 01:44:05.928970   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:05.928978   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:05.929037   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:05.986463   69333 cri.go:89] found id: ""
	I0927 01:44:05.986490   69333 logs.go:276] 0 containers: []
	W0927 01:44:05.986499   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:05.986507   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:05.986521   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:06.039984   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:06.040011   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:06.053025   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:06.053051   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:06.127277   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:06.127316   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:06.127330   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:06.201473   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:06.201504   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:03.520539   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:06.021584   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:05.808474   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:08.307407   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:07.542959   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:10.042223   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:08.739339   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:08.753354   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:08.753418   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:08.788513   69333 cri.go:89] found id: ""
	I0927 01:44:08.788544   69333 logs.go:276] 0 containers: []
	W0927 01:44:08.788556   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:08.788563   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:08.788648   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:08.824615   69333 cri.go:89] found id: ""
	I0927 01:44:08.824642   69333 logs.go:276] 0 containers: []
	W0927 01:44:08.824653   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:08.824661   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:08.824724   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:08.858327   69333 cri.go:89] found id: ""
	I0927 01:44:08.858354   69333 logs.go:276] 0 containers: []
	W0927 01:44:08.858365   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:08.858372   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:08.858430   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:08.896140   69333 cri.go:89] found id: ""
	I0927 01:44:08.896168   69333 logs.go:276] 0 containers: []
	W0927 01:44:08.896175   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:08.896181   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:08.896229   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:08.931525   69333 cri.go:89] found id: ""
	I0927 01:44:08.931547   69333 logs.go:276] 0 containers: []
	W0927 01:44:08.931554   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:08.931560   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:08.931618   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:08.970224   69333 cri.go:89] found id: ""
	I0927 01:44:08.970246   69333 logs.go:276] 0 containers: []
	W0927 01:44:08.970256   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:08.970263   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:08.970331   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:09.007213   69333 cri.go:89] found id: ""
	I0927 01:44:09.007240   69333 logs.go:276] 0 containers: []
	W0927 01:44:09.007248   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:09.007255   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:09.007334   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:09.043078   69333 cri.go:89] found id: ""
	I0927 01:44:09.043111   69333 logs.go:276] 0 containers: []
	W0927 01:44:09.043122   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:09.043132   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:09.043147   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:09.096768   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:09.096801   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:09.110721   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:09.110747   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:09.182966   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:09.182990   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:09.183004   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:09.259497   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:09.259541   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:11.797307   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:11.812141   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:11.812196   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:11.846429   69333 cri.go:89] found id: ""
	I0927 01:44:11.846468   69333 logs.go:276] 0 containers: []
	W0927 01:44:11.846482   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:11.846489   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:11.846598   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:11.885294   69333 cri.go:89] found id: ""
	I0927 01:44:11.885322   69333 logs.go:276] 0 containers: []
	W0927 01:44:11.885333   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:11.885339   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:11.885398   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:11.920856   69333 cri.go:89] found id: ""
	I0927 01:44:11.920884   69333 logs.go:276] 0 containers: []
	W0927 01:44:11.920892   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:11.920898   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:11.920946   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:11.964540   69333 cri.go:89] found id: ""
	I0927 01:44:11.964564   69333 logs.go:276] 0 containers: []
	W0927 01:44:11.964574   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:11.964581   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:11.964634   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:12.000596   69333 cri.go:89] found id: ""
	I0927 01:44:12.000619   69333 logs.go:276] 0 containers: []
	W0927 01:44:12.000629   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:12.000636   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:12.000697   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:12.037773   69333 cri.go:89] found id: ""
	I0927 01:44:12.037808   69333 logs.go:276] 0 containers: []
	W0927 01:44:12.037819   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:12.037831   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:12.037893   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:12.074646   69333 cri.go:89] found id: ""
	I0927 01:44:12.074676   69333 logs.go:276] 0 containers: []
	W0927 01:44:12.074687   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:12.074692   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:12.074740   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:12.111771   69333 cri.go:89] found id: ""
	I0927 01:44:12.111802   69333 logs.go:276] 0 containers: []
	W0927 01:44:12.111813   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:12.111824   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:12.111837   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:12.160938   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:12.160971   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:12.175576   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:12.175605   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:12.245227   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:12.245263   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:12.245278   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:12.325161   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:12.325194   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:08.520111   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:10.520326   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:12.520755   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:10.808039   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:12.808843   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:12.042905   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:14.542272   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:14.867795   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:14.881053   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:14.881130   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:14.915193   69333 cri.go:89] found id: ""
	I0927 01:44:14.915224   69333 logs.go:276] 0 containers: []
	W0927 01:44:14.915234   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:14.915241   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:14.915318   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:14.951758   69333 cri.go:89] found id: ""
	I0927 01:44:14.951789   69333 logs.go:276] 0 containers: []
	W0927 01:44:14.951801   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:14.951808   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:14.951860   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:14.987875   69333 cri.go:89] found id: ""
	I0927 01:44:14.987906   69333 logs.go:276] 0 containers: []
	W0927 01:44:14.987917   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:14.987924   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:14.987985   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:15.025780   69333 cri.go:89] found id: ""
	I0927 01:44:15.025810   69333 logs.go:276] 0 containers: []
	W0927 01:44:15.025820   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:15.025828   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:15.025884   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:15.062135   69333 cri.go:89] found id: ""
	I0927 01:44:15.062157   69333 logs.go:276] 0 containers: []
	W0927 01:44:15.062165   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:15.062172   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:15.062225   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:15.097090   69333 cri.go:89] found id: ""
	I0927 01:44:15.097112   69333 logs.go:276] 0 containers: []
	W0927 01:44:15.097119   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:15.097126   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:15.097170   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:15.130528   69333 cri.go:89] found id: ""
	I0927 01:44:15.130552   69333 logs.go:276] 0 containers: []
	W0927 01:44:15.130561   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:15.130569   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:15.130615   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:15.165422   69333 cri.go:89] found id: ""
	I0927 01:44:15.165450   69333 logs.go:276] 0 containers: []
	W0927 01:44:15.165457   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:15.165465   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:15.165474   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:15.214612   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:15.214651   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:15.230294   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:15.230318   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:15.303339   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:15.303362   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:15.303375   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:15.382046   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:15.382081   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:14.520833   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:17.021034   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:15.308397   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:17.808221   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:16.542334   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:18.543785   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:17.923331   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:17.937693   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:17.937765   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:17.972677   69333 cri.go:89] found id: ""
	I0927 01:44:17.972699   69333 logs.go:276] 0 containers: []
	W0927 01:44:17.972707   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:17.972714   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:17.972764   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:18.004818   69333 cri.go:89] found id: ""
	I0927 01:44:18.004846   69333 logs.go:276] 0 containers: []
	W0927 01:44:18.004854   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:18.004860   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:18.004907   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:18.044693   69333 cri.go:89] found id: ""
	I0927 01:44:18.044716   69333 logs.go:276] 0 containers: []
	W0927 01:44:18.044723   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:18.044728   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:18.044772   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:18.079205   69333 cri.go:89] found id: ""
	I0927 01:44:18.079235   69333 logs.go:276] 0 containers: []
	W0927 01:44:18.079244   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:18.079249   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:18.079299   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:18.115272   69333 cri.go:89] found id: ""
	I0927 01:44:18.115322   69333 logs.go:276] 0 containers: []
	W0927 01:44:18.115335   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:18.115343   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:18.115412   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:18.150165   69333 cri.go:89] found id: ""
	I0927 01:44:18.150195   69333 logs.go:276] 0 containers: []
	W0927 01:44:18.150206   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:18.150213   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:18.150275   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:18.184971   69333 cri.go:89] found id: ""
	I0927 01:44:18.184999   69333 logs.go:276] 0 containers: []
	W0927 01:44:18.185009   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:18.185016   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:18.185083   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:18.219955   69333 cri.go:89] found id: ""
	I0927 01:44:18.219985   69333 logs.go:276] 0 containers: []
	W0927 01:44:18.219997   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:18.220008   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:18.220020   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:18.269713   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:18.269748   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:18.285224   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:18.285251   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:18.364887   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:18.364912   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:18.364927   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:18.450667   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:18.450706   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:20.991648   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:21.006472   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:21.006529   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:21.043455   69333 cri.go:89] found id: ""
	I0927 01:44:21.043476   69333 logs.go:276] 0 containers: []
	W0927 01:44:21.043486   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:21.043493   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:21.043549   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:21.080365   69333 cri.go:89] found id: ""
	I0927 01:44:21.080391   69333 logs.go:276] 0 containers: []
	W0927 01:44:21.080399   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:21.080405   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:21.080449   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:21.117576   69333 cri.go:89] found id: ""
	I0927 01:44:21.117624   69333 logs.go:276] 0 containers: []
	W0927 01:44:21.117636   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:21.117642   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:21.117703   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:21.154538   69333 cri.go:89] found id: ""
	I0927 01:44:21.154564   69333 logs.go:276] 0 containers: []
	W0927 01:44:21.154576   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:21.154584   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:21.154638   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:21.190046   69333 cri.go:89] found id: ""
	I0927 01:44:21.190070   69333 logs.go:276] 0 containers: []
	W0927 01:44:21.190080   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:21.190086   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:21.190147   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:21.226383   69333 cri.go:89] found id: ""
	I0927 01:44:21.226407   69333 logs.go:276] 0 containers: []
	W0927 01:44:21.226417   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:21.226424   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:21.226485   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:21.262090   69333 cri.go:89] found id: ""
	I0927 01:44:21.262113   69333 logs.go:276] 0 containers: []
	W0927 01:44:21.262124   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:21.262132   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:21.262188   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:21.297675   69333 cri.go:89] found id: ""
	I0927 01:44:21.297697   69333 logs.go:276] 0 containers: []
	W0927 01:44:21.297706   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:21.297716   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:21.297728   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:21.349668   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:21.349705   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:21.364608   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:21.364635   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:21.432570   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:21.432596   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:21.432612   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:21.507616   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:21.507661   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:19.520792   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:21.521341   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:20.307600   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:22.308557   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:24.807578   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:21.041736   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:23.041809   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:25.540974   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:24.054212   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:24.067954   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:24.068014   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:24.107017   69333 cri.go:89] found id: ""
	I0927 01:44:24.107045   69333 logs.go:276] 0 containers: []
	W0927 01:44:24.107056   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:24.107063   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:24.107124   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:24.144373   69333 cri.go:89] found id: ""
	I0927 01:44:24.144398   69333 logs.go:276] 0 containers: []
	W0927 01:44:24.144406   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:24.144411   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:24.144473   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:24.180010   69333 cri.go:89] found id: ""
	I0927 01:44:24.180038   69333 logs.go:276] 0 containers: []
	W0927 01:44:24.180048   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:24.180056   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:24.180118   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:24.214387   69333 cri.go:89] found id: ""
	I0927 01:44:24.214413   69333 logs.go:276] 0 containers: []
	W0927 01:44:24.214421   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:24.214426   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:24.214472   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:24.252597   69333 cri.go:89] found id: ""
	I0927 01:44:24.252623   69333 logs.go:276] 0 containers: []
	W0927 01:44:24.252631   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:24.252643   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:24.252705   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:24.292044   69333 cri.go:89] found id: ""
	I0927 01:44:24.292072   69333 logs.go:276] 0 containers: []
	W0927 01:44:24.292082   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:24.292089   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:24.292158   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:24.329899   69333 cri.go:89] found id: ""
	I0927 01:44:24.329924   69333 logs.go:276] 0 containers: []
	W0927 01:44:24.329934   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:24.329940   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:24.329998   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:24.367964   69333 cri.go:89] found id: ""
	I0927 01:44:24.367989   69333 logs.go:276] 0 containers: []
	W0927 01:44:24.368000   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:24.368010   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:24.368025   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:24.384151   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:24.384184   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:24.456916   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:24.456940   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:24.456958   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:24.539362   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:24.539399   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:24.578384   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:24.578411   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:27.132700   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:27.146218   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:27.146294   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:27.180958   69333 cri.go:89] found id: ""
	I0927 01:44:27.180984   69333 logs.go:276] 0 containers: []
	W0927 01:44:27.180992   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:27.180997   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:27.181043   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:27.215213   69333 cri.go:89] found id: ""
	I0927 01:44:27.215236   69333 logs.go:276] 0 containers: []
	W0927 01:44:27.215243   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:27.215249   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:27.215293   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:27.258192   69333 cri.go:89] found id: ""
	I0927 01:44:27.258216   69333 logs.go:276] 0 containers: []
	W0927 01:44:27.258226   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:27.258233   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:27.258289   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:27.292717   69333 cri.go:89] found id: ""
	I0927 01:44:27.292742   69333 logs.go:276] 0 containers: []
	W0927 01:44:27.292753   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:27.292760   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:27.292818   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:27.328038   69333 cri.go:89] found id: ""
	I0927 01:44:27.328066   69333 logs.go:276] 0 containers: []
	W0927 01:44:27.328076   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:27.328083   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:27.328152   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:24.021885   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:26.520726   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:27.308923   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:29.807825   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:27.542683   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:30.042293   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:27.363513   69333 cri.go:89] found id: ""
	I0927 01:44:27.363539   69333 logs.go:276] 0 containers: []
	W0927 01:44:27.363548   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:27.363553   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:27.363610   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:27.402201   69333 cri.go:89] found id: ""
	I0927 01:44:27.402223   69333 logs.go:276] 0 containers: []
	W0927 01:44:27.402231   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:27.402237   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:27.402290   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:27.436952   69333 cri.go:89] found id: ""
	I0927 01:44:27.436979   69333 logs.go:276] 0 containers: []
	W0927 01:44:27.436987   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:27.436995   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:27.437009   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:27.487908   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:27.487938   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:27.502170   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:27.502199   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:27.583909   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:27.583931   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:27.583943   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:27.660248   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:27.660286   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:30.201211   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:30.214276   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:30.214350   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:30.252445   69333 cri.go:89] found id: ""
	I0927 01:44:30.252474   69333 logs.go:276] 0 containers: []
	W0927 01:44:30.252484   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:30.252490   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:30.252538   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:30.287574   69333 cri.go:89] found id: ""
	I0927 01:44:30.287603   69333 logs.go:276] 0 containers: []
	W0927 01:44:30.287614   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:30.287621   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:30.287693   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:30.324674   69333 cri.go:89] found id: ""
	I0927 01:44:30.324699   69333 logs.go:276] 0 containers: []
	W0927 01:44:30.324711   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:30.324718   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:30.324779   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:30.360493   69333 cri.go:89] found id: ""
	I0927 01:44:30.360521   69333 logs.go:276] 0 containers: []
	W0927 01:44:30.360531   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:30.360539   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:30.360640   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:30.396219   69333 cri.go:89] found id: ""
	I0927 01:44:30.396252   69333 logs.go:276] 0 containers: []
	W0927 01:44:30.396263   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:30.396270   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:30.396328   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:30.431524   69333 cri.go:89] found id: ""
	I0927 01:44:30.431546   69333 logs.go:276] 0 containers: []
	W0927 01:44:30.431558   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:30.431564   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:30.431607   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:30.465887   69333 cri.go:89] found id: ""
	I0927 01:44:30.465915   69333 logs.go:276] 0 containers: []
	W0927 01:44:30.465926   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:30.465933   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:30.466000   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:30.501364   69333 cri.go:89] found id: ""
	I0927 01:44:30.501391   69333 logs.go:276] 0 containers: []
	W0927 01:44:30.501402   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:30.501411   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:30.501425   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:30.556344   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:30.556377   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:30.572619   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:30.572649   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:30.645996   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:30.646020   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:30.646032   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:30.737458   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:30.737531   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:28.521312   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:30.521421   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:33.020699   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:31.807949   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:33.809414   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:32.045244   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:34.542035   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:33.284306   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:33.298164   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:33.298224   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:33.334599   69333 cri.go:89] found id: ""
	I0927 01:44:33.334625   69333 logs.go:276] 0 containers: []
	W0927 01:44:33.334634   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:33.334654   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:33.334718   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:33.369006   69333 cri.go:89] found id: ""
	I0927 01:44:33.369034   69333 logs.go:276] 0 containers: []
	W0927 01:44:33.369044   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:33.369051   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:33.369119   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:33.407875   69333 cri.go:89] found id: ""
	I0927 01:44:33.407904   69333 logs.go:276] 0 containers: []
	W0927 01:44:33.407912   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:33.407918   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:33.407974   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:33.441048   69333 cri.go:89] found id: ""
	I0927 01:44:33.441083   69333 logs.go:276] 0 containers: []
	W0927 01:44:33.441094   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:33.441101   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:33.441156   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:33.478458   69333 cri.go:89] found id: ""
	I0927 01:44:33.478503   69333 logs.go:276] 0 containers: []
	W0927 01:44:33.478515   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:33.478522   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:33.478586   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:33.513756   69333 cri.go:89] found id: ""
	I0927 01:44:33.513784   69333 logs.go:276] 0 containers: []
	W0927 01:44:33.513795   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:33.513802   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:33.513862   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:33.554351   69333 cri.go:89] found id: ""
	I0927 01:44:33.554392   69333 logs.go:276] 0 containers: []
	W0927 01:44:33.554403   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:33.554410   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:33.554472   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:33.588484   69333 cri.go:89] found id: ""
	I0927 01:44:33.588512   69333 logs.go:276] 0 containers: []
	W0927 01:44:33.588533   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:33.588544   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:33.588559   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:33.665735   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:33.665775   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:33.704654   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:33.704687   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:33.755444   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:33.755475   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:33.770069   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:33.770095   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:33.841531   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:36.341963   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:36.355219   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:36.355294   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:36.395149   69333 cri.go:89] found id: ""
	I0927 01:44:36.395185   69333 logs.go:276] 0 containers: []
	W0927 01:44:36.395196   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:36.395203   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:36.395262   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:36.434620   69333 cri.go:89] found id: ""
	I0927 01:44:36.434649   69333 logs.go:276] 0 containers: []
	W0927 01:44:36.434661   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:36.434667   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:36.434729   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:36.468328   69333 cri.go:89] found id: ""
	I0927 01:44:36.468349   69333 logs.go:276] 0 containers: []
	W0927 01:44:36.468357   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:36.468362   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:36.468427   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:36.506386   69333 cri.go:89] found id: ""
	I0927 01:44:36.506413   69333 logs.go:276] 0 containers: []
	W0927 01:44:36.506421   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:36.506427   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:36.506482   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:36.546583   69333 cri.go:89] found id: ""
	I0927 01:44:36.546607   69333 logs.go:276] 0 containers: []
	W0927 01:44:36.546614   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:36.546620   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:36.546665   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:36.581694   69333 cri.go:89] found id: ""
	I0927 01:44:36.581721   69333 logs.go:276] 0 containers: []
	W0927 01:44:36.581730   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:36.581737   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:36.581782   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:36.617775   69333 cri.go:89] found id: ""
	I0927 01:44:36.617799   69333 logs.go:276] 0 containers: []
	W0927 01:44:36.617807   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:36.617813   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:36.617877   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:36.654443   69333 cri.go:89] found id: ""
	I0927 01:44:36.654470   69333 logs.go:276] 0 containers: []
	W0927 01:44:36.654478   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:36.654486   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:36.654496   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:36.705787   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:36.705817   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:36.720643   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:36.720677   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:36.800037   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:36.800061   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:36.800091   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:36.886845   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:36.886884   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:35.023634   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:37.520794   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:36.307516   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:38.307899   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:37.041620   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:39.044257   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:39.429349   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:39.442899   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:39.442973   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:39.481752   69333 cri.go:89] found id: ""
	I0927 01:44:39.481782   69333 logs.go:276] 0 containers: []
	W0927 01:44:39.481793   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:39.481799   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:39.481858   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:39.516074   69333 cri.go:89] found id: ""
	I0927 01:44:39.516103   69333 logs.go:276] 0 containers: []
	W0927 01:44:39.516114   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:39.516130   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:39.516188   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:39.563351   69333 cri.go:89] found id: ""
	I0927 01:44:39.563375   69333 logs.go:276] 0 containers: []
	W0927 01:44:39.563386   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:39.563392   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:39.563455   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:39.601417   69333 cri.go:89] found id: ""
	I0927 01:44:39.601445   69333 logs.go:276] 0 containers: []
	W0927 01:44:39.601455   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:39.601469   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:39.601529   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:39.634537   69333 cri.go:89] found id: ""
	I0927 01:44:39.634565   69333 logs.go:276] 0 containers: []
	W0927 01:44:39.634576   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:39.634582   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:39.634642   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:39.668910   69333 cri.go:89] found id: ""
	I0927 01:44:39.668937   69333 logs.go:276] 0 containers: []
	W0927 01:44:39.668948   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:39.668955   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:39.669013   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:39.701992   69333 cri.go:89] found id: ""
	I0927 01:44:39.702014   69333 logs.go:276] 0 containers: []
	W0927 01:44:39.702021   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:39.702027   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:39.702074   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:39.741579   69333 cri.go:89] found id: ""
	I0927 01:44:39.741601   69333 logs.go:276] 0 containers: []
	W0927 01:44:39.741610   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:39.741618   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:39.741627   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:39.806476   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:39.806510   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:39.820228   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:39.820255   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:39.893137   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:39.893167   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:39.893181   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:39.974477   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:39.974514   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:40.021226   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:42.521217   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:40.309154   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:42.808724   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:41.542308   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:44.042015   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:42.517449   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:42.532200   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:42.532266   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:42.568872   69333 cri.go:89] found id: ""
	I0927 01:44:42.568901   69333 logs.go:276] 0 containers: []
	W0927 01:44:42.568911   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:42.568919   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:42.568980   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:42.605069   69333 cri.go:89] found id: ""
	I0927 01:44:42.605220   69333 logs.go:276] 0 containers: []
	W0927 01:44:42.605251   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:42.605261   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:42.605335   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:42.641637   69333 cri.go:89] found id: ""
	I0927 01:44:42.641665   69333 logs.go:276] 0 containers: []
	W0927 01:44:42.641673   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:42.641680   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:42.641742   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:42.677333   69333 cri.go:89] found id: ""
	I0927 01:44:42.677361   69333 logs.go:276] 0 containers: []
	W0927 01:44:42.677376   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:42.677382   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:42.677439   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:42.712456   69333 cri.go:89] found id: ""
	I0927 01:44:42.712484   69333 logs.go:276] 0 containers: []
	W0927 01:44:42.712495   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:42.712501   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:42.712565   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:42.745109   69333 cri.go:89] found id: ""
	I0927 01:44:42.745140   69333 logs.go:276] 0 containers: []
	W0927 01:44:42.745150   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:42.745157   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:42.745226   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:42.779427   69333 cri.go:89] found id: ""
	I0927 01:44:42.779449   69333 logs.go:276] 0 containers: []
	W0927 01:44:42.779457   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:42.779462   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:42.779508   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:42.823920   69333 cri.go:89] found id: ""
	I0927 01:44:42.823946   69333 logs.go:276] 0 containers: []
	W0927 01:44:42.823954   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:42.823963   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:42.823972   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:42.881345   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:42.881380   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:42.896076   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:42.896100   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:42.971775   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:42.971796   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:42.971809   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:43.054461   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:43.054494   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:45.596681   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:45.610817   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:45.610882   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:45.647628   69333 cri.go:89] found id: ""
	I0927 01:44:45.647654   69333 logs.go:276] 0 containers: []
	W0927 01:44:45.647662   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:45.647668   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:45.647715   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:45.685480   69333 cri.go:89] found id: ""
	I0927 01:44:45.685507   69333 logs.go:276] 0 containers: []
	W0927 01:44:45.685514   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:45.685520   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:45.685573   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:45.721601   69333 cri.go:89] found id: ""
	I0927 01:44:45.721624   69333 logs.go:276] 0 containers: []
	W0927 01:44:45.721632   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:45.721637   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:45.721700   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:45.756763   69333 cri.go:89] found id: ""
	I0927 01:44:45.756788   69333 logs.go:276] 0 containers: []
	W0927 01:44:45.756796   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:45.756802   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:45.756858   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:45.792891   69333 cri.go:89] found id: ""
	I0927 01:44:45.792917   69333 logs.go:276] 0 containers: []
	W0927 01:44:45.792927   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:45.792934   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:45.792996   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:45.828716   69333 cri.go:89] found id: ""
	I0927 01:44:45.828739   69333 logs.go:276] 0 containers: []
	W0927 01:44:45.828747   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:45.828753   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:45.828807   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:45.868813   69333 cri.go:89] found id: ""
	I0927 01:44:45.868840   69333 logs.go:276] 0 containers: []
	W0927 01:44:45.868848   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:45.868853   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:45.868905   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:45.907281   69333 cri.go:89] found id: ""
	I0927 01:44:45.907327   69333 logs.go:276] 0 containers: []
	W0927 01:44:45.907341   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:45.907352   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:45.907371   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:45.958539   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:45.958574   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:45.972540   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:45.972567   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:46.046083   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:46.046124   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:46.046141   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:46.124313   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:46.124349   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:45.021100   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:47.021435   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:45.307916   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:47.807187   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:49.809212   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:46.042143   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:48.541984   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:50.542678   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:48.673701   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:48.687673   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:48.687744   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:48.722269   69333 cri.go:89] found id: ""
	I0927 01:44:48.722291   69333 logs.go:276] 0 containers: []
	W0927 01:44:48.722302   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:48.722308   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:48.722370   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:48.758297   69333 cri.go:89] found id: ""
	I0927 01:44:48.758318   69333 logs.go:276] 0 containers: []
	W0927 01:44:48.758326   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:48.758331   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:48.758377   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:48.792706   69333 cri.go:89] found id: ""
	I0927 01:44:48.792730   69333 logs.go:276] 0 containers: []
	W0927 01:44:48.792738   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:48.792744   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:48.792792   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:48.827015   69333 cri.go:89] found id: ""
	I0927 01:44:48.827035   69333 logs.go:276] 0 containers: []
	W0927 01:44:48.827047   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:48.827052   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:48.827095   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:48.862538   69333 cri.go:89] found id: ""
	I0927 01:44:48.862564   69333 logs.go:276] 0 containers: []
	W0927 01:44:48.862572   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:48.862577   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:48.862632   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:48.896118   69333 cri.go:89] found id: ""
	I0927 01:44:48.896144   69333 logs.go:276] 0 containers: []
	W0927 01:44:48.896154   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:48.896166   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:48.896225   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:48.932483   69333 cri.go:89] found id: ""
	I0927 01:44:48.932511   69333 logs.go:276] 0 containers: []
	W0927 01:44:48.932519   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:48.932524   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:48.932576   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:48.971864   69333 cri.go:89] found id: ""
	I0927 01:44:48.971890   69333 logs.go:276] 0 containers: []
	W0927 01:44:48.971898   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:48.971906   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:48.971919   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:49.028163   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:49.028199   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:49.042780   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:49.042805   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:49.116454   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:49.116476   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:49.116491   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:49.196048   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:49.196084   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:51.735108   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:51.749191   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:51.749258   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:51.784776   69333 cri.go:89] found id: ""
	I0927 01:44:51.784804   69333 logs.go:276] 0 containers: []
	W0927 01:44:51.784815   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:51.784823   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:51.784880   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:51.822807   69333 cri.go:89] found id: ""
	I0927 01:44:51.822836   69333 logs.go:276] 0 containers: []
	W0927 01:44:51.822847   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:51.822854   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:51.822912   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:51.858700   69333 cri.go:89] found id: ""
	I0927 01:44:51.858726   69333 logs.go:276] 0 containers: []
	W0927 01:44:51.858737   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:51.858744   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:51.858812   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:51.894945   69333 cri.go:89] found id: ""
	I0927 01:44:51.894968   69333 logs.go:276] 0 containers: []
	W0927 01:44:51.894975   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:51.894980   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:51.895025   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:51.939475   69333 cri.go:89] found id: ""
	I0927 01:44:51.939503   69333 logs.go:276] 0 containers: []
	W0927 01:44:51.939518   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:51.939524   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:51.939569   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:51.982626   69333 cri.go:89] found id: ""
	I0927 01:44:51.982654   69333 logs.go:276] 0 containers: []
	W0927 01:44:51.982665   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:51.982673   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:51.982731   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:52.050446   69333 cri.go:89] found id: ""
	I0927 01:44:52.050473   69333 logs.go:276] 0 containers: []
	W0927 01:44:52.050483   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:52.050490   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:52.050549   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:52.092637   69333 cri.go:89] found id: ""
	I0927 01:44:52.092666   69333 logs.go:276] 0 containers: []
	W0927 01:44:52.092676   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:52.092686   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:52.092700   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:52.132135   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:52.132165   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:52.186537   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:52.186572   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:52.200001   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:52.200027   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:52.282068   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:52.282093   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:52.282108   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:49.521281   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:52.021229   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:52.308560   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:54.309001   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:53.042624   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:55.043212   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:54.866565   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:54.880400   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:54.880460   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:54.918963   69333 cri.go:89] found id: ""
	I0927 01:44:54.919004   69333 logs.go:276] 0 containers: []
	W0927 01:44:54.919027   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:54.919036   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:54.919107   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:54.959918   69333 cri.go:89] found id: ""
	I0927 01:44:54.959947   69333 logs.go:276] 0 containers: []
	W0927 01:44:54.959958   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:54.959965   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:54.960026   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:55.004348   69333 cri.go:89] found id: ""
	I0927 01:44:55.004370   69333 logs.go:276] 0 containers: []
	W0927 01:44:55.004378   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:55.004392   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:55.004446   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:55.045190   69333 cri.go:89] found id: ""
	I0927 01:44:55.045213   69333 logs.go:276] 0 containers: []
	W0927 01:44:55.045220   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:55.045225   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:55.045278   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:55.087638   69333 cri.go:89] found id: ""
	I0927 01:44:55.087663   69333 logs.go:276] 0 containers: []
	W0927 01:44:55.087671   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:55.087677   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:55.087739   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:55.126899   69333 cri.go:89] found id: ""
	I0927 01:44:55.126932   69333 logs.go:276] 0 containers: []
	W0927 01:44:55.126943   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:55.126951   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:55.127012   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:55.167593   69333 cri.go:89] found id: ""
	I0927 01:44:55.167624   69333 logs.go:276] 0 containers: []
	W0927 01:44:55.167635   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:55.167643   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:55.167706   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:55.208362   69333 cri.go:89] found id: ""
	I0927 01:44:55.208388   69333 logs.go:276] 0 containers: []
	W0927 01:44:55.208399   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:55.208409   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:55.208424   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:55.247198   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:55.247221   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:55.299408   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:55.299443   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:55.315745   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:55.315775   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:55.387499   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:55.387523   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:55.387539   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
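	(The block above is one complete diagnostic pass by process 69333: for each control-plane component it runs `sudo crictl ps -a --quiet --name=<component>`, finds no container, then falls back to gathering kubelet, dmesg, describe-nodes and CRI-O logs. A minimal Go sketch of that probe pattern follows, purely to make the repeated cycle easier to follow; the component names and "No container was found matching" wording are taken from the log, everything else is assumed and is not minikube's implementation.)

	// Illustrative sketch only: per-component container probe via crictl,
	// approximating the pattern visible in the log above. Not minikube's code.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// components mirrors the names probed in the log.
	var components = []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}

	func main() {
		for _, name := range components {
			// crictl prints one container ID per line; empty output means none found.
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			ids := strings.Fields(string(out))
			if err != nil || len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", name)
				continue
			}
			fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
		}
	}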
	I0927 01:44:54.021502   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:56.520627   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:56.807487   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:58.807902   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:57.541517   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:59.542233   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:57.968863   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:57.987921   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:57.987988   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:58.036770   69333 cri.go:89] found id: ""
	I0927 01:44:58.036802   69333 logs.go:276] 0 containers: []
	W0927 01:44:58.036813   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:58.036824   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:58.036878   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:58.072461   69333 cri.go:89] found id: ""
	I0927 01:44:58.072484   69333 logs.go:276] 0 containers: []
	W0927 01:44:58.072492   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:58.072499   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:58.072551   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:58.107247   69333 cri.go:89] found id: ""
	I0927 01:44:58.107273   69333 logs.go:276] 0 containers: []
	W0927 01:44:58.107284   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:58.107290   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:58.107365   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:58.149050   69333 cri.go:89] found id: ""
	I0927 01:44:58.149080   69333 logs.go:276] 0 containers: []
	W0927 01:44:58.149091   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:58.149099   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:58.149162   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:58.188167   69333 cri.go:89] found id: ""
	I0927 01:44:58.188198   69333 logs.go:276] 0 containers: []
	W0927 01:44:58.188209   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:58.188217   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:58.188283   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:58.224291   69333 cri.go:89] found id: ""
	I0927 01:44:58.224319   69333 logs.go:276] 0 containers: []
	W0927 01:44:58.224329   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:58.224337   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:58.224401   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:58.258786   69333 cri.go:89] found id: ""
	I0927 01:44:58.258813   69333 logs.go:276] 0 containers: []
	W0927 01:44:58.258822   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:58.258828   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:58.258885   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:58.298310   69333 cri.go:89] found id: ""
	I0927 01:44:58.298338   69333 logs.go:276] 0 containers: []
	W0927 01:44:58.298349   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:58.298359   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:58.298373   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:58.340299   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:58.340330   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:58.395097   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:58.395130   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:58.410653   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:58.410677   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:58.479437   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:58.479459   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:58.479470   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:45:01.057473   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:45:01.071746   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:45:01.071818   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:45:01.112652   69333 cri.go:89] found id: ""
	I0927 01:45:01.112676   69333 logs.go:276] 0 containers: []
	W0927 01:45:01.112684   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:45:01.112690   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:45:01.112735   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:45:01.146071   69333 cri.go:89] found id: ""
	I0927 01:45:01.146100   69333 logs.go:276] 0 containers: []
	W0927 01:45:01.146111   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:45:01.146119   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:45:01.146188   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:45:01.188640   69333 cri.go:89] found id: ""
	I0927 01:45:01.188663   69333 logs.go:276] 0 containers: []
	W0927 01:45:01.188673   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:45:01.188679   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:45:01.188743   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:45:01.225024   69333 cri.go:89] found id: ""
	I0927 01:45:01.225050   69333 logs.go:276] 0 containers: []
	W0927 01:45:01.225060   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:45:01.225067   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:45:01.225128   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:45:01.262459   69333 cri.go:89] found id: ""
	I0927 01:45:01.262487   69333 logs.go:276] 0 containers: []
	W0927 01:45:01.262498   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:45:01.262505   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:45:01.262560   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:45:01.298567   69333 cri.go:89] found id: ""
	I0927 01:45:01.298588   69333 logs.go:276] 0 containers: []
	W0927 01:45:01.298597   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:45:01.298603   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:45:01.298647   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:45:01.335051   69333 cri.go:89] found id: ""
	I0927 01:45:01.335084   69333 logs.go:276] 0 containers: []
	W0927 01:45:01.335094   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:45:01.335100   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:45:01.335149   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:45:01.371187   69333 cri.go:89] found id: ""
	I0927 01:45:01.371217   69333 logs.go:276] 0 containers: []
	W0927 01:45:01.371227   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:45:01.371237   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:45:01.371252   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:45:01.385163   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:45:01.385189   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:45:01.457256   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:45:01.457298   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:45:01.457313   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:45:01.537788   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:45:01.537819   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:45:01.580645   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:45:01.580672   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:58.521367   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:01.020826   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:03.021213   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:00.808021   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:03.307242   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:01.542831   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:04.042010   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:04.131877   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:45:04.145175   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:45:04.145248   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:45:04.179508   69333 cri.go:89] found id: ""
	I0927 01:45:04.179535   69333 logs.go:276] 0 containers: []
	W0927 01:45:04.179545   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:45:04.179552   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:45:04.179612   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:45:04.213497   69333 cri.go:89] found id: ""
	I0927 01:45:04.213533   69333 logs.go:276] 0 containers: []
	W0927 01:45:04.213544   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:45:04.213551   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:45:04.213606   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:45:04.249708   69333 cri.go:89] found id: ""
	I0927 01:45:04.249737   69333 logs.go:276] 0 containers: []
	W0927 01:45:04.249747   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:45:04.249754   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:45:04.249824   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:45:04.288283   69333 cri.go:89] found id: ""
	I0927 01:45:04.288306   69333 logs.go:276] 0 containers: []
	W0927 01:45:04.288314   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:45:04.288319   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:45:04.288368   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:45:04.325515   69333 cri.go:89] found id: ""
	I0927 01:45:04.325539   69333 logs.go:276] 0 containers: []
	W0927 01:45:04.325549   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:45:04.325560   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:45:04.325618   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:45:04.363485   69333 cri.go:89] found id: ""
	I0927 01:45:04.363511   69333 logs.go:276] 0 containers: []
	W0927 01:45:04.363521   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:45:04.363528   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:45:04.363586   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:45:04.398834   69333 cri.go:89] found id: ""
	I0927 01:45:04.398863   69333 logs.go:276] 0 containers: []
	W0927 01:45:04.398875   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:45:04.398882   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:45:04.398948   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:45:04.433408   69333 cri.go:89] found id: ""
	I0927 01:45:04.433435   69333 logs.go:276] 0 containers: []
	W0927 01:45:04.433443   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:45:04.433451   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:45:04.433461   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:45:04.485354   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:45:04.485392   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:45:04.499007   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:45:04.499031   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:45:04.569376   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:45:04.569405   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:45:04.569420   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:45:04.646614   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:45:04.646651   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:45:07.186491   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:45:07.200510   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:45:07.200575   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:45:07.239519   69333 cri.go:89] found id: ""
	I0927 01:45:07.239542   69333 logs.go:276] 0 containers: []
	W0927 01:45:07.239553   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:45:07.239562   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:45:07.239751   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:45:07.276820   69333 cri.go:89] found id: ""
	I0927 01:45:07.276854   69333 logs.go:276] 0 containers: []
	W0927 01:45:07.276863   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:45:07.276870   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:45:07.276932   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:45:07.312580   69333 cri.go:89] found id: ""
	I0927 01:45:07.312604   69333 logs.go:276] 0 containers: []
	W0927 01:45:07.312613   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:45:07.312619   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:45:07.312676   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:45:05.520930   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:08.020001   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:05.807739   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:07.807914   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:06.042390   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:08.542149   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:10.542438   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:07.350763   69333 cri.go:89] found id: ""
	I0927 01:45:07.350788   69333 logs.go:276] 0 containers: []
	W0927 01:45:07.350799   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:45:07.350806   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:45:07.350861   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:45:07.385347   69333 cri.go:89] found id: ""
	I0927 01:45:07.385376   69333 logs.go:276] 0 containers: []
	W0927 01:45:07.385383   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:45:07.385389   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:45:07.385439   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:45:07.420665   69333 cri.go:89] found id: ""
	I0927 01:45:07.420696   69333 logs.go:276] 0 containers: []
	W0927 01:45:07.420708   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:45:07.420718   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:45:07.420768   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:45:07.453707   69333 cri.go:89] found id: ""
	I0927 01:45:07.453737   69333 logs.go:276] 0 containers: []
	W0927 01:45:07.453746   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:45:07.453752   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:45:07.453806   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:45:07.489467   69333 cri.go:89] found id: ""
	I0927 01:45:07.489497   69333 logs.go:276] 0 containers: []
	W0927 01:45:07.489508   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:45:07.489520   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:45:07.489531   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:45:07.569464   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:45:07.569496   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:45:07.609123   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:45:07.609160   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:45:07.659556   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:45:07.659590   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:45:07.673163   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:45:07.673191   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:45:07.751340   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:45:10.252511   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:45:10.266651   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:45:10.266706   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:45:10.304131   69333 cri.go:89] found id: ""
	I0927 01:45:10.304160   69333 logs.go:276] 0 containers: []
	W0927 01:45:10.304171   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:45:10.304178   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:45:10.304243   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:45:10.339267   69333 cri.go:89] found id: ""
	I0927 01:45:10.339295   69333 logs.go:276] 0 containers: []
	W0927 01:45:10.339321   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:45:10.339329   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:45:10.339397   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:45:10.376268   69333 cri.go:89] found id: ""
	I0927 01:45:10.376298   69333 logs.go:276] 0 containers: []
	W0927 01:45:10.376308   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:45:10.376319   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:45:10.376380   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:45:10.413944   69333 cri.go:89] found id: ""
	I0927 01:45:10.413970   69333 logs.go:276] 0 containers: []
	W0927 01:45:10.413978   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:45:10.413984   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:45:10.414033   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:45:10.449205   69333 cri.go:89] found id: ""
	I0927 01:45:10.449226   69333 logs.go:276] 0 containers: []
	W0927 01:45:10.449234   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:45:10.449240   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:45:10.449289   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:45:10.487927   69333 cri.go:89] found id: ""
	I0927 01:45:10.487947   69333 logs.go:276] 0 containers: []
	W0927 01:45:10.487955   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:45:10.487961   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:45:10.488018   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:45:10.525062   69333 cri.go:89] found id: ""
	I0927 01:45:10.525085   69333 logs.go:276] 0 containers: []
	W0927 01:45:10.525095   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:45:10.525102   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:45:10.525163   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:45:10.560718   69333 cri.go:89] found id: ""
	I0927 01:45:10.560768   69333 logs.go:276] 0 containers: []
	W0927 01:45:10.560779   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:45:10.560790   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:45:10.560803   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:45:10.641755   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:45:10.641781   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:45:10.641796   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:45:10.719775   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:45:10.719807   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:45:10.761952   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:45:10.761978   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:45:10.815296   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:45:10.815330   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:45:10.023849   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:12.520577   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:10.307967   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:12.807872   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:14.808602   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:13.041469   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:15.036533   69234 pod_ready.go:82] duration metric: took 4m0.000873058s for pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace to be "Ready" ...
	E0927 01:45:15.036568   69234 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace to be "Ready" (will not retry!)
	I0927 01:45:15.036588   69234 pod_ready.go:39] duration metric: took 4m6.530278971s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:45:15.036645   69234 kubeadm.go:597] duration metric: took 4m16.375010355s to restartPrimaryControlPlane
	W0927 01:45:15.036713   69234 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0927 01:45:15.036743   69234 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
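	(At this point process 69234 has spent the full 4m0s waiting for metrics-server-6867b74b74-k8mdf to report Ready, gives up without retrying, and falls back to a full `kubeadm reset`. Below is a minimal sketch of that bounded readiness wait, under the assumption of roughly 2-second polling as the timestamps suggest; `podIsReady` is a hypothetical stand-in for the real API-server status check and none of this is minikube's code.)

	// Illustrative sketch only: a bounded wait loop similar in shape to the
	// pod_ready polling in the log. podIsReady is a hypothetical placeholder.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func podIsReady(namespace, name string) bool {
		// Placeholder: a real implementation would query the API server and
		// inspect the pod's Ready condition.
		return false
	}

	func waitPodReady(namespace, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if podIsReady(namespace, name) {
				return nil
			}
			time.Sleep(2 * time.Second) // log entries arrive roughly every 2s
		}
		return errors.New("timed out waiting " + timeout.String() + " for pod " + name)
	}

	func main() {
		if err := waitPodReady("kube-system", "metrics-server-6867b74b74-k8mdf", 4*time.Minute); err != nil {
			fmt.Println("WaitExtra:", err, "(will not retry!)")
		}
	}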
	I0927 01:45:13.330300   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:45:13.343840   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:45:13.343893   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:45:13.378904   69333 cri.go:89] found id: ""
	I0927 01:45:13.378933   69333 logs.go:276] 0 containers: []
	W0927 01:45:13.378944   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:45:13.378952   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:45:13.379010   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:45:13.417375   69333 cri.go:89] found id: ""
	I0927 01:45:13.417403   69333 logs.go:276] 0 containers: []
	W0927 01:45:13.417415   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:45:13.417422   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:45:13.417482   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:45:13.456265   69333 cri.go:89] found id: ""
	I0927 01:45:13.456291   69333 logs.go:276] 0 containers: []
	W0927 01:45:13.456302   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:45:13.456310   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:45:13.456358   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:45:13.502205   69333 cri.go:89] found id: ""
	I0927 01:45:13.502229   69333 logs.go:276] 0 containers: []
	W0927 01:45:13.502240   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:45:13.502247   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:45:13.502310   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:45:13.543617   69333 cri.go:89] found id: ""
	I0927 01:45:13.543642   69333 logs.go:276] 0 containers: []
	W0927 01:45:13.543652   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:45:13.543660   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:45:13.543723   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:45:13.580268   69333 cri.go:89] found id: ""
	I0927 01:45:13.580295   69333 logs.go:276] 0 containers: []
	W0927 01:45:13.580305   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:45:13.580313   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:45:13.580374   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:45:13.616681   69333 cri.go:89] found id: ""
	I0927 01:45:13.616705   69333 logs.go:276] 0 containers: []
	W0927 01:45:13.616713   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:45:13.616718   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:45:13.616765   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:45:13.653389   69333 cri.go:89] found id: ""
	I0927 01:45:13.653412   69333 logs.go:276] 0 containers: []
	W0927 01:45:13.653420   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:45:13.653430   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:45:13.653442   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:45:13.666511   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:45:13.666534   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:45:13.742282   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:45:13.742300   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:45:13.742311   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:45:13.825800   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:45:13.825836   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:45:13.876345   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:45:13.876376   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:45:16.429245   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:45:16.443286   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:45:16.443366   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:45:16.481601   69333 cri.go:89] found id: ""
	I0927 01:45:16.481626   69333 logs.go:276] 0 containers: []
	W0927 01:45:16.481637   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:45:16.481645   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:45:16.481703   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:45:16.513626   69333 cri.go:89] found id: ""
	I0927 01:45:16.513652   69333 logs.go:276] 0 containers: []
	W0927 01:45:16.513659   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:45:16.513665   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:45:16.513710   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:45:16.552531   69333 cri.go:89] found id: ""
	I0927 01:45:16.552565   69333 logs.go:276] 0 containers: []
	W0927 01:45:16.552574   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:45:16.552580   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:45:16.552636   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:45:16.587252   69333 cri.go:89] found id: ""
	I0927 01:45:16.587282   69333 logs.go:276] 0 containers: []
	W0927 01:45:16.587294   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:45:16.587316   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:45:16.587377   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:45:16.628376   69333 cri.go:89] found id: ""
	I0927 01:45:16.628401   69333 logs.go:276] 0 containers: []
	W0927 01:45:16.628410   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:45:16.628417   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:45:16.628482   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:45:16.669603   69333 cri.go:89] found id: ""
	I0927 01:45:16.669639   69333 logs.go:276] 0 containers: []
	W0927 01:45:16.669651   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:45:16.669658   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:45:16.669731   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:45:16.705581   69333 cri.go:89] found id: ""
	I0927 01:45:16.705607   69333 logs.go:276] 0 containers: []
	W0927 01:45:16.705618   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:45:16.705626   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:45:16.705682   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:45:16.740710   69333 cri.go:89] found id: ""
	I0927 01:45:16.740735   69333 logs.go:276] 0 containers: []
	W0927 01:45:16.740743   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:45:16.740759   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:45:16.740771   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:45:16.791025   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:45:16.791060   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:45:16.805990   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:45:16.806023   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:45:16.878313   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:45:16.878331   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:45:16.878346   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:45:16.966228   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:45:16.966269   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:45:14.521852   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:16.522127   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:17.307853   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:19.308018   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:19.512044   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:45:19.526801   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:45:19.526862   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:45:19.562063   69333 cri.go:89] found id: ""
	I0927 01:45:19.562089   69333 logs.go:276] 0 containers: []
	W0927 01:45:19.562098   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:45:19.562104   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:45:19.562159   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:45:19.598600   69333 cri.go:89] found id: ""
	I0927 01:45:19.598626   69333 logs.go:276] 0 containers: []
	W0927 01:45:19.598634   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:45:19.598642   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:45:19.598712   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:45:19.632544   69333 cri.go:89] found id: ""
	I0927 01:45:19.632564   69333 logs.go:276] 0 containers: []
	W0927 01:45:19.632572   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:45:19.632577   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:45:19.632635   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:45:19.671676   69333 cri.go:89] found id: ""
	I0927 01:45:19.671703   69333 logs.go:276] 0 containers: []
	W0927 01:45:19.671713   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:45:19.671721   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:45:19.671779   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:45:19.710321   69333 cri.go:89] found id: ""
	I0927 01:45:19.710351   69333 logs.go:276] 0 containers: []
	W0927 01:45:19.710362   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:45:19.710370   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:45:19.710438   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:45:19.746252   69333 cri.go:89] found id: ""
	I0927 01:45:19.746277   69333 logs.go:276] 0 containers: []
	W0927 01:45:19.746288   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:45:19.746295   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:45:19.746354   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:45:19.783089   69333 cri.go:89] found id: ""
	I0927 01:45:19.783112   69333 logs.go:276] 0 containers: []
	W0927 01:45:19.783121   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:45:19.783126   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:45:19.783189   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:45:19.821090   69333 cri.go:89] found id: ""
	I0927 01:45:19.821117   69333 logs.go:276] 0 containers: []
	W0927 01:45:19.821126   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:45:19.821134   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:45:19.821145   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:45:19.873539   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:45:19.873575   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:45:19.888446   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:45:19.888471   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:45:19.958009   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:45:19.958034   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:45:19.958050   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:45:20.037552   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:45:20.037587   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:45:19.022216   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:21.520606   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:21.808178   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:23.808273   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:22.579288   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:45:22.592789   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:45:22.592846   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:45:22.628148   69333 cri.go:89] found id: ""
	I0927 01:45:22.628178   69333 logs.go:276] 0 containers: []
	W0927 01:45:22.628186   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:45:22.628193   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:45:22.628240   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:45:22.664162   69333 cri.go:89] found id: ""
	I0927 01:45:22.664186   69333 logs.go:276] 0 containers: []
	W0927 01:45:22.664194   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:45:22.664200   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:45:22.664253   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:45:22.702077   69333 cri.go:89] found id: ""
	I0927 01:45:22.702104   69333 logs.go:276] 0 containers: []
	W0927 01:45:22.702115   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:45:22.702123   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:45:22.702183   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:45:22.739657   69333 cri.go:89] found id: ""
	I0927 01:45:22.739690   69333 logs.go:276] 0 containers: []
	W0927 01:45:22.739700   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:45:22.739708   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:45:22.739773   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:45:22.774109   69333 cri.go:89] found id: ""
	I0927 01:45:22.774137   69333 logs.go:276] 0 containers: []
	W0927 01:45:22.774148   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:45:22.774174   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:45:22.774229   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:45:22.809648   69333 cri.go:89] found id: ""
	I0927 01:45:22.809671   69333 logs.go:276] 0 containers: []
	W0927 01:45:22.809678   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:45:22.809684   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:45:22.809729   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:45:22.842598   69333 cri.go:89] found id: ""
	I0927 01:45:22.842620   69333 logs.go:276] 0 containers: []
	W0927 01:45:22.842627   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:45:22.842632   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:45:22.842677   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:45:22.877336   69333 cri.go:89] found id: ""
	I0927 01:45:22.877364   69333 logs.go:276] 0 containers: []
	W0927 01:45:22.877374   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:45:22.877382   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:45:22.877393   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:45:22.930364   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:45:22.930395   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:45:22.944174   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:45:22.944200   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:45:23.025495   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:45:23.025520   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:45:23.025534   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:45:23.101813   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:45:23.101850   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:45:25.644577   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:45:25.657820   69333 kubeadm.go:597] duration metric: took 4m3.277962916s to restartPrimaryControlPlane
	W0927 01:45:25.657898   69333 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0927 01:45:25.657929   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0927 01:45:26.111439   69333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 01:45:26.128279   69333 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 01:45:26.138354   69333 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 01:45:26.148116   69333 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 01:45:26.148132   69333 kubeadm.go:157] found existing configuration files:
	
	I0927 01:45:26.148170   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 01:45:26.157965   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 01:45:26.158012   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 01:45:26.168349   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 01:45:26.177624   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 01:45:26.177692   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 01:45:26.187584   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 01:45:26.196800   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 01:45:26.196856   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 01:45:26.205894   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 01:45:26.215316   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 01:45:26.215365   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 01:45:26.224989   69333 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0927 01:45:26.299149   69333 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0927 01:45:26.299261   69333 kubeadm.go:310] [preflight] Running pre-flight checks
	I0927 01:45:26.451113   69333 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0927 01:45:26.451282   69333 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0927 01:45:26.451457   69333 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0927 01:45:26.637960   69333 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0927 01:45:26.640682   69333 out.go:235]   - Generating certificates and keys ...
	I0927 01:45:26.640782   69333 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0927 01:45:26.640865   69333 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0927 01:45:26.640972   69333 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0927 01:45:26.641099   69333 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0927 01:45:26.641233   69333 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0927 01:45:26.641317   69333 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0927 01:45:26.641425   69333 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0927 01:45:26.641525   69333 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0927 01:45:26.641633   69333 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0927 01:45:26.641901   69333 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0927 01:45:26.642000   69333 kubeadm.go:310] [certs] Using the existing "sa" key
	I0927 01:45:26.642080   69333 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0927 01:45:26.782585   69333 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0927 01:45:27.008743   69333 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0927 01:45:27.103701   69333 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0927 01:45:27.217999   69333 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0927 01:45:27.238810   69333 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0927 01:45:27.240191   69333 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0927 01:45:27.240240   69333 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0927 01:45:27.375215   69333 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0927 01:45:23.521301   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:26.020002   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:28.021215   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:26.306744   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:28.308577   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:27.376992   69333 out.go:235]   - Booting up control plane ...
	I0927 01:45:27.377123   69333 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0927 01:45:27.386897   69333 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0927 01:45:27.387959   69333 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0927 01:45:27.388954   69333 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0927 01:45:27.392182   69333 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0927 01:45:30.520717   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:33.019981   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:30.808251   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:33.307139   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:35.020640   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:37.520220   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:35.307871   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:37.808604   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:41.262067   69234 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.225299595s)
	I0927 01:45:41.262142   69234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 01:45:41.294256   69234 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 01:45:41.304403   69234 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 01:45:41.314288   69234 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 01:45:41.314310   69234 kubeadm.go:157] found existing configuration files:
	
	I0927 01:45:41.314357   69234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 01:45:41.323280   69234 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 01:45:41.323335   69234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 01:45:41.332637   69234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 01:45:41.341492   69234 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 01:45:41.341552   69234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 01:45:41.352259   69234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 01:45:41.361190   69234 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 01:45:41.361244   69234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 01:45:41.370863   69234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 01:45:41.379674   69234 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 01:45:41.379735   69234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 01:45:41.389169   69234 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0927 01:45:41.434391   69234 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0927 01:45:41.434565   69234 kubeadm.go:310] [preflight] Running pre-flight checks
	I0927 01:45:41.537712   69234 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0927 01:45:41.537813   69234 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0927 01:45:41.537951   69234 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0927 01:45:41.546906   69234 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0927 01:45:41.548799   69234 out.go:235]   - Generating certificates and keys ...
	I0927 01:45:41.548882   69234 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0927 01:45:41.548959   69234 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0927 01:45:41.549049   69234 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0927 01:45:41.549133   69234 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0927 01:45:41.549239   69234 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0927 01:45:41.549328   69234 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0927 01:45:41.549433   69234 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0927 01:45:41.549531   69234 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0927 01:45:41.549619   69234 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0927 01:45:41.549691   69234 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0927 01:45:41.549741   69234 kubeadm.go:310] [certs] Using the existing "sa" key
	I0927 01:45:41.549813   69234 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0927 01:45:41.594579   69234 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0927 01:45:41.703970   69234 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0927 01:45:41.813013   69234 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0927 01:45:41.875564   69234 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0927 01:45:42.025627   69234 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0927 01:45:42.026325   69234 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0927 01:45:42.028784   69234 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0927 01:45:39.521118   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:42.020563   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:40.307764   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:42.307974   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:44.808238   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:42.030464   69234 out.go:235]   - Booting up control plane ...
	I0927 01:45:42.030566   69234 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0927 01:45:42.030674   69234 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0927 01:45:42.031152   69234 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0927 01:45:42.050207   69234 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0927 01:45:42.058709   69234 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0927 01:45:42.058766   69234 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0927 01:45:42.192498   69234 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0927 01:45:42.192628   69234 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0927 01:45:42.694670   69234 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.189114ms
	I0927 01:45:42.694812   69234 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0927 01:45:48.195975   69234 kubeadm.go:310] [api-check] The API server is healthy after 5.501110293s
	I0927 01:45:48.210406   69234 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0927 01:45:48.231678   69234 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0927 01:45:48.257669   69234 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0927 01:45:48.257859   69234 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-245911 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0927 01:45:48.271429   69234 kubeadm.go:310] [bootstrap-token] Using token: bqds0t.3lt1vhl3zjbrkom6
	I0927 01:45:44.021019   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:46.520158   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:48.272667   69234 out.go:235]   - Configuring RBAC rules ...
	I0927 01:45:48.272775   69234 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0927 01:45:48.278773   69234 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0927 01:45:48.290868   69234 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0927 01:45:48.297879   69234 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0927 01:45:48.302011   69234 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0927 01:45:48.306217   69234 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0927 01:45:48.604161   69234 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0927 01:45:49.041505   69234 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0927 01:45:49.604127   69234 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0927 01:45:49.604867   69234 kubeadm.go:310] 
	I0927 01:45:49.604981   69234 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0927 01:45:49.605008   69234 kubeadm.go:310] 
	I0927 01:45:49.605136   69234 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0927 01:45:49.605147   69234 kubeadm.go:310] 
	I0927 01:45:49.605188   69234 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0927 01:45:49.605266   69234 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0927 01:45:49.605363   69234 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0927 01:45:49.605373   69234 kubeadm.go:310] 
	I0927 01:45:49.605446   69234 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0927 01:45:49.605455   69234 kubeadm.go:310] 
	I0927 01:45:49.605524   69234 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0927 01:45:49.605537   69234 kubeadm.go:310] 
	I0927 01:45:49.605612   69234 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0927 01:45:49.605725   69234 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0927 01:45:49.605826   69234 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0927 01:45:49.605836   69234 kubeadm.go:310] 
	I0927 01:45:49.605913   69234 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0927 01:45:49.606010   69234 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0927 01:45:49.606032   69234 kubeadm.go:310] 
	I0927 01:45:49.606130   69234 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token bqds0t.3lt1vhl3zjbrkom6 \
	I0927 01:45:49.606252   69234 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e8f2b64245b2533f2f6907f851e4fd61c8df67374e05cb5be25be73a61920f8e \
	I0927 01:45:49.606276   69234 kubeadm.go:310] 	--control-plane 
	I0927 01:45:49.606282   69234 kubeadm.go:310] 
	I0927 01:45:49.606404   69234 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0927 01:45:49.606421   69234 kubeadm.go:310] 
	I0927 01:45:49.606546   69234 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token bqds0t.3lt1vhl3zjbrkom6 \
	I0927 01:45:49.606692   69234 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e8f2b64245b2533f2f6907f851e4fd61c8df67374e05cb5be25be73a61920f8e 
	I0927 01:45:49.607952   69234 kubeadm.go:310] W0927 01:45:41.410128    2534 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 01:45:49.608322   69234 kubeadm.go:310] W0927 01:45:41.412009    2534 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 01:45:49.608494   69234 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0927 01:45:49.608518   69234 cni.go:84] Creating CNI manager for ""
	I0927 01:45:49.608527   69234 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:45:49.610175   69234 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0927 01:45:47.307006   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:49.307374   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:49.611562   69234 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0927 01:45:49.622683   69234 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0927 01:45:49.642326   69234 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0927 01:45:49.642366   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:45:49.642393   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-245911 minikube.k8s.io/updated_at=2024_09_27T01_45_49_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625 minikube.k8s.io/name=embed-certs-245911 minikube.k8s.io/primary=true
	I0927 01:45:49.677602   69234 ops.go:34] apiserver oom_adj: -16
	I0927 01:45:49.854320   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:45:50.355392   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:45:48.520718   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:50.520908   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:53.020638   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:50.854364   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:45:51.355074   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:45:51.855077   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:45:52.354509   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:45:52.855229   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:45:53.355204   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:45:53.854829   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:45:54.066909   69234 kubeadm.go:1113] duration metric: took 4.424595735s to wait for elevateKubeSystemPrivileges
	I0927 01:45:54.066954   69234 kubeadm.go:394] duration metric: took 4m55.454404762s to StartCluster
	I0927 01:45:54.066978   69234 settings.go:142] acquiring lock: {Name:mk5dca3ab86dd3a71947d9d84c3d32131258c6f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:45:54.067071   69234 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 01:45:54.069732   69234 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/kubeconfig: {Name:mke01ed683bdb96463571316956510763878395f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:45:54.070048   69234 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.158 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 01:45:54.070126   69234 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0927 01:45:54.070235   69234 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-245911"
	I0927 01:45:54.070257   69234 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-245911"
	I0927 01:45:54.070261   69234 addons.go:69] Setting default-storageclass=true in profile "embed-certs-245911"
	I0927 01:45:54.070270   69234 config.go:182] Loaded profile config "embed-certs-245911": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 01:45:54.070270   69234 addons.go:69] Setting metrics-server=true in profile "embed-certs-245911"
	I0927 01:45:54.070286   69234 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-245911"
	I0927 01:45:54.070296   69234 addons.go:234] Setting addon metrics-server=true in "embed-certs-245911"
	W0927 01:45:54.070305   69234 addons.go:243] addon metrics-server should already be in state true
	W0927 01:45:54.070266   69234 addons.go:243] addon storage-provisioner should already be in state true
	I0927 01:45:54.070339   69234 host.go:66] Checking if "embed-certs-245911" exists ...
	I0927 01:45:54.070339   69234 host.go:66] Checking if "embed-certs-245911" exists ...
	I0927 01:45:54.070750   69234 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:45:54.070790   69234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:45:54.070753   69234 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:45:54.070850   69234 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:45:54.070889   69234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:45:54.070936   69234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:45:54.071693   69234 out.go:177] * Verifying Kubernetes components...
	I0927 01:45:54.073034   69234 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:45:54.087559   69234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38159
	I0927 01:45:54.087567   69234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46827
	I0927 01:45:54.088061   69234 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:45:54.088074   69234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37787
	I0927 01:45:54.088183   69234 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:45:54.088412   69234 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:45:54.088551   69234 main.go:141] libmachine: Using API Version  1
	I0927 01:45:54.088573   69234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:45:54.088635   69234 main.go:141] libmachine: Using API Version  1
	I0927 01:45:54.088655   69234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:45:54.088852   69234 main.go:141] libmachine: Using API Version  1
	I0927 01:45:54.088874   69234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:45:54.088929   69234 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:45:54.089023   69234 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:45:54.089131   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetState
	I0927 01:45:54.089193   69234 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:45:54.089585   69234 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:45:54.089610   69234 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:45:54.089627   69234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:45:54.089639   69234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:45:54.092683   69234 addons.go:234] Setting addon default-storageclass=true in "embed-certs-245911"
	W0927 01:45:54.092705   69234 addons.go:243] addon default-storageclass should already be in state true
	I0927 01:45:54.092729   69234 host.go:66] Checking if "embed-certs-245911" exists ...
	I0927 01:45:54.093065   69234 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:45:54.093102   69234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:45:54.106496   69234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40273
	I0927 01:45:54.106952   69234 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:45:54.107486   69234 main.go:141] libmachine: Using API Version  1
	I0927 01:45:54.107513   69234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:45:54.108098   69234 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:45:54.108297   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetState
	I0927 01:45:54.109993   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	I0927 01:45:54.110532   69234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35519
	I0927 01:45:54.111066   69234 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:45:54.111688   69234 main.go:141] libmachine: Using API Version  1
	I0927 01:45:54.111708   69234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:45:54.111909   69234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35983
	I0927 01:45:54.112156   69234 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:45:54.112338   69234 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:45:54.112740   69234 main.go:141] libmachine: Using API Version  1
	I0927 01:45:54.112751   69234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:45:54.112832   69234 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:45:54.112953   69234 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:45:54.112987   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetState
	I0927 01:45:54.113345   69234 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:45:54.113372   69234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:45:54.114353   69234 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 01:45:54.114372   69234 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0927 01:45:54.114392   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:45:54.114596   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	I0927 01:45:54.116175   69234 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0927 01:45:51.806801   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:53.808476   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:54.117315   69234 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0927 01:45:54.117326   69234 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0927 01:45:54.117341   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:45:54.120242   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:45:54.120881   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:45:54.120903   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:45:54.121161   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:45:54.121224   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:45:54.121452   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:45:54.121658   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:45:54.121747   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:45:54.121944   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:45:54.121960   69234 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/embed-certs-245911/id_rsa Username:docker}
	I0927 01:45:54.121677   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:45:54.122386   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:45:54.122518   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:45:54.122695   69234 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/embed-certs-245911/id_rsa Username:docker}
	I0927 01:45:54.135920   69234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37351
	I0927 01:45:54.136247   69234 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:45:54.136682   69234 main.go:141] libmachine: Using API Version  1
	I0927 01:45:54.136696   69234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:45:54.136971   69234 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:45:54.137163   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetState
	I0927 01:45:54.138640   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	I0927 01:45:54.138903   69234 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0927 01:45:54.138919   69234 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0927 01:45:54.138936   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:45:54.141420   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:45:54.141786   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:45:54.141803   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:45:54.141966   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:45:54.142132   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:45:54.142235   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:45:54.142308   69234 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/embed-certs-245911/id_rsa Username:docker}
	I0927 01:45:54.325790   69234 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 01:45:54.375616   69234 node_ready.go:35] waiting up to 6m0s for node "embed-certs-245911" to be "Ready" ...
	I0927 01:45:54.386626   69234 node_ready.go:49] node "embed-certs-245911" has status "Ready":"True"
	I0927 01:45:54.386646   69234 node_ready.go:38] duration metric: took 10.995073ms for node "embed-certs-245911" to be "Ready" ...
	I0927 01:45:54.386654   69234 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:45:54.394605   69234 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-t4mxw" in "kube-system" namespace to be "Ready" ...
	I0927 01:45:54.458245   69234 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 01:45:54.501624   69234 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0927 01:45:54.501655   69234 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0927 01:45:54.508690   69234 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0927 01:45:54.548168   69234 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0927 01:45:54.548194   69234 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0927 01:45:54.615565   69234 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 01:45:54.615591   69234 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0927 01:45:54.655649   69234 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 01:45:55.488749   69234 main.go:141] libmachine: Making call to close driver server
	I0927 01:45:55.488849   69234 main.go:141] libmachine: (embed-certs-245911) Calling .Close
	I0927 01:45:55.488803   69234 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.030519069s)
	I0927 01:45:55.488934   69234 main.go:141] libmachine: Making call to close driver server
	I0927 01:45:55.488942   69234 main.go:141] libmachine: (embed-certs-245911) Calling .Close
	I0927 01:45:55.489266   69234 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:45:55.489282   69234 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:45:55.489290   69234 main.go:141] libmachine: Making call to close driver server
	I0927 01:45:55.489298   69234 main.go:141] libmachine: (embed-certs-245911) Calling .Close
	I0927 01:45:55.489377   69234 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:45:55.489393   69234 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:45:55.489401   69234 main.go:141] libmachine: Making call to close driver server
	I0927 01:45:55.489409   69234 main.go:141] libmachine: (embed-certs-245911) Calling .Close
	I0927 01:45:55.489511   69234 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:45:55.489528   69234 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:45:55.489540   69234 main.go:141] libmachine: (embed-certs-245911) DBG | Closing plugin on server side
	I0927 01:45:55.491047   69234 main.go:141] libmachine: (embed-certs-245911) DBG | Closing plugin on server side
	I0927 01:45:55.491082   69234 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:45:55.491093   69234 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:45:55.535220   69234 main.go:141] libmachine: Making call to close driver server
	I0927 01:45:55.535240   69234 main.go:141] libmachine: (embed-certs-245911) Calling .Close
	I0927 01:45:55.535604   69234 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:45:55.535625   69234 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:45:55.627642   69234 main.go:141] libmachine: Making call to close driver server
	I0927 01:45:55.627663   69234 main.go:141] libmachine: (embed-certs-245911) Calling .Close
	I0927 01:45:55.628020   69234 main.go:141] libmachine: (embed-certs-245911) DBG | Closing plugin on server side
	I0927 01:45:55.628025   69234 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:45:55.628047   69234 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:45:55.628055   69234 main.go:141] libmachine: Making call to close driver server
	I0927 01:45:55.628062   69234 main.go:141] libmachine: (embed-certs-245911) Calling .Close
	I0927 01:45:55.628294   69234 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:45:55.628311   69234 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:45:55.628322   69234 addons.go:475] Verifying addon metrics-server=true in "embed-certs-245911"
	I0927 01:45:55.629802   69234 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0927 01:45:55.022054   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:57.520749   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:56.307903   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:58.807972   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:55.631245   69234 addons.go:510] duration metric: took 1.561128577s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0927 01:45:56.401813   69234 pod_ready.go:103] pod "coredns-7c65d6cfc9-t4mxw" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:58.900688   69234 pod_ready.go:103] pod "coredns-7c65d6cfc9-t4mxw" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:59.521353   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:00.014813   69534 pod_ready.go:82] duration metric: took 4m0.000584515s for pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace to be "Ready" ...
	E0927 01:46:00.014858   69534 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace to be "Ready" (will not retry!)
	I0927 01:46:00.014878   69534 pod_ready.go:39] duration metric: took 4m13.043107791s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:46:00.014903   69534 kubeadm.go:597] duration metric: took 4m20.409702758s to restartPrimaryControlPlane
	W0927 01:46:00.014956   69534 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0927 01:46:00.014980   69534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0927 01:46:00.808408   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:02.808672   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:00.901714   69234 pod_ready.go:103] pod "coredns-7c65d6cfc9-t4mxw" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:02.902242   69234 pod_ready.go:103] pod "coredns-7c65d6cfc9-t4mxw" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:03.401910   69234 pod_ready.go:93] pod "coredns-7c65d6cfc9-t4mxw" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:03.401936   69234 pod_ready.go:82] duration metric: took 9.007296678s for pod "coredns-7c65d6cfc9-t4mxw" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:03.401948   69234 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-zp5f2" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:03.908874   69234 pod_ready.go:93] pod "coredns-7c65d6cfc9-zp5f2" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:03.908896   69234 pod_ready.go:82] duration metric: took 506.941437ms for pod "coredns-7c65d6cfc9-zp5f2" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:03.908918   69234 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:03.914117   69234 pod_ready.go:93] pod "etcd-embed-certs-245911" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:03.914135   69234 pod_ready.go:82] duration metric: took 5.210078ms for pod "etcd-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:03.914142   69234 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:03.918778   69234 pod_ready.go:93] pod "kube-apiserver-embed-certs-245911" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:03.918801   69234 pod_ready.go:82] duration metric: took 4.651828ms for pod "kube-apiserver-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:03.918812   69234 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:03.923979   69234 pod_ready.go:93] pod "kube-controller-manager-embed-certs-245911" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:03.923996   69234 pod_ready.go:82] duration metric: took 5.176348ms for pod "kube-controller-manager-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:03.924004   69234 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5l299" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:04.199586   69234 pod_ready.go:93] pod "kube-proxy-5l299" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:04.199612   69234 pod_ready.go:82] duration metric: took 275.601068ms for pod "kube-proxy-5l299" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:04.199621   69234 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:04.598852   69234 pod_ready.go:93] pod "kube-scheduler-embed-certs-245911" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:04.598880   69234 pod_ready.go:82] duration metric: took 399.251298ms for pod "kube-scheduler-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:04.598890   69234 pod_ready.go:39] duration metric: took 10.212226661s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:46:04.598905   69234 api_server.go:52] waiting for apiserver process to appear ...
	I0927 01:46:04.598962   69234 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:46:04.615194   69234 api_server.go:72] duration metric: took 10.545103977s to wait for apiserver process to appear ...
	I0927 01:46:04.615225   69234 api_server.go:88] waiting for apiserver healthz status ...
	I0927 01:46:04.615248   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:46:04.621164   69234 api_server.go:279] https://192.168.39.158:8443/healthz returned 200:
	ok
	I0927 01:46:04.622001   69234 api_server.go:141] control plane version: v1.31.1
	I0927 01:46:04.622022   69234 api_server.go:131] duration metric: took 6.789717ms to wait for apiserver health ...
	I0927 01:46:04.622032   69234 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 01:46:04.802641   69234 system_pods.go:59] 9 kube-system pods found
	I0927 01:46:04.802674   69234 system_pods.go:61] "coredns-7c65d6cfc9-t4mxw" [b3f9faa4-be80-40bf-9080-363fcbf3f084] Running
	I0927 01:46:04.802681   69234 system_pods.go:61] "coredns-7c65d6cfc9-zp5f2" [0829b4a4-1686-4f22-8368-65e3897604b0] Running
	I0927 01:46:04.802687   69234 system_pods.go:61] "etcd-embed-certs-245911" [8b1eb68b-4d88-4af3-a5df-3a6490d9d376] Running
	I0927 01:46:04.802693   69234 system_pods.go:61] "kube-apiserver-embed-certs-245911" [05ddc1b7-f7a9-4201-8d2e-2eb57d4e6731] Running
	I0927 01:46:04.802699   69234 system_pods.go:61] "kube-controller-manager-embed-certs-245911" [71c7cdfd-5e67-4876-9c00-31fff46c2b37] Running
	I0927 01:46:04.802703   69234 system_pods.go:61] "kube-proxy-5l299" [768ae3f5-2ebd-4db7-aa36-81c4f033d685] Running
	I0927 01:46:04.802708   69234 system_pods.go:61] "kube-scheduler-embed-certs-245911" [4111a186-de42-4004-bcdc-3e445142fca0] Running
	I0927 01:46:04.802717   69234 system_pods.go:61] "metrics-server-6867b74b74-k28wz" [1d369542-c088-4099-aa6f-9d3158f78f25] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 01:46:04.802722   69234 system_pods.go:61] "storage-provisioner" [0c48d125-370c-44a1-9ede-536881b40d57] Running
	I0927 01:46:04.802735   69234 system_pods.go:74] duration metric: took 180.694209ms to wait for pod list to return data ...
	I0927 01:46:04.802747   69234 default_sa.go:34] waiting for default service account to be created ...
	I0927 01:46:04.999578   69234 default_sa.go:45] found service account: "default"
	I0927 01:46:04.999603   69234 default_sa.go:55] duration metric: took 196.845725ms for default service account to be created ...
	I0927 01:46:04.999612   69234 system_pods.go:116] waiting for k8s-apps to be running ...
	I0927 01:46:05.201201   69234 system_pods.go:86] 9 kube-system pods found
	I0927 01:46:05.201228   69234 system_pods.go:89] "coredns-7c65d6cfc9-t4mxw" [b3f9faa4-be80-40bf-9080-363fcbf3f084] Running
	I0927 01:46:05.201233   69234 system_pods.go:89] "coredns-7c65d6cfc9-zp5f2" [0829b4a4-1686-4f22-8368-65e3897604b0] Running
	I0927 01:46:05.201237   69234 system_pods.go:89] "etcd-embed-certs-245911" [8b1eb68b-4d88-4af3-a5df-3a6490d9d376] Running
	I0927 01:46:05.201241   69234 system_pods.go:89] "kube-apiserver-embed-certs-245911" [05ddc1b7-f7a9-4201-8d2e-2eb57d4e6731] Running
	I0927 01:46:05.201244   69234 system_pods.go:89] "kube-controller-manager-embed-certs-245911" [71c7cdfd-5e67-4876-9c00-31fff46c2b37] Running
	I0927 01:46:05.201248   69234 system_pods.go:89] "kube-proxy-5l299" [768ae3f5-2ebd-4db7-aa36-81c4f033d685] Running
	I0927 01:46:05.201251   69234 system_pods.go:89] "kube-scheduler-embed-certs-245911" [4111a186-de42-4004-bcdc-3e445142fca0] Running
	I0927 01:46:05.201256   69234 system_pods.go:89] "metrics-server-6867b74b74-k28wz" [1d369542-c088-4099-aa6f-9d3158f78f25] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 01:46:05.201260   69234 system_pods.go:89] "storage-provisioner" [0c48d125-370c-44a1-9ede-536881b40d57] Running
	I0927 01:46:05.201268   69234 system_pods.go:126] duration metric: took 201.651734ms to wait for k8s-apps to be running ...
	I0927 01:46:05.201275   69234 system_svc.go:44] waiting for kubelet service to be running ....
	I0927 01:46:05.201315   69234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 01:46:05.216216   69234 system_svc.go:56] duration metric: took 14.930697ms WaitForService to wait for kubelet
	I0927 01:46:05.216248   69234 kubeadm.go:582] duration metric: took 11.146166369s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 01:46:05.216271   69234 node_conditions.go:102] verifying NodePressure condition ...
	I0927 01:46:05.400667   69234 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 01:46:05.400695   69234 node_conditions.go:123] node cpu capacity is 2
	I0927 01:46:05.400708   69234 node_conditions.go:105] duration metric: took 184.432904ms to run NodePressure ...
	I0927 01:46:05.400719   69234 start.go:241] waiting for startup goroutines ...
	I0927 01:46:05.400729   69234 start.go:246] waiting for cluster config update ...
	I0927 01:46:05.400743   69234 start.go:255] writing updated cluster config ...
	I0927 01:46:05.401134   69234 ssh_runner.go:195] Run: rm -f paused
	I0927 01:46:05.452606   69234 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0927 01:46:05.454631   69234 out.go:177] * Done! kubectl is now configured to use "embed-certs-245911" cluster and "default" namespace by default
	I0927 01:46:05.307371   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:07.807981   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:07.393548   69333 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0927 01:46:07.394304   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:46:07.394505   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:46:10.307311   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:12.308085   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:14.308664   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:12.395176   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:46:12.395434   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:46:16.807116   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:18.807652   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:21.307348   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:23.807597   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:26.304067   69534 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.289064717s)
	I0927 01:46:26.304150   69534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 01:46:26.341383   69534 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 01:46:26.365985   69534 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 01:46:26.382056   69534 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 01:46:26.382082   69534 kubeadm.go:157] found existing configuration files:
	
	I0927 01:46:26.382133   69534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0927 01:46:26.405820   69534 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 01:46:26.405881   69534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 01:46:26.416355   69534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0927 01:46:26.426710   69534 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 01:46:26.426759   69534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 01:46:26.438110   69534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0927 01:46:26.448631   69534 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 01:46:26.448691   69534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 01:46:26.458453   69534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0927 01:46:26.467677   69534 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 01:46:26.467724   69534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 01:46:26.478333   69534 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0927 01:46:26.528377   69534 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0927 01:46:26.528432   69534 kubeadm.go:310] [preflight] Running pre-flight checks
	I0927 01:46:26.653799   69534 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0927 01:46:26.653904   69534 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0927 01:46:26.654029   69534 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0927 01:46:26.666791   69534 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0927 01:46:22.395858   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:46:22.396073   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:46:26.668660   69534 out.go:235]   - Generating certificates and keys ...
	I0927 01:46:26.668739   69534 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0927 01:46:26.668803   69534 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0927 01:46:26.668918   69534 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0927 01:46:26.669012   69534 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0927 01:46:26.669103   69534 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0927 01:46:26.669178   69534 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0927 01:46:26.669308   69534 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0927 01:46:26.669628   69534 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0927 01:46:26.669868   69534 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0927 01:46:26.670086   69534 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0927 01:46:26.670284   69534 kubeadm.go:310] [certs] Using the existing "sa" key
	I0927 01:46:26.670395   69534 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0927 01:46:26.885345   69534 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0927 01:46:27.061416   69534 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0927 01:46:27.347409   69534 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0927 01:46:27.477340   69534 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0927 01:46:27.607326   69534 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0927 01:46:27.607882   69534 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0927 01:46:27.612459   69534 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0927 01:46:27.614167   69534 out.go:235]   - Booting up control plane ...
	I0927 01:46:27.614285   69534 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0927 01:46:27.614388   69534 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0927 01:46:27.614482   69534 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0927 01:46:27.635734   69534 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0927 01:46:27.642550   69534 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0927 01:46:27.642634   69534 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0927 01:46:27.778616   69534 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0927 01:46:27.778763   69534 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0927 01:46:28.280057   69534 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.328597ms
	I0927 01:46:28.280185   69534 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0927 01:46:25.808311   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:28.307033   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:33.781107   69534 kubeadm.go:310] [api-check] The API server is healthy after 5.501552407s
	I0927 01:46:33.796672   69534 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0927 01:46:33.809900   69534 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0927 01:46:33.845968   69534 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0927 01:46:33.846194   69534 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-368295 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0927 01:46:33.862294   69534 kubeadm.go:310] [bootstrap-token] Using token: qmzafx.lhyo0l65zryygr2x
	I0927 01:46:30.308436   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:32.809032   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:32.809057   68676 pod_ready.go:82] duration metric: took 4m0.007962887s for pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace to be "Ready" ...
	E0927 01:46:32.809066   68676 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0927 01:46:32.809075   68676 pod_ready.go:39] duration metric: took 4m5.043455674s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:46:32.809088   68676 api_server.go:52] waiting for apiserver process to appear ...
	I0927 01:46:32.809115   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:46:32.809175   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:46:32.871610   68676 cri.go:89] found id: "d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef"
	I0927 01:46:32.871629   68676 cri.go:89] found id: ""
	I0927 01:46:32.871636   68676 logs.go:276] 1 containers: [d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef]
	I0927 01:46:32.871682   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:32.878223   68676 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:46:32.878296   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:46:32.925139   68676 cri.go:89] found id: "703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0"
	I0927 01:46:32.925173   68676 cri.go:89] found id: ""
	I0927 01:46:32.925182   68676 logs.go:276] 1 containers: [703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0]
	I0927 01:46:32.925238   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:32.929961   68676 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:46:32.930023   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:46:32.969777   68676 cri.go:89] found id: "5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0"
	I0927 01:46:32.969799   68676 cri.go:89] found id: ""
	I0927 01:46:32.969807   68676 logs.go:276] 1 containers: [5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0]
	I0927 01:46:32.969854   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:32.979003   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:46:32.979088   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:46:33.029458   68676 cri.go:89] found id: "22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05"
	I0927 01:46:33.029532   68676 cri.go:89] found id: ""
	I0927 01:46:33.029546   68676 logs.go:276] 1 containers: [22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05]
	I0927 01:46:33.029609   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:33.036703   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:46:33.036777   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:46:33.085041   68676 cri.go:89] found id: "d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f"
	I0927 01:46:33.085058   68676 cri.go:89] found id: ""
	I0927 01:46:33.085065   68676 logs.go:276] 1 containers: [d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f]
	I0927 01:46:33.085125   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:33.090305   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:46:33.090372   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:46:33.136837   68676 cri.go:89] found id: "56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647"
	I0927 01:46:33.136857   68676 cri.go:89] found id: ""
	I0927 01:46:33.136865   68676 logs.go:276] 1 containers: [56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647]
	I0927 01:46:33.136913   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:33.141483   68676 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:46:33.141543   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:46:33.182913   68676 cri.go:89] found id: ""
	I0927 01:46:33.182939   68676 logs.go:276] 0 containers: []
	W0927 01:46:33.182950   68676 logs.go:278] No container was found matching "kindnet"
	I0927 01:46:33.182956   68676 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0927 01:46:33.183002   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0927 01:46:33.237031   68676 cri.go:89] found id: "8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f"
	I0927 01:46:33.237055   68676 cri.go:89] found id: "074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c"
	I0927 01:46:33.237061   68676 cri.go:89] found id: ""
	I0927 01:46:33.237070   68676 logs.go:276] 2 containers: [8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f 074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c]
	I0927 01:46:33.237121   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:33.241969   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:33.246733   68676 logs.go:123] Gathering logs for kube-apiserver [d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef] ...
	I0927 01:46:33.246760   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef"
	I0927 01:46:33.294096   68676 logs.go:123] Gathering logs for kube-controller-manager [56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647] ...
	I0927 01:46:33.294128   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647"
	I0927 01:46:33.357981   68676 logs.go:123] Gathering logs for storage-provisioner [074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c] ...
	I0927 01:46:33.358029   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c"
	I0927 01:46:33.397465   68676 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:46:33.397500   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:46:33.922831   68676 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:46:33.922869   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 01:46:34.067117   68676 logs.go:123] Gathering logs for dmesg ...
	I0927 01:46:34.067152   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:46:34.082191   68676 logs.go:123] Gathering logs for etcd [703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0] ...
	I0927 01:46:34.082218   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0"
	I0927 01:46:34.126416   68676 logs.go:123] Gathering logs for coredns [5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0] ...
	I0927 01:46:34.126454   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0"
	I0927 01:46:34.166714   68676 logs.go:123] Gathering logs for kube-scheduler [22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05] ...
	I0927 01:46:34.166744   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05"
	I0927 01:46:34.206601   68676 logs.go:123] Gathering logs for kube-proxy [d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f] ...
	I0927 01:46:34.206642   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f"
	I0927 01:46:34.254352   68676 logs.go:123] Gathering logs for storage-provisioner [8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f] ...
	I0927 01:46:34.254383   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f"
	I0927 01:46:34.293318   68676 logs.go:123] Gathering logs for container status ...
	I0927 01:46:34.293347   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:46:34.340365   68676 logs.go:123] Gathering logs for kubelet ...
	I0927 01:46:34.340398   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:46:33.863782   69534 out.go:235]   - Configuring RBAC rules ...
	I0927 01:46:33.863922   69534 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0927 01:46:33.871841   69534 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0927 01:46:33.880047   69534 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0927 01:46:33.884688   69534 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0927 01:46:33.892057   69534 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0927 01:46:33.895787   69534 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0927 01:46:34.190553   69534 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0927 01:46:34.619922   69534 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0927 01:46:35.188452   69534 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0927 01:46:35.189552   69534 kubeadm.go:310] 
	I0927 01:46:35.189661   69534 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0927 01:46:35.189683   69534 kubeadm.go:310] 
	I0927 01:46:35.189791   69534 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0927 01:46:35.189806   69534 kubeadm.go:310] 
	I0927 01:46:35.189845   69534 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0927 01:46:35.189925   69534 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0927 01:46:35.190002   69534 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0927 01:46:35.190016   69534 kubeadm.go:310] 
	I0927 01:46:35.190095   69534 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0927 01:46:35.190104   69534 kubeadm.go:310] 
	I0927 01:46:35.190181   69534 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0927 01:46:35.190193   69534 kubeadm.go:310] 
	I0927 01:46:35.190264   69534 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0927 01:46:35.190387   69534 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0927 01:46:35.190484   69534 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0927 01:46:35.190498   69534 kubeadm.go:310] 
	I0927 01:46:35.190593   69534 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0927 01:46:35.190681   69534 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0927 01:46:35.190691   69534 kubeadm.go:310] 
	I0927 01:46:35.190793   69534 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token qmzafx.lhyo0l65zryygr2x \
	I0927 01:46:35.190948   69534 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e8f2b64245b2533f2f6907f851e4fd61c8df67374e05cb5be25be73a61920f8e \
	I0927 01:46:35.191002   69534 kubeadm.go:310] 	--control-plane 
	I0927 01:46:35.191021   69534 kubeadm.go:310] 
	I0927 01:46:35.191134   69534 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0927 01:46:35.191155   69534 kubeadm.go:310] 
	I0927 01:46:35.191281   69534 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token qmzafx.lhyo0l65zryygr2x \
	I0927 01:46:35.191427   69534 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e8f2b64245b2533f2f6907f851e4fd61c8df67374e05cb5be25be73a61920f8e 
	I0927 01:46:35.192564   69534 kubeadm.go:310] W0927 01:46:26.480521    2541 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 01:46:35.192905   69534 kubeadm.go:310] W0927 01:46:26.481198    2541 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 01:46:35.193078   69534 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0927 01:46:35.193093   69534 cni.go:84] Creating CNI manager for ""
	I0927 01:46:35.193102   69534 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:46:35.194656   69534 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0927 01:46:35.195835   69534 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0927 01:46:35.207162   69534 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0927 01:46:35.225999   69534 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0927 01:46:35.226096   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-368295 minikube.k8s.io/updated_at=2024_09_27T01_46_35_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625 minikube.k8s.io/name=default-k8s-diff-port-368295 minikube.k8s.io/primary=true
	I0927 01:46:35.226096   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:46:35.258203   69534 ops.go:34] apiserver oom_adj: -16
	I0927 01:46:35.425367   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:46:35.926435   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:46:36.425611   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:46:36.925505   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:46:37.426329   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:46:37.926184   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:46:38.425745   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:46:38.925572   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:46:39.425831   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:46:39.508783   69534 kubeadm.go:1113] duration metric: took 4.282764601s to wait for elevateKubeSystemPrivileges
	I0927 01:46:39.508817   69534 kubeadm.go:394] duration metric: took 4m59.95903234s to StartCluster
	I0927 01:46:39.508838   69534 settings.go:142] acquiring lock: {Name:mk5dca3ab86dd3a71947d9d84c3d32131258c6f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:46:39.508930   69534 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 01:46:39.510771   69534 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/kubeconfig: {Name:mke01ed683bdb96463571316956510763878395f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:46:39.511005   69534 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.83 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 01:46:39.511071   69534 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0927 01:46:39.511194   69534 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-368295"
	I0927 01:46:39.511214   69534 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-368295"
	I0927 01:46:39.511230   69534 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-368295"
	I0927 01:46:39.511261   69534 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-368295"
	W0927 01:46:39.511276   69534 addons.go:243] addon metrics-server should already be in state true
	I0927 01:46:39.511325   69534 host.go:66] Checking if "default-k8s-diff-port-368295" exists ...
	I0927 01:46:39.511243   69534 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-368295"
	I0927 01:46:39.511225   69534 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-368295"
	W0927 01:46:39.511515   69534 addons.go:243] addon storage-provisioner should already be in state true
	I0927 01:46:39.511538   69534 host.go:66] Checking if "default-k8s-diff-port-368295" exists ...
	I0927 01:46:39.511223   69534 config.go:182] Loaded profile config "default-k8s-diff-port-368295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 01:46:39.511772   69534 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:46:39.511818   69534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:46:39.511844   69534 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:46:39.511772   69534 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:46:39.511877   69534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:46:39.511905   69534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:46:39.513051   69534 out.go:177] * Verifying Kubernetes components...
	I0927 01:46:39.514530   69534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:46:39.528031   69534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32777
	I0927 01:46:39.528033   69534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43693
	I0927 01:46:39.528446   69534 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:46:39.528603   69534 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:46:39.528997   69534 main.go:141] libmachine: Using API Version  1
	I0927 01:46:39.529022   69534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:46:39.529085   69534 main.go:141] libmachine: Using API Version  1
	I0927 01:46:39.529101   69534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:46:39.529210   69534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37121
	I0927 01:46:39.529421   69534 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:46:39.529721   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetState
	I0927 01:46:39.529743   69534 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:46:39.529724   69534 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:46:39.530304   69534 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:46:39.530358   69534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:46:39.530308   69534 main.go:141] libmachine: Using API Version  1
	I0927 01:46:39.530423   69534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:46:39.530762   69534 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:46:39.531337   69534 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:46:39.531389   69534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:46:39.533286   69534 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-368295"
	W0927 01:46:39.533306   69534 addons.go:243] addon default-storageclass should already be in state true
	I0927 01:46:39.533333   69534 host.go:66] Checking if "default-k8s-diff-port-368295" exists ...
	I0927 01:46:39.533656   69534 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:46:39.533692   69534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:46:39.546657   69534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44507
	I0927 01:46:39.546881   69534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42459
	I0927 01:46:39.547298   69534 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:46:39.547327   69534 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:46:39.547842   69534 main.go:141] libmachine: Using API Version  1
	I0927 01:46:39.547860   69534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:46:39.547860   69534 main.go:141] libmachine: Using API Version  1
	I0927 01:46:39.547876   69534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:46:39.548220   69534 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:46:39.548239   69534 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:46:39.548435   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetState
	I0927 01:46:39.548481   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetState
	I0927 01:46:39.550160   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:46:39.550384   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:46:39.550445   69534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41657
	I0927 01:46:39.550744   69534 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:46:39.551173   69534 main.go:141] libmachine: Using API Version  1
	I0927 01:46:39.551195   69534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:46:39.551525   69534 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:46:39.552620   69534 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:46:39.552652   69534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:46:39.552838   69534 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:46:39.552916   69534 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0927 01:46:36.914500   68676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:46:36.932340   68676 api_server.go:72] duration metric: took 4m14.883408931s to wait for apiserver process to appear ...
	I0927 01:46:36.932368   68676 api_server.go:88] waiting for apiserver healthz status ...
	I0927 01:46:36.932407   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:46:36.932465   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:46:36.967757   68676 cri.go:89] found id: "d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef"
	I0927 01:46:36.967780   68676 cri.go:89] found id: ""
	I0927 01:46:36.967787   68676 logs.go:276] 1 containers: [d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef]
	I0927 01:46:36.967832   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:36.972025   68676 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:46:36.972105   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:46:37.018403   68676 cri.go:89] found id: "703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0"
	I0927 01:46:37.018431   68676 cri.go:89] found id: ""
	I0927 01:46:37.018448   68676 logs.go:276] 1 containers: [703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0]
	I0927 01:46:37.018515   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:37.022868   68676 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:46:37.022925   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:46:37.062443   68676 cri.go:89] found id: "5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0"
	I0927 01:46:37.062466   68676 cri.go:89] found id: ""
	I0927 01:46:37.062474   68676 logs.go:276] 1 containers: [5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0]
	I0927 01:46:37.062534   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:37.066617   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:46:37.066674   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:46:37.101462   68676 cri.go:89] found id: "22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05"
	I0927 01:46:37.101489   68676 cri.go:89] found id: ""
	I0927 01:46:37.101500   68676 logs.go:276] 1 containers: [22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05]
	I0927 01:46:37.101557   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:37.105564   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:46:37.105620   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:46:37.143692   68676 cri.go:89] found id: "d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f"
	I0927 01:46:37.143719   68676 cri.go:89] found id: ""
	I0927 01:46:37.143729   68676 logs.go:276] 1 containers: [d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f]
	I0927 01:46:37.143775   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:37.148405   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:46:37.148484   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:46:37.184914   68676 cri.go:89] found id: "56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647"
	I0927 01:46:37.184943   68676 cri.go:89] found id: ""
	I0927 01:46:37.184954   68676 logs.go:276] 1 containers: [56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647]
	I0927 01:46:37.185013   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:37.189486   68676 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:46:37.189553   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:46:37.235389   68676 cri.go:89] found id: ""
	I0927 01:46:37.235416   68676 logs.go:276] 0 containers: []
	W0927 01:46:37.235424   68676 logs.go:278] No container was found matching "kindnet"
	I0927 01:46:37.235429   68676 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0927 01:46:37.235480   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0927 01:46:37.276239   68676 cri.go:89] found id: "8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f"
	I0927 01:46:37.276266   68676 cri.go:89] found id: "074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c"
	I0927 01:46:37.276272   68676 cri.go:89] found id: ""
	I0927 01:46:37.276282   68676 logs.go:276] 2 containers: [8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f 074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c]
	I0927 01:46:37.276338   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:37.280381   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:37.284423   68676 logs.go:123] Gathering logs for coredns [5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0] ...
	I0927 01:46:37.284440   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0"
	I0927 01:46:37.319790   68676 logs.go:123] Gathering logs for kube-scheduler [22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05] ...
	I0927 01:46:37.319816   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05"
	I0927 01:46:37.358818   68676 logs.go:123] Gathering logs for kube-proxy [d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f] ...
	I0927 01:46:37.358843   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f"
	I0927 01:46:37.398137   68676 logs.go:123] Gathering logs for kube-controller-manager [56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647] ...
	I0927 01:46:37.398168   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647"
	I0927 01:46:37.458672   68676 logs.go:123] Gathering logs for dmesg ...
	I0927 01:46:37.458720   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:46:37.476148   68676 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:46:37.476184   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 01:46:37.604190   68676 logs.go:123] Gathering logs for kube-apiserver [d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef] ...
	I0927 01:46:37.604223   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef"
	I0927 01:46:37.652633   68676 logs.go:123] Gathering logs for etcd [703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0] ...
	I0927 01:46:37.652671   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0"
	I0927 01:46:37.701240   68676 logs.go:123] Gathering logs for storage-provisioner [8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f] ...
	I0927 01:46:37.701273   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f"
	I0927 01:46:37.739555   68676 logs.go:123] Gathering logs for storage-provisioner [074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c] ...
	I0927 01:46:37.739583   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c"
	I0927 01:46:37.781721   68676 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:46:37.781750   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:46:38.209361   68676 logs.go:123] Gathering logs for container status ...
	I0927 01:46:38.209399   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:46:38.261628   68676 logs.go:123] Gathering logs for kubelet ...
	I0927 01:46:38.261658   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:46:39.554328   69534 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 01:46:39.554342   69534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0927 01:46:39.554362   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:46:39.554446   69534 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0927 01:46:39.554456   69534 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0927 01:46:39.554469   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:46:39.557886   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:46:39.557982   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:46:39.558093   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:46:39.558121   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:46:39.558269   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:46:39.558350   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:46:39.558369   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:46:39.558466   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:46:39.558620   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:46:39.558690   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:46:39.558740   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:46:39.558797   69534 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/default-k8s-diff-port-368295/id_rsa Username:docker}
	I0927 01:46:39.559026   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:46:39.559136   69534 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/default-k8s-diff-port-368295/id_rsa Username:docker}
	I0927 01:46:39.569570   69534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33177
	I0927 01:46:39.569981   69534 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:46:39.570364   69534 main.go:141] libmachine: Using API Version  1
	I0927 01:46:39.570383   69534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:46:39.570746   69534 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:46:39.570890   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetState
	I0927 01:46:39.572537   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:46:39.572779   69534 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0927 01:46:39.572795   69534 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0927 01:46:39.572815   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:46:39.575104   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:46:39.575384   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:46:39.575435   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:46:39.575595   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:46:39.575751   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:46:39.575844   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:46:39.575960   69534 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/default-k8s-diff-port-368295/id_rsa Username:docker}
	I0927 01:46:39.784965   69534 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 01:46:39.820986   69534 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-368295" to be "Ready" ...
	I0927 01:46:39.829323   69534 node_ready.go:49] node "default-k8s-diff-port-368295" has status "Ready":"True"
	I0927 01:46:39.829346   69534 node_ready.go:38] duration metric: took 8.333848ms for node "default-k8s-diff-port-368295" to be "Ready" ...
	I0927 01:46:39.829358   69534 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:46:39.836143   69534 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-4d7pk" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:39.940697   69534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0927 01:46:39.955239   69534 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0927 01:46:39.955264   69534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0927 01:46:40.076199   69534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 01:46:40.080720   69534 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0927 01:46:40.080746   69534 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0927 01:46:40.182698   69534 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 01:46:40.182720   69534 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0927 01:46:40.219231   69534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 01:46:40.431480   69534 main.go:141] libmachine: Making call to close driver server
	I0927 01:46:40.431505   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .Close
	I0927 01:46:40.431859   69534 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:46:40.431875   69534 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:46:40.431875   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | Closing plugin on server side
	I0927 01:46:40.431889   69534 main.go:141] libmachine: Making call to close driver server
	I0927 01:46:40.431898   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .Close
	I0927 01:46:40.432126   69534 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:46:40.432146   69534 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:46:40.432189   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | Closing plugin on server side
	I0927 01:46:40.442440   69534 main.go:141] libmachine: Making call to close driver server
	I0927 01:46:40.442468   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .Close
	I0927 01:46:40.442761   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | Closing plugin on server side
	I0927 01:46:40.442785   69534 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:46:40.442815   69534 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:46:41.044597   69534 main.go:141] libmachine: Making call to close driver server
	I0927 01:46:41.044627   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .Close
	I0927 01:46:41.044964   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | Closing plugin on server side
	I0927 01:46:41.045013   69534 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:46:41.045021   69534 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:46:41.045033   69534 main.go:141] libmachine: Making call to close driver server
	I0927 01:46:41.045041   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .Close
	I0927 01:46:41.045254   69534 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:46:41.045267   69534 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:46:41.427791   69534 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.208520131s)
	I0927 01:46:41.427843   69534 main.go:141] libmachine: Making call to close driver server
	I0927 01:46:41.427859   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .Close
	I0927 01:46:41.428175   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | Closing plugin on server side
	I0927 01:46:41.428184   69534 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:46:41.428196   69534 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:46:41.428205   69534 main.go:141] libmachine: Making call to close driver server
	I0927 01:46:41.428213   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .Close
	I0927 01:46:41.428477   69534 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:46:41.428490   69534 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:46:41.428500   69534 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-368295"
	I0927 01:46:41.430399   69534 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0927 01:46:41.431795   69534 addons.go:510] duration metric: took 1.920729429s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0927 01:46:41.844911   69534 pod_ready.go:103] pod "coredns-7c65d6cfc9-4d7pk" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:40.832698   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:46:40.838244   68676 api_server.go:279] https://192.168.50.246:8443/healthz returned 200:
	ok
	I0927 01:46:40.839252   68676 api_server.go:141] control plane version: v1.31.1
	I0927 01:46:40.839270   68676 api_server.go:131] duration metric: took 3.906895557s to wait for apiserver health ...
	I0927 01:46:40.839277   68676 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 01:46:40.839312   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:46:40.839373   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:46:40.879726   68676 cri.go:89] found id: "d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef"
	I0927 01:46:40.879753   68676 cri.go:89] found id: ""
	I0927 01:46:40.879763   68676 logs.go:276] 1 containers: [d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef]
	I0927 01:46:40.879822   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:40.884233   68676 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:46:40.884301   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:46:40.936189   68676 cri.go:89] found id: "703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0"
	I0927 01:46:40.936216   68676 cri.go:89] found id: ""
	I0927 01:46:40.936226   68676 logs.go:276] 1 containers: [703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0]
	I0927 01:46:40.936289   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:40.940805   68676 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:46:40.940885   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:46:40.978662   68676 cri.go:89] found id: "5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0"
	I0927 01:46:40.978683   68676 cri.go:89] found id: ""
	I0927 01:46:40.978693   68676 logs.go:276] 1 containers: [5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0]
	I0927 01:46:40.978757   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:40.983357   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:46:40.983428   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:46:41.027134   68676 cri.go:89] found id: "22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05"
	I0927 01:46:41.027160   68676 cri.go:89] found id: ""
	I0927 01:46:41.027170   68676 logs.go:276] 1 containers: [22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05]
	I0927 01:46:41.027229   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:41.031909   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:46:41.031986   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:46:41.077539   68676 cri.go:89] found id: "d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f"
	I0927 01:46:41.077568   68676 cri.go:89] found id: ""
	I0927 01:46:41.077577   68676 logs.go:276] 1 containers: [d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f]
	I0927 01:46:41.077638   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:41.082237   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:46:41.082314   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:46:41.122413   68676 cri.go:89] found id: "56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647"
	I0927 01:46:41.122437   68676 cri.go:89] found id: ""
	I0927 01:46:41.122446   68676 logs.go:276] 1 containers: [56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647]
	I0927 01:46:41.122501   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:41.127807   68676 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:46:41.127872   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:46:41.174287   68676 cri.go:89] found id: ""
	I0927 01:46:41.174320   68676 logs.go:276] 0 containers: []
	W0927 01:46:41.174331   68676 logs.go:278] No container was found matching "kindnet"
	I0927 01:46:41.174339   68676 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0927 01:46:41.174397   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0927 01:46:41.213192   68676 cri.go:89] found id: "8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f"
	I0927 01:46:41.213219   68676 cri.go:89] found id: "074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c"
	I0927 01:46:41.213225   68676 cri.go:89] found id: ""
	I0927 01:46:41.213234   68676 logs.go:276] 2 containers: [8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f 074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c]
	I0927 01:46:41.213298   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:41.218168   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:41.227165   68676 logs.go:123] Gathering logs for storage-provisioner [8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f] ...
	I0927 01:46:41.227194   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f"
	I0927 01:46:41.269538   68676 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:46:41.269571   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:46:41.691900   68676 logs.go:123] Gathering logs for dmesg ...
	I0927 01:46:41.691943   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:46:41.709639   68676 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:46:41.709682   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 01:46:41.829334   68676 logs.go:123] Gathering logs for etcd [703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0] ...
	I0927 01:46:41.829366   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0"
	I0927 01:46:41.886517   68676 logs.go:123] Gathering logs for kube-scheduler [22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05] ...
	I0927 01:46:41.886552   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05"
	I0927 01:46:41.933012   68676 logs.go:123] Gathering logs for kube-proxy [d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f] ...
	I0927 01:46:41.933035   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f"
	I0927 01:46:41.973881   68676 logs.go:123] Gathering logs for kube-controller-manager [56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647] ...
	I0927 01:46:41.973921   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647"
	I0927 01:46:42.032592   68676 logs.go:123] Gathering logs for container status ...
	I0927 01:46:42.032628   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:46:42.087817   68676 logs.go:123] Gathering logs for kubelet ...
	I0927 01:46:42.087856   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:46:42.162770   68676 logs.go:123] Gathering logs for kube-apiserver [d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef] ...
	I0927 01:46:42.162808   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef"
	I0927 01:46:42.213367   68676 logs.go:123] Gathering logs for coredns [5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0] ...
	I0927 01:46:42.213399   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0"
	I0927 01:46:42.254937   68676 logs.go:123] Gathering logs for storage-provisioner [074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c] ...
	I0927 01:46:42.254963   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c"
	I0927 01:46:44.804112   68676 system_pods.go:59] 8 kube-system pods found
	I0927 01:46:44.804146   68676 system_pods.go:61] "coredns-7c65d6cfc9-7q54t" [f320e945-a1d6-4109-a0cc-5bd4e3c1bfba] Running
	I0927 01:46:44.804153   68676 system_pods.go:61] "etcd-no-preload-521072" [6c63ce89-47bf-4d67-b5db-273a046c4b51] Running
	I0927 01:46:44.804158   68676 system_pods.go:61] "kube-apiserver-no-preload-521072" [e4804d4b-0532-46c7-8579-a829a6c5254c] Running
	I0927 01:46:44.804162   68676 system_pods.go:61] "kube-controller-manager-no-preload-521072" [5029e53b-ae24-41fb-aa58-14faf0440adb] Running
	I0927 01:46:44.804167   68676 system_pods.go:61] "kube-proxy-wkcb8" [ea79339c-b2f0-4cb8-ab57-4f13f689f504] Running
	I0927 01:46:44.804171   68676 system_pods.go:61] "kube-scheduler-no-preload-521072" [b70fd9f0-c131-4c13-b53f-46c650a5dcf8] Running
	I0927 01:46:44.804180   68676 system_pods.go:61] "metrics-server-6867b74b74-cc9pp" [a840ca52-d2b8-47a5-b379-30504658e0d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 01:46:44.804186   68676 system_pods.go:61] "storage-provisioner" [b4595dc3-c439-4615-95b7-2009476c779c] Running
	I0927 01:46:44.804196   68676 system_pods.go:74] duration metric: took 3.964911623s to wait for pod list to return data ...
	I0927 01:46:44.804208   68676 default_sa.go:34] waiting for default service account to be created ...
	I0927 01:46:44.807883   68676 default_sa.go:45] found service account: "default"
	I0927 01:46:44.807907   68676 default_sa.go:55] duration metric: took 3.689984ms for default service account to be created ...
	I0927 01:46:44.807917   68676 system_pods.go:116] waiting for k8s-apps to be running ...
	I0927 01:46:44.812135   68676 system_pods.go:86] 8 kube-system pods found
	I0927 01:46:44.812161   68676 system_pods.go:89] "coredns-7c65d6cfc9-7q54t" [f320e945-a1d6-4109-a0cc-5bd4e3c1bfba] Running
	I0927 01:46:44.812167   68676 system_pods.go:89] "etcd-no-preload-521072" [6c63ce89-47bf-4d67-b5db-273a046c4b51] Running
	I0927 01:46:44.812174   68676 system_pods.go:89] "kube-apiserver-no-preload-521072" [e4804d4b-0532-46c7-8579-a829a6c5254c] Running
	I0927 01:46:44.812178   68676 system_pods.go:89] "kube-controller-manager-no-preload-521072" [5029e53b-ae24-41fb-aa58-14faf0440adb] Running
	I0927 01:46:44.812185   68676 system_pods.go:89] "kube-proxy-wkcb8" [ea79339c-b2f0-4cb8-ab57-4f13f689f504] Running
	I0927 01:46:44.812190   68676 system_pods.go:89] "kube-scheduler-no-preload-521072" [b70fd9f0-c131-4c13-b53f-46c650a5dcf8] Running
	I0927 01:46:44.812200   68676 system_pods.go:89] "metrics-server-6867b74b74-cc9pp" [a840ca52-d2b8-47a5-b379-30504658e0d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 01:46:44.812209   68676 system_pods.go:89] "storage-provisioner" [b4595dc3-c439-4615-95b7-2009476c779c] Running
	I0927 01:46:44.812222   68676 system_pods.go:126] duration metric: took 4.297317ms to wait for k8s-apps to be running ...
	I0927 01:46:44.812234   68676 system_svc.go:44] waiting for kubelet service to be running ....
	I0927 01:46:44.812282   68676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 01:46:44.827911   68676 system_svc.go:56] duration metric: took 15.668154ms WaitForService to wait for kubelet
	I0927 01:46:44.827946   68676 kubeadm.go:582] duration metric: took 4m22.779012486s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 01:46:44.827964   68676 node_conditions.go:102] verifying NodePressure condition ...
	I0927 01:46:44.830688   68676 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 01:46:44.830707   68676 node_conditions.go:123] node cpu capacity is 2
	I0927 01:46:44.830716   68676 node_conditions.go:105] duration metric: took 2.747178ms to run NodePressure ...
	I0927 01:46:44.830725   68676 start.go:241] waiting for startup goroutines ...
	I0927 01:46:44.830732   68676 start.go:246] waiting for cluster config update ...
	I0927 01:46:44.830742   68676 start.go:255] writing updated cluster config ...
	I0927 01:46:44.830990   68676 ssh_runner.go:195] Run: rm -f paused
	I0927 01:46:44.881491   68676 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0927 01:46:44.884307   68676 out.go:177] * Done! kubectl is now configured to use "no-preload-521072" cluster and "default" namespace by default
	I0927 01:46:42.397038   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:46:42.397331   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:46:43.845539   69534 pod_ready.go:103] pod "coredns-7c65d6cfc9-4d7pk" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:46.343584   69534 pod_ready.go:103] pod "coredns-7c65d6cfc9-4d7pk" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:48.842505   69534 pod_ready.go:93] pod "coredns-7c65d6cfc9-4d7pk" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:48.842527   69534 pod_ready.go:82] duration metric: took 9.006354643s for pod "coredns-7c65d6cfc9-4d7pk" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:48.842537   69534 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qkbzv" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:48.846753   69534 pod_ready.go:93] pod "coredns-7c65d6cfc9-qkbzv" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:48.846771   69534 pod_ready.go:82] duration metric: took 4.228349ms for pod "coredns-7c65d6cfc9-qkbzv" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:48.846780   69534 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:48.851234   69534 pod_ready.go:93] pod "etcd-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:48.851256   69534 pod_ready.go:82] duration metric: took 4.468727ms for pod "etcd-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:48.851265   69534 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:48.855648   69534 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:48.855669   69534 pod_ready.go:82] duration metric: took 4.398439ms for pod "kube-apiserver-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:48.855678   69534 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:48.860882   69534 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:48.860898   69534 pod_ready.go:82] duration metric: took 5.214278ms for pod "kube-controller-manager-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:48.860906   69534 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kqjdq" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:49.241149   69534 pod_ready.go:93] pod "kube-proxy-kqjdq" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:49.241180   69534 pod_ready.go:82] duration metric: took 380.266777ms for pod "kube-proxy-kqjdq" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:49.241192   69534 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:49.642403   69534 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:49.642437   69534 pod_ready.go:82] duration metric: took 401.235952ms for pod "kube-scheduler-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:49.642448   69534 pod_ready.go:39] duration metric: took 9.813073515s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:46:49.642465   69534 api_server.go:52] waiting for apiserver process to appear ...
	I0927 01:46:49.642518   69534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:46:49.658847   69534 api_server.go:72] duration metric: took 10.147811957s to wait for apiserver process to appear ...
	I0927 01:46:49.658877   69534 api_server.go:88] waiting for apiserver healthz status ...
	I0927 01:46:49.658898   69534 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8444/healthz ...
	I0927 01:46:49.665899   69534 api_server.go:279] https://192.168.61.83:8444/healthz returned 200:
	ok
	I0927 01:46:49.666844   69534 api_server.go:141] control plane version: v1.31.1
	I0927 01:46:49.666867   69534 api_server.go:131] duration metric: took 7.982491ms to wait for apiserver health ...
	I0927 01:46:49.666876   69534 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 01:46:49.843377   69534 system_pods.go:59] 9 kube-system pods found
	I0927 01:46:49.843402   69534 system_pods.go:61] "coredns-7c65d6cfc9-4d7pk" [c84ab26c-2e13-437c-b059-43c8ca1d90c8] Running
	I0927 01:46:49.843408   69534 system_pods.go:61] "coredns-7c65d6cfc9-qkbzv" [e2725448-3f80-45d8-8bd8-49dcf8878f7e] Running
	I0927 01:46:49.843413   69534 system_pods.go:61] "etcd-default-k8s-diff-port-368295" [cf24c93c-bcff-4ffc-b7b2-8e69c070cf92] Running
	I0927 01:46:49.843417   69534 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-368295" [7cb4e15c-d20c-4f93-bf12-d2407edcc877] Running
	I0927 01:46:49.843420   69534 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-368295" [52bc69db-f7b9-40a2-9930-1b3bd321fecf] Running
	I0927 01:46:49.843425   69534 system_pods.go:61] "kube-proxy-kqjdq" [91b96945-0ffe-404f-a0d5-f8729d4248ce] Running
	I0927 01:46:49.843429   69534 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-368295" [bc16cdb1-6e5c-4d19-ab43-cd378a65184d] Running
	I0927 01:46:49.843437   69534 system_pods.go:61] "metrics-server-6867b74b74-d85zg" [579ae063-049c-423c-8f91-91fb4b32e4c3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 01:46:49.843443   69534 system_pods.go:61] "storage-provisioner" [aaa7a054-2eee-45ee-a9bc-c305e53e1273] Running
	I0927 01:46:49.843454   69534 system_pods.go:74] duration metric: took 176.572041ms to wait for pod list to return data ...
	I0927 01:46:49.843466   69534 default_sa.go:34] waiting for default service account to be created ...
	I0927 01:46:50.041031   69534 default_sa.go:45] found service account: "default"
	I0927 01:46:50.041053   69534 default_sa.go:55] duration metric: took 197.577565ms for default service account to be created ...
	I0927 01:46:50.041062   69534 system_pods.go:116] waiting for k8s-apps to be running ...
	I0927 01:46:50.243807   69534 system_pods.go:86] 9 kube-system pods found
	I0927 01:46:50.243834   69534 system_pods.go:89] "coredns-7c65d6cfc9-4d7pk" [c84ab26c-2e13-437c-b059-43c8ca1d90c8] Running
	I0927 01:46:50.243840   69534 system_pods.go:89] "coredns-7c65d6cfc9-qkbzv" [e2725448-3f80-45d8-8bd8-49dcf8878f7e] Running
	I0927 01:46:50.243845   69534 system_pods.go:89] "etcd-default-k8s-diff-port-368295" [cf24c93c-bcff-4ffc-b7b2-8e69c070cf92] Running
	I0927 01:46:50.243849   69534 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-368295" [7cb4e15c-d20c-4f93-bf12-d2407edcc877] Running
	I0927 01:46:50.243853   69534 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-368295" [52bc69db-f7b9-40a2-9930-1b3bd321fecf] Running
	I0927 01:46:50.243856   69534 system_pods.go:89] "kube-proxy-kqjdq" [91b96945-0ffe-404f-a0d5-f8729d4248ce] Running
	I0927 01:46:50.243860   69534 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-368295" [bc16cdb1-6e5c-4d19-ab43-cd378a65184d] Running
	I0927 01:46:50.243866   69534 system_pods.go:89] "metrics-server-6867b74b74-d85zg" [579ae063-049c-423c-8f91-91fb4b32e4c3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 01:46:50.243869   69534 system_pods.go:89] "storage-provisioner" [aaa7a054-2eee-45ee-a9bc-c305e53e1273] Running
	I0927 01:46:50.243879   69534 system_pods.go:126] duration metric: took 202.812704ms to wait for k8s-apps to be running ...
	I0927 01:46:50.243888   69534 system_svc.go:44] waiting for kubelet service to be running ....
	I0927 01:46:50.243931   69534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 01:46:50.260175   69534 system_svc.go:56] duration metric: took 16.279433ms WaitForService to wait for kubelet
	I0927 01:46:50.260203   69534 kubeadm.go:582] duration metric: took 10.749173466s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 01:46:50.260220   69534 node_conditions.go:102] verifying NodePressure condition ...
	I0927 01:46:50.441020   69534 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 01:46:50.441044   69534 node_conditions.go:123] node cpu capacity is 2
	I0927 01:46:50.441052   69534 node_conditions.go:105] duration metric: took 180.827321ms to run NodePressure ...
	I0927 01:46:50.441062   69534 start.go:241] waiting for startup goroutines ...
	I0927 01:46:50.441081   69534 start.go:246] waiting for cluster config update ...
	I0927 01:46:50.441091   69534 start.go:255] writing updated cluster config ...
	I0927 01:46:50.441338   69534 ssh_runner.go:195] Run: rm -f paused
	I0927 01:46:50.492229   69534 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0927 01:46:50.494198   69534 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-368295" cluster and "default" namespace by default
	I0927 01:47:22.398756   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:47:22.399035   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:47:22.399051   69333 kubeadm.go:310] 
	I0927 01:47:22.399125   69333 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0927 01:47:22.399167   69333 kubeadm.go:310] 		timed out waiting for the condition
	I0927 01:47:22.399176   69333 kubeadm.go:310] 
	I0927 01:47:22.399242   69333 kubeadm.go:310] 	This error is likely caused by:
	I0927 01:47:22.399326   69333 kubeadm.go:310] 		- The kubelet is not running
	I0927 01:47:22.399452   69333 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0927 01:47:22.399464   69333 kubeadm.go:310] 
	I0927 01:47:22.399627   69333 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0927 01:47:22.399702   69333 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0927 01:47:22.399750   69333 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0927 01:47:22.399763   69333 kubeadm.go:310] 
	I0927 01:47:22.399908   69333 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0927 01:47:22.400001   69333 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0927 01:47:22.400014   69333 kubeadm.go:310] 
	I0927 01:47:22.400109   69333 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0927 01:47:22.400218   69333 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0927 01:47:22.400331   69333 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0927 01:47:22.400406   69333 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0927 01:47:22.400414   69333 kubeadm.go:310] 
	I0927 01:47:22.401157   69333 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0927 01:47:22.401273   69333 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0927 01:47:22.401342   69333 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0927 01:47:22.401458   69333 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0927 01:47:22.401498   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0927 01:47:22.863316   69333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 01:47:22.878664   69333 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 01:47:22.889118   69333 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 01:47:22.889135   69333 kubeadm.go:157] found existing configuration files:
	
	I0927 01:47:22.889173   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 01:47:22.898966   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 01:47:22.899035   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 01:47:22.911280   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 01:47:22.920628   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 01:47:22.920677   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 01:47:22.929860   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 01:47:22.938794   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 01:47:22.938839   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 01:47:22.947982   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 01:47:22.956785   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 01:47:22.956837   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 01:47:22.966186   69333 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0927 01:47:23.039915   69333 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0927 01:47:23.040017   69333 kubeadm.go:310] [preflight] Running pre-flight checks
	I0927 01:47:23.189097   69333 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0927 01:47:23.189274   69333 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0927 01:47:23.189395   69333 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0927 01:47:23.400731   69333 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0927 01:47:23.402659   69333 out.go:235]   - Generating certificates and keys ...
	I0927 01:47:23.402776   69333 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0927 01:47:23.402855   69333 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0927 01:47:23.402959   69333 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0927 01:47:23.403040   69333 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0927 01:47:23.403162   69333 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0927 01:47:23.403349   69333 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0927 01:47:23.403935   69333 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0927 01:47:23.404260   69333 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0927 01:47:23.404563   69333 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0927 01:47:23.404896   69333 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0927 01:47:23.405050   69333 kubeadm.go:310] [certs] Using the existing "sa" key
	I0927 01:47:23.405121   69333 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0927 01:47:23.466908   69333 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0927 01:47:23.717009   69333 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0927 01:47:23.766225   69333 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0927 01:47:23.961488   69333 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0927 01:47:23.987846   69333 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0927 01:47:23.988724   69333 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0927 01:47:23.988790   69333 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0927 01:47:24.130550   69333 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0927 01:47:24.132276   69333 out.go:235]   - Booting up control plane ...
	I0927 01:47:24.132386   69333 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0927 01:47:24.146415   69333 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0927 01:47:24.147664   69333 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0927 01:47:24.148443   69333 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0927 01:47:24.151623   69333 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0927 01:48:04.153587   69333 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0927 01:48:04.153934   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:48:04.154129   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:48:09.154634   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:48:09.154883   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:48:19.155638   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:48:19.155844   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:48:39.156224   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:48:39.156429   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:49:19.155507   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:49:19.155754   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:49:19.155779   69333 kubeadm.go:310] 
	I0927 01:49:19.155872   69333 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0927 01:49:19.155947   69333 kubeadm.go:310] 		timed out waiting for the condition
	I0927 01:49:19.155958   69333 kubeadm.go:310] 
	I0927 01:49:19.156026   69333 kubeadm.go:310] 	This error is likely caused by:
	I0927 01:49:19.156077   69333 kubeadm.go:310] 		- The kubelet is not running
	I0927 01:49:19.156230   69333 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0927 01:49:19.156242   69333 kubeadm.go:310] 
	I0927 01:49:19.156379   69333 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0927 01:49:19.156434   69333 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0927 01:49:19.156486   69333 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0927 01:49:19.156506   69333 kubeadm.go:310] 
	I0927 01:49:19.156628   69333 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0927 01:49:19.156756   69333 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0927 01:49:19.156775   69333 kubeadm.go:310] 
	I0927 01:49:19.156925   69333 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0927 01:49:19.157022   69333 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0927 01:49:19.157112   69333 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0927 01:49:19.157191   69333 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0927 01:49:19.157202   69333 kubeadm.go:310] 
	I0927 01:49:19.158023   69333 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0927 01:49:19.158149   69333 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0927 01:49:19.158277   69333 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0927 01:49:19.158357   69333 kubeadm.go:394] duration metric: took 7m56.829434682s to StartCluster
	I0927 01:49:19.158404   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:49:19.158477   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:49:19.200705   69333 cri.go:89] found id: ""
	I0927 01:49:19.200729   69333 logs.go:276] 0 containers: []
	W0927 01:49:19.200736   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:49:19.200742   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:49:19.200791   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:49:19.240252   69333 cri.go:89] found id: ""
	I0927 01:49:19.240274   69333 logs.go:276] 0 containers: []
	W0927 01:49:19.240285   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:49:19.240292   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:49:19.240352   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:49:19.275802   69333 cri.go:89] found id: ""
	I0927 01:49:19.275826   69333 logs.go:276] 0 containers: []
	W0927 01:49:19.275834   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:49:19.275840   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:49:19.275894   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:49:19.309317   69333 cri.go:89] found id: ""
	I0927 01:49:19.309342   69333 logs.go:276] 0 containers: []
	W0927 01:49:19.309350   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:49:19.309357   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:49:19.309414   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:49:19.344778   69333 cri.go:89] found id: ""
	I0927 01:49:19.344806   69333 logs.go:276] 0 containers: []
	W0927 01:49:19.344817   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:49:19.344823   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:49:19.344882   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:49:19.379394   69333 cri.go:89] found id: ""
	I0927 01:49:19.379426   69333 logs.go:276] 0 containers: []
	W0927 01:49:19.379438   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:49:19.379445   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:49:19.379502   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:49:19.415349   69333 cri.go:89] found id: ""
	I0927 01:49:19.415376   69333 logs.go:276] 0 containers: []
	W0927 01:49:19.415384   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:49:19.415390   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:49:19.415438   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:49:19.453357   69333 cri.go:89] found id: ""
	I0927 01:49:19.453381   69333 logs.go:276] 0 containers: []
	W0927 01:49:19.453389   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:49:19.453397   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:49:19.453409   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:49:19.530384   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:49:19.530405   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:49:19.530423   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:49:19.643418   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:49:19.643453   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:49:19.688825   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:49:19.688861   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:49:19.745945   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:49:19.745983   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0927 01:49:19.762685   69333 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0927 01:49:19.762739   69333 out.go:270] * 
	W0927 01:49:19.762791   69333 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0927 01:49:19.762804   69333 out.go:270] * 
	W0927 01:49:19.763605   69333 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0927 01:49:19.767393   69333 out.go:201] 
	W0927 01:49:19.768622   69333 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0927 01:49:19.768671   69333 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0927 01:49:19.768690   69333 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0927 01:49:19.771036   69333 out.go:201] 
	
	
	==> CRI-O <==
	Sep 27 01:58:25 old-k8s-version-612261 crio[628]: time="2024-09-27 01:58:25.133404511Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402305133376531,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f7bd402e-ff9f-455a-b731-a5def67cd193 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 01:58:25 old-k8s-version-612261 crio[628]: time="2024-09-27 01:58:25.133989975Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=489424fe-8b35-4236-89f7-d17928fdedd4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:58:25 old-k8s-version-612261 crio[628]: time="2024-09-27 01:58:25.134092730Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=489424fe-8b35-4236-89f7-d17928fdedd4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:58:25 old-k8s-version-612261 crio[628]: time="2024-09-27 01:58:25.134150202Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=489424fe-8b35-4236-89f7-d17928fdedd4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:58:25 old-k8s-version-612261 crio[628]: time="2024-09-27 01:58:25.173322557Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=57352fbd-3869-4183-a8c5-2539827e888c name=/runtime.v1.RuntimeService/Version
	Sep 27 01:58:25 old-k8s-version-612261 crio[628]: time="2024-09-27 01:58:25.173444554Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=57352fbd-3869-4183-a8c5-2539827e888c name=/runtime.v1.RuntimeService/Version
	Sep 27 01:58:25 old-k8s-version-612261 crio[628]: time="2024-09-27 01:58:25.175021849Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d009df3c-a454-4c46-9e61-1f5b1b5e5683 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 01:58:25 old-k8s-version-612261 crio[628]: time="2024-09-27 01:58:25.175650077Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402305175608768,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d009df3c-a454-4c46-9e61-1f5b1b5e5683 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 01:58:25 old-k8s-version-612261 crio[628]: time="2024-09-27 01:58:25.176910742Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8ff8d9b5-2232-4b0c-a07d-f2d4b0daa7f3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:58:25 old-k8s-version-612261 crio[628]: time="2024-09-27 01:58:25.176997566Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8ff8d9b5-2232-4b0c-a07d-f2d4b0daa7f3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:58:25 old-k8s-version-612261 crio[628]: time="2024-09-27 01:58:25.177052199Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=8ff8d9b5-2232-4b0c-a07d-f2d4b0daa7f3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:58:25 old-k8s-version-612261 crio[628]: time="2024-09-27 01:58:25.216060679Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=00693d06-91af-4b2c-9792-7562001bc2aa name=/runtime.v1.RuntimeService/Version
	Sep 27 01:58:25 old-k8s-version-612261 crio[628]: time="2024-09-27 01:58:25.216158434Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=00693d06-91af-4b2c-9792-7562001bc2aa name=/runtime.v1.RuntimeService/Version
	Sep 27 01:58:25 old-k8s-version-612261 crio[628]: time="2024-09-27 01:58:25.217300696Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e1a5e26e-a3f5-4adf-8ba0-1ecacf4296c9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 01:58:25 old-k8s-version-612261 crio[628]: time="2024-09-27 01:58:25.217743713Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402305217717312,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e1a5e26e-a3f5-4adf-8ba0-1ecacf4296c9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 01:58:25 old-k8s-version-612261 crio[628]: time="2024-09-27 01:58:25.218317152Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5bcd54c4-5b1c-41ff-85ad-ca75d70ff354 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:58:25 old-k8s-version-612261 crio[628]: time="2024-09-27 01:58:25.218387854Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5bcd54c4-5b1c-41ff-85ad-ca75d70ff354 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:58:25 old-k8s-version-612261 crio[628]: time="2024-09-27 01:58:25.218421613Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=5bcd54c4-5b1c-41ff-85ad-ca75d70ff354 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:58:25 old-k8s-version-612261 crio[628]: time="2024-09-27 01:58:25.251486432Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1f824183-19e1-4ed2-ae66-6b40e05324b8 name=/runtime.v1.RuntimeService/Version
	Sep 27 01:58:25 old-k8s-version-612261 crio[628]: time="2024-09-27 01:58:25.251578795Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1f824183-19e1-4ed2-ae66-6b40e05324b8 name=/runtime.v1.RuntimeService/Version
	Sep 27 01:58:25 old-k8s-version-612261 crio[628]: time="2024-09-27 01:58:25.253061267Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c1e844ea-2389-4ef7-9a41-bfb3e0daa91d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 01:58:25 old-k8s-version-612261 crio[628]: time="2024-09-27 01:58:25.253557741Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402305253534174,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c1e844ea-2389-4ef7-9a41-bfb3e0daa91d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 01:58:25 old-k8s-version-612261 crio[628]: time="2024-09-27 01:58:25.254174510Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5e928d58-549a-4fee-8309-0d3a588780a6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:58:25 old-k8s-version-612261 crio[628]: time="2024-09-27 01:58:25.254233166Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5e928d58-549a-4fee-8309-0d3a588780a6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 01:58:25 old-k8s-version-612261 crio[628]: time="2024-09-27 01:58:25.254271515Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=5e928d58-549a-4fee-8309-0d3a588780a6 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Sep27 01:40] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051380] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040023] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Sep27 01:41] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.490738] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.597277] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000013] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.637888] systemd-fstab-generator[555]: Ignoring "noauto" option for root device
	[  +0.070410] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.081325] systemd-fstab-generator[568]: Ignoring "noauto" option for root device
	[  +0.210782] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.144654] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.262711] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +6.839165] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.064025] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.828367] systemd-fstab-generator[1004]: Ignoring "noauto" option for root device
	[ +11.175171] kauditd_printk_skb: 46 callbacks suppressed
	[Sep27 01:45] systemd-fstab-generator[5075]: Ignoring "noauto" option for root device
	[Sep27 01:47] systemd-fstab-generator[5347]: Ignoring "noauto" option for root device
	[  +0.069319] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 01:58:25 up 17 min,  0 users,  load average: 0.00, 0.02, 0.05
	Linux old-k8s-version-612261 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Sep 27 01:58:19 old-k8s-version-612261 kubelet[6528]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Sep 27 01:58:19 old-k8s-version-612261 kubelet[6528]: net/http.(*Transport).dialConnFor(0xc0004f4000, 0xc000c98a50)
	Sep 27 01:58:19 old-k8s-version-612261 kubelet[6528]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Sep 27 01:58:19 old-k8s-version-612261 kubelet[6528]: created by net/http.(*Transport).queueForDial
	Sep 27 01:58:19 old-k8s-version-612261 kubelet[6528]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Sep 27 01:58:19 old-k8s-version-612261 kubelet[6528]: goroutine 168 [select]:
	Sep 27 01:58:19 old-k8s-version-612261 kubelet[6528]: net.(*netFD).connect.func2(0x4f7fe40, 0xc0009724e0, 0xc000c14100, 0xc0000a8a20, 0xc0000a8960)
	Sep 27 01:58:19 old-k8s-version-612261 kubelet[6528]:         /usr/local/go/src/net/fd_unix.go:118 +0xc5
	Sep 27 01:58:19 old-k8s-version-612261 kubelet[6528]: created by net.(*netFD).connect
	Sep 27 01:58:19 old-k8s-version-612261 kubelet[6528]:         /usr/local/go/src/net/fd_unix.go:117 +0x234
	Sep 27 01:58:19 old-k8s-version-612261 kubelet[6528]: goroutine 172 [select]:
	Sep 27 01:58:19 old-k8s-version-612261 kubelet[6528]: net.(*netFD).connect.func2(0x4f7fe40, 0xc000972a80, 0xc000c14200, 0xc0000a9080, 0xc0000a9020)
	Sep 27 01:58:19 old-k8s-version-612261 kubelet[6528]:         /usr/local/go/src/net/fd_unix.go:118 +0xc5
	Sep 27 01:58:19 old-k8s-version-612261 kubelet[6528]: created by net.(*netFD).connect
	Sep 27 01:58:19 old-k8s-version-612261 kubelet[6528]:         /usr/local/go/src/net/fd_unix.go:117 +0x234
	Sep 27 01:58:19 old-k8s-version-612261 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Sep 27 01:58:19 old-k8s-version-612261 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Sep 27 01:58:20 old-k8s-version-612261 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Sep 27 01:58:20 old-k8s-version-612261 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Sep 27 01:58:20 old-k8s-version-612261 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Sep 27 01:58:20 old-k8s-version-612261 kubelet[6537]: I0927 01:58:20.501027    6537 server.go:416] Version: v1.20.0
	Sep 27 01:58:20 old-k8s-version-612261 kubelet[6537]: I0927 01:58:20.501395    6537 server.go:837] Client rotation is on, will bootstrap in background
	Sep 27 01:58:20 old-k8s-version-612261 kubelet[6537]: I0927 01:58:20.503481    6537 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Sep 27 01:58:20 old-k8s-version-612261 kubelet[6537]: I0927 01:58:20.504647    6537 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Sep 27 01:58:20 old-k8s-version-612261 kubelet[6537]: W0927 01:58:20.504685    6537 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-612261 -n old-k8s-version-612261
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-612261 -n old-k8s-version-612261: exit status 2 (215.825212ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-612261" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.51s)
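Note on the logged suggestion above: minikube points at the kubelet cgroup driver ('journalctl -xeu kubelet', --extra-config=kubelet.cgroup-driver=systemd). A minimal sketch of that suggested re-run for this profile, assuming the same kvm2 driver, cri-o runtime, and Kubernetes v1.20.0 used in this report (the full original start flags are not shown here, so treat the other flags as illustrative):
	minikube start -p old-k8s-version-612261 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd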

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (488.74s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-245911 -n embed-certs-245911
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-09-27 02:03:16.798050048 +0000 UTC m=+6512.983658292
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-245911 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-245911 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.947µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-245911 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-245911 -n embed-certs-245911
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-245911 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-245911 logs -n 25: (1.332407869s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-782846 pgrep -a        | auto-782846       | jenkins | v1.34.0 | 27 Sep 24 02:02 UTC | 27 Sep 24 02:02 UTC |
	|         | kubelet                        |                   |         |         |                     |                     |
	| image   | newest-cni-223910 image list   | newest-cni-223910 | jenkins | v1.34.0 | 27 Sep 24 02:02 UTC | 27 Sep 24 02:02 UTC |
	|         | --format=json                  |                   |         |         |                     |                     |
	| pause   | -p newest-cni-223910           | newest-cni-223910 | jenkins | v1.34.0 | 27 Sep 24 02:02 UTC | 27 Sep 24 02:02 UTC |
	|         | --alsologtostderr -v=1         |                   |         |         |                     |                     |
	| unpause | -p newest-cni-223910           | newest-cni-223910 | jenkins | v1.34.0 | 27 Sep 24 02:02 UTC | 27 Sep 24 02:02 UTC |
	|         | --alsologtostderr -v=1         |                   |         |         |                     |                     |
	| delete  | -p newest-cni-223910           | newest-cni-223910 | jenkins | v1.34.0 | 27 Sep 24 02:02 UTC | 27 Sep 24 02:02 UTC |
	| delete  | -p newest-cni-223910           | newest-cni-223910 | jenkins | v1.34.0 | 27 Sep 24 02:02 UTC | 27 Sep 24 02:02 UTC |
	| start   | -p kindnet-782846              | kindnet-782846    | jenkins | v1.34.0 | 27 Sep 24 02:02 UTC |                     |
	|         | --memory=3072                  |                   |         |         |                     |                     |
	|         | --alsologtostderr --wait=true  |                   |         |         |                     |                     |
	|         | --wait-timeout=15m             |                   |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2    |                   |         |         |                     |                     |
	|         | --container-runtime=crio       |                   |         |         |                     |                     |
	| ssh     | -p auto-782846 sudo cat        | auto-782846       | jenkins | v1.34.0 | 27 Sep 24 02:03 UTC | 27 Sep 24 02:03 UTC |
	|         | /etc/nsswitch.conf             |                   |         |         |                     |                     |
	| ssh     | -p auto-782846 sudo cat        | auto-782846       | jenkins | v1.34.0 | 27 Sep 24 02:03 UTC | 27 Sep 24 02:03 UTC |
	|         | /etc/hosts                     |                   |         |         |                     |                     |
	| ssh     | -p auto-782846 sudo cat        | auto-782846       | jenkins | v1.34.0 | 27 Sep 24 02:03 UTC | 27 Sep 24 02:03 UTC |
	|         | /etc/resolv.conf               |                   |         |         |                     |                     |
	| ssh     | -p auto-782846 sudo crictl     | auto-782846       | jenkins | v1.34.0 | 27 Sep 24 02:03 UTC | 27 Sep 24 02:03 UTC |
	|         | pods                           |                   |         |         |                     |                     |
	| ssh     | -p auto-782846 sudo crictl ps  | auto-782846       | jenkins | v1.34.0 | 27 Sep 24 02:03 UTC | 27 Sep 24 02:03 UTC |
	|         | --all                          |                   |         |         |                     |                     |
	| ssh     | -p auto-782846 sudo find       | auto-782846       | jenkins | v1.34.0 | 27 Sep 24 02:03 UTC | 27 Sep 24 02:03 UTC |
	|         | /etc/cni -type f -exec sh -c   |                   |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;           |                   |         |         |                     |                     |
	| ssh     | -p auto-782846 sudo ip a s     | auto-782846       | jenkins | v1.34.0 | 27 Sep 24 02:03 UTC | 27 Sep 24 02:03 UTC |
	| ssh     | -p auto-782846 sudo ip r s     | auto-782846       | jenkins | v1.34.0 | 27 Sep 24 02:03 UTC | 27 Sep 24 02:03 UTC |
	| ssh     | -p auto-782846 sudo            | auto-782846       | jenkins | v1.34.0 | 27 Sep 24 02:03 UTC | 27 Sep 24 02:03 UTC |
	|         | iptables-save                  |                   |         |         |                     |                     |
	| ssh     | -p auto-782846 sudo iptables   | auto-782846       | jenkins | v1.34.0 | 27 Sep 24 02:03 UTC | 27 Sep 24 02:03 UTC |
	|         | -t nat -L -n -v                |                   |         |         |                     |                     |
	| ssh     | -p auto-782846 sudo systemctl  | auto-782846       | jenkins | v1.34.0 | 27 Sep 24 02:03 UTC | 27 Sep 24 02:03 UTC |
	|         | status kubelet --all --full    |                   |         |         |                     |                     |
	|         | --no-pager                     |                   |         |         |                     |                     |
	| ssh     | -p auto-782846 sudo systemctl  | auto-782846       | jenkins | v1.34.0 | 27 Sep 24 02:03 UTC | 27 Sep 24 02:03 UTC |
	|         | cat kubelet --no-pager         |                   |         |         |                     |                     |
	| ssh     | -p auto-782846 sudo journalctl | auto-782846       | jenkins | v1.34.0 | 27 Sep 24 02:03 UTC | 27 Sep 24 02:03 UTC |
	|         | -xeu kubelet --all --full      |                   |         |         |                     |                     |
	|         | --no-pager                     |                   |         |         |                     |                     |
	| ssh     | -p auto-782846 sudo cat        | auto-782846       | jenkins | v1.34.0 | 27 Sep 24 02:03 UTC | 27 Sep 24 02:03 UTC |
	|         | /etc/kubernetes/kubelet.conf   |                   |         |         |                     |                     |
	| ssh     | -p auto-782846 sudo cat        | auto-782846       | jenkins | v1.34.0 | 27 Sep 24 02:03 UTC | 27 Sep 24 02:03 UTC |
	|         | /var/lib/kubelet/config.yaml   |                   |         |         |                     |                     |
	| ssh     | -p auto-782846 sudo systemctl  | auto-782846       | jenkins | v1.34.0 | 27 Sep 24 02:03 UTC |                     |
	|         | status docker --all --full     |                   |         |         |                     |                     |
	|         | --no-pager                     |                   |         |         |                     |                     |
	| ssh     | -p auto-782846 sudo systemctl  | auto-782846       | jenkins | v1.34.0 | 27 Sep 24 02:03 UTC | 27 Sep 24 02:03 UTC |
	|         | cat docker --no-pager          |                   |         |         |                     |                     |
	| ssh     | -p auto-782846 sudo cat        | auto-782846       | jenkins | v1.34.0 | 27 Sep 24 02:03 UTC |                     |
	|         | /etc/docker/daemon.json        |                   |         |         |                     |                     |
	|---------|--------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 02:02:54
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 02:02:54.274952   77529 out.go:345] Setting OutFile to fd 1 ...
	I0927 02:02:54.275085   77529 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 02:02:54.275096   77529 out.go:358] Setting ErrFile to fd 2...
	I0927 02:02:54.275101   77529 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 02:02:54.275345   77529 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-14935/.minikube/bin
	I0927 02:02:54.276006   77529 out.go:352] Setting JSON to false
	I0927 02:02:54.277025   77529 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9919,"bootTime":1727392655,"procs":228,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 02:02:54.277123   77529 start.go:139] virtualization: kvm guest
	I0927 02:02:54.279184   77529 out.go:177] * [kindnet-782846] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0927 02:02:54.280435   77529 notify.go:220] Checking for updates...
	I0927 02:02:54.280468   77529 out.go:177]   - MINIKUBE_LOCATION=19711
	I0927 02:02:54.281664   77529 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 02:02:54.282941   77529 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 02:02:54.284248   77529 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-14935/.minikube
	I0927 02:02:54.285406   77529 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0927 02:02:54.286509   77529 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 02:02:54.288072   77529 config.go:182] Loaded profile config "auto-782846": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 02:02:54.288179   77529 config.go:182] Loaded profile config "default-k8s-diff-port-368295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 02:02:54.288273   77529 config.go:182] Loaded profile config "embed-certs-245911": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 02:02:54.288390   77529 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 02:02:54.324950   77529 out.go:177] * Using the kvm2 driver based on user configuration
	I0927 02:02:54.326215   77529 start.go:297] selected driver: kvm2
	I0927 02:02:54.326225   77529 start.go:901] validating driver "kvm2" against <nil>
	I0927 02:02:54.326235   77529 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 02:02:54.326926   77529 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 02:02:54.326991   77529 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19711-14935/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0927 02:02:54.341837   77529 install.go:137] /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I0927 02:02:54.341897   77529 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 02:02:54.342137   77529 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 02:02:54.342167   77529 cni.go:84] Creating CNI manager for "kindnet"
	I0927 02:02:54.342171   77529 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0927 02:02:54.342225   77529 start.go:340] cluster config:
	{Name:kindnet-782846 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-782846 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 02:02:54.342315   77529 iso.go:125] acquiring lock: {Name:mkc202a14fbe20838e31e7efc444c4f65351f9ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 02:02:54.343980   77529 out.go:177] * Starting "kindnet-782846" primary control-plane node in "kindnet-782846" cluster
	I0927 02:02:54.345130   77529 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 02:02:54.345169   77529 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0927 02:02:54.345178   77529 cache.go:56] Caching tarball of preloaded images
	I0927 02:02:54.345247   77529 preload.go:172] Found /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0927 02:02:54.345257   77529 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0927 02:02:54.345342   77529 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kindnet-782846/config.json ...
	I0927 02:02:54.345366   77529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kindnet-782846/config.json: {Name:mk9c4c61cd9ffebc4112794d75bd842f46d7fee7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 02:02:54.345493   77529 start.go:360] acquireMachinesLock for kindnet-782846: {Name:mkef866a3f2eb57063758ef06140cca3a2e40b18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 02:02:54.345523   77529 start.go:364] duration metric: took 17.827µs to acquireMachinesLock for "kindnet-782846"
	I0927 02:02:54.345538   77529 start.go:93] Provisioning new machine with config: &{Name:kindnet-782846 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-782846 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 02:02:54.345594   77529 start.go:125] createHost starting for "" (driver="kvm2")
	I0927 02:02:54.347231   77529 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0927 02:02:54.347394   77529 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 02:02:54.347437   77529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 02:02:54.362196   77529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39001
	I0927 02:02:54.362624   77529 main.go:141] libmachine: () Calling .GetVersion
	I0927 02:02:54.363221   77529 main.go:141] libmachine: Using API Version  1
	I0927 02:02:54.363242   77529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 02:02:54.363580   77529 main.go:141] libmachine: () Calling .GetMachineName
	I0927 02:02:54.363781   77529 main.go:141] libmachine: (kindnet-782846) Calling .GetMachineName
	I0927 02:02:54.363929   77529 main.go:141] libmachine: (kindnet-782846) Calling .DriverName
	I0927 02:02:54.364060   77529 start.go:159] libmachine.API.Create for "kindnet-782846" (driver="kvm2")
	I0927 02:02:54.364086   77529 client.go:168] LocalClient.Create starting
	I0927 02:02:54.364111   77529 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem
	I0927 02:02:54.364141   77529 main.go:141] libmachine: Decoding PEM data...
	I0927 02:02:54.364154   77529 main.go:141] libmachine: Parsing certificate...
	I0927 02:02:54.364201   77529 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem
	I0927 02:02:54.364223   77529 main.go:141] libmachine: Decoding PEM data...
	I0927 02:02:54.364233   77529 main.go:141] libmachine: Parsing certificate...
	I0927 02:02:54.364248   77529 main.go:141] libmachine: Running pre-create checks...
	I0927 02:02:54.364256   77529 main.go:141] libmachine: (kindnet-782846) Calling .PreCreateCheck
	I0927 02:02:54.364595   77529 main.go:141] libmachine: (kindnet-782846) Calling .GetConfigRaw
	I0927 02:02:54.364945   77529 main.go:141] libmachine: Creating machine...
	I0927 02:02:54.364957   77529 main.go:141] libmachine: (kindnet-782846) Calling .Create
	I0927 02:02:54.365095   77529 main.go:141] libmachine: (kindnet-782846) Creating KVM machine...
	I0927 02:02:54.366234   77529 main.go:141] libmachine: (kindnet-782846) DBG | found existing default KVM network
	I0927 02:02:54.367294   77529 main.go:141] libmachine: (kindnet-782846) DBG | I0927 02:02:54.367137   77569 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:66:3a:58} reservation:<nil>}
	I0927 02:02:54.368264   77529 main.go:141] libmachine: (kindnet-782846) DBG | I0927 02:02:54.368168   77569 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:ef:f5:70} reservation:<nil>}
	I0927 02:02:54.368991   77529 main.go:141] libmachine: (kindnet-782846) DBG | I0927 02:02:54.368925   77569 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:b0:e1:14} reservation:<nil>}
	I0927 02:02:54.370005   77529 main.go:141] libmachine: (kindnet-782846) DBG | I0927 02:02:54.369928   77569 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00028b9e0}
	I0927 02:02:54.370100   77529 main.go:141] libmachine: (kindnet-782846) DBG | created network xml: 
	I0927 02:02:54.370128   77529 main.go:141] libmachine: (kindnet-782846) DBG | <network>
	I0927 02:02:54.370135   77529 main.go:141] libmachine: (kindnet-782846) DBG |   <name>mk-kindnet-782846</name>
	I0927 02:02:54.370141   77529 main.go:141] libmachine: (kindnet-782846) DBG |   <dns enable='no'/>
	I0927 02:02:54.370145   77529 main.go:141] libmachine: (kindnet-782846) DBG |   
	I0927 02:02:54.370151   77529 main.go:141] libmachine: (kindnet-782846) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0927 02:02:54.370161   77529 main.go:141] libmachine: (kindnet-782846) DBG |     <dhcp>
	I0927 02:02:54.370167   77529 main.go:141] libmachine: (kindnet-782846) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0927 02:02:54.370170   77529 main.go:141] libmachine: (kindnet-782846) DBG |     </dhcp>
	I0927 02:02:54.370177   77529 main.go:141] libmachine: (kindnet-782846) DBG |   </ip>
	I0927 02:02:54.370187   77529 main.go:141] libmachine: (kindnet-782846) DBG |   
	I0927 02:02:54.370192   77529 main.go:141] libmachine: (kindnet-782846) DBG | </network>
	I0927 02:02:54.370201   77529 main.go:141] libmachine: (kindnet-782846) DBG | 
	I0927 02:02:54.375344   77529 main.go:141] libmachine: (kindnet-782846) DBG | trying to create private KVM network mk-kindnet-782846 192.168.72.0/24...
	I0927 02:02:54.445121   77529 main.go:141] libmachine: (kindnet-782846) DBG | private KVM network mk-kindnet-782846 192.168.72.0/24 created
	I0927 02:02:54.445184   77529 main.go:141] libmachine: (kindnet-782846) Setting up store path in /home/jenkins/minikube-integration/19711-14935/.minikube/machines/kindnet-782846 ...
	I0927 02:02:54.445208   77529 main.go:141] libmachine: (kindnet-782846) Building disk image from file:///home/jenkins/minikube-integration/19711-14935/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0927 02:02:54.445224   77529 main.go:141] libmachine: (kindnet-782846) DBG | I0927 02:02:54.445168   77569 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19711-14935/.minikube
	I0927 02:02:54.445417   77529 main.go:141] libmachine: (kindnet-782846) Downloading /home/jenkins/minikube-integration/19711-14935/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19711-14935/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0927 02:02:54.689042   77529 main.go:141] libmachine: (kindnet-782846) DBG | I0927 02:02:54.688916   77569 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/kindnet-782846/id_rsa...
	I0927 02:02:54.846977   77529 main.go:141] libmachine: (kindnet-782846) DBG | I0927 02:02:54.846860   77569 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/kindnet-782846/kindnet-782846.rawdisk...
	I0927 02:02:54.847008   77529 main.go:141] libmachine: (kindnet-782846) DBG | Writing magic tar header
	I0927 02:02:54.847031   77529 main.go:141] libmachine: (kindnet-782846) DBG | Writing SSH key tar header
	I0927 02:02:54.847042   77529 main.go:141] libmachine: (kindnet-782846) DBG | I0927 02:02:54.846971   77569 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19711-14935/.minikube/machines/kindnet-782846 ...
	I0927 02:02:54.847060   77529 main.go:141] libmachine: (kindnet-782846) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/kindnet-782846
	I0927 02:02:54.847093   77529 main.go:141] libmachine: (kindnet-782846) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935/.minikube/machines/kindnet-782846 (perms=drwx------)
	I0927 02:02:54.847104   77529 main.go:141] libmachine: (kindnet-782846) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935/.minikube/machines (perms=drwxr-xr-x)
	I0927 02:02:54.847117   77529 main.go:141] libmachine: (kindnet-782846) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935/.minikube (perms=drwxr-xr-x)
	I0927 02:02:54.847125   77529 main.go:141] libmachine: (kindnet-782846) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935 (perms=drwxrwxr-x)
	I0927 02:02:54.847132   77529 main.go:141] libmachine: (kindnet-782846) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0927 02:02:54.847139   77529 main.go:141] libmachine: (kindnet-782846) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0927 02:02:54.847145   77529 main.go:141] libmachine: (kindnet-782846) Creating domain...
	I0927 02:02:54.847157   77529 main.go:141] libmachine: (kindnet-782846) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935/.minikube/machines
	I0927 02:02:54.847162   77529 main.go:141] libmachine: (kindnet-782846) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935/.minikube
	I0927 02:02:54.847169   77529 main.go:141] libmachine: (kindnet-782846) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935
	I0927 02:02:54.847176   77529 main.go:141] libmachine: (kindnet-782846) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0927 02:02:54.847212   77529 main.go:141] libmachine: (kindnet-782846) DBG | Checking permissions on dir: /home/jenkins
	I0927 02:02:54.847237   77529 main.go:141] libmachine: (kindnet-782846) DBG | Checking permissions on dir: /home
	I0927 02:02:54.847249   77529 main.go:141] libmachine: (kindnet-782846) DBG | Skipping /home - not owner
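At this point the driver has generated the machine's SSH key, allocated the raw disk image, and walked the store path fixing permissions. As a rough sketch of what those steps amount to (not minikube's own code), the snippet below generates an RSA key and allocates a sparse raw disk using only the Go standard library; the directory, file names, and the 20 GiB size are placeholders.

```go
// Hedged sketch: roughly what "Creating ssh key" and "Creating raw disk image"
// amount to. Paths and the 20 GiB size are illustrative, not minikube's values.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"path/filepath"
)

func main() {
	machineDir := "/tmp/example-machine" // hypothetical store path
	if err := os.MkdirAll(machineDir, 0o700); err != nil {
		log.Fatal(err)
	}

	// Generate an RSA key and write the PEM-encoded private key (id_rsa).
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	pemBytes := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	if err := os.WriteFile(filepath.Join(machineDir, "id_rsa"), pemBytes, 0o600); err != nil {
		log.Fatal(err)
	}

	// Allocate a sparse raw disk image by truncating the file to the target size.
	disk, err := os.Create(filepath.Join(machineDir, "example.rawdisk"))
	if err != nil {
		log.Fatal(err)
	}
	defer disk.Close()
	const diskSize = 20 << 30 // 20 GiB, illustrative only
	if err := disk.Truncate(diskSize); err != nil {
		log.Fatal(err)
	}
	fmt.Println("key and sparse raw disk created under", machineDir)
}
```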
	I0927 02:02:54.848279   77529 main.go:141] libmachine: (kindnet-782846) define libvirt domain using xml: 
	I0927 02:02:54.848302   77529 main.go:141] libmachine: (kindnet-782846) <domain type='kvm'>
	I0927 02:02:54.848312   77529 main.go:141] libmachine: (kindnet-782846)   <name>kindnet-782846</name>
	I0927 02:02:54.848320   77529 main.go:141] libmachine: (kindnet-782846)   <memory unit='MiB'>3072</memory>
	I0927 02:02:54.848331   77529 main.go:141] libmachine: (kindnet-782846)   <vcpu>2</vcpu>
	I0927 02:02:54.848340   77529 main.go:141] libmachine: (kindnet-782846)   <features>
	I0927 02:02:54.848350   77529 main.go:141] libmachine: (kindnet-782846)     <acpi/>
	I0927 02:02:54.848354   77529 main.go:141] libmachine: (kindnet-782846)     <apic/>
	I0927 02:02:54.848359   77529 main.go:141] libmachine: (kindnet-782846)     <pae/>
	I0927 02:02:54.848363   77529 main.go:141] libmachine: (kindnet-782846)     
	I0927 02:02:54.848368   77529 main.go:141] libmachine: (kindnet-782846)   </features>
	I0927 02:02:54.848385   77529 main.go:141] libmachine: (kindnet-782846)   <cpu mode='host-passthrough'>
	I0927 02:02:54.848392   77529 main.go:141] libmachine: (kindnet-782846)   
	I0927 02:02:54.848396   77529 main.go:141] libmachine: (kindnet-782846)   </cpu>
	I0927 02:02:54.848400   77529 main.go:141] libmachine: (kindnet-782846)   <os>
	I0927 02:02:54.848405   77529 main.go:141] libmachine: (kindnet-782846)     <type>hvm</type>
	I0927 02:02:54.848410   77529 main.go:141] libmachine: (kindnet-782846)     <boot dev='cdrom'/>
	I0927 02:02:54.848416   77529 main.go:141] libmachine: (kindnet-782846)     <boot dev='hd'/>
	I0927 02:02:54.848421   77529 main.go:141] libmachine: (kindnet-782846)     <bootmenu enable='no'/>
	I0927 02:02:54.848427   77529 main.go:141] libmachine: (kindnet-782846)   </os>
	I0927 02:02:54.848431   77529 main.go:141] libmachine: (kindnet-782846)   <devices>
	I0927 02:02:54.848448   77529 main.go:141] libmachine: (kindnet-782846)     <disk type='file' device='cdrom'>
	I0927 02:02:54.848458   77529 main.go:141] libmachine: (kindnet-782846)       <source file='/home/jenkins/minikube-integration/19711-14935/.minikube/machines/kindnet-782846/boot2docker.iso'/>
	I0927 02:02:54.848465   77529 main.go:141] libmachine: (kindnet-782846)       <target dev='hdc' bus='scsi'/>
	I0927 02:02:54.848471   77529 main.go:141] libmachine: (kindnet-782846)       <readonly/>
	I0927 02:02:54.848477   77529 main.go:141] libmachine: (kindnet-782846)     </disk>
	I0927 02:02:54.848485   77529 main.go:141] libmachine: (kindnet-782846)     <disk type='file' device='disk'>
	I0927 02:02:54.848493   77529 main.go:141] libmachine: (kindnet-782846)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0927 02:02:54.848505   77529 main.go:141] libmachine: (kindnet-782846)       <source file='/home/jenkins/minikube-integration/19711-14935/.minikube/machines/kindnet-782846/kindnet-782846.rawdisk'/>
	I0927 02:02:54.848515   77529 main.go:141] libmachine: (kindnet-782846)       <target dev='hda' bus='virtio'/>
	I0927 02:02:54.848539   77529 main.go:141] libmachine: (kindnet-782846)     </disk>
	I0927 02:02:54.848570   77529 main.go:141] libmachine: (kindnet-782846)     <interface type='network'>
	I0927 02:02:54.848592   77529 main.go:141] libmachine: (kindnet-782846)       <source network='mk-kindnet-782846'/>
	I0927 02:02:54.848603   77529 main.go:141] libmachine: (kindnet-782846)       <model type='virtio'/>
	I0927 02:02:54.848613   77529 main.go:141] libmachine: (kindnet-782846)     </interface>
	I0927 02:02:54.848619   77529 main.go:141] libmachine: (kindnet-782846)     <interface type='network'>
	I0927 02:02:54.848625   77529 main.go:141] libmachine: (kindnet-782846)       <source network='default'/>
	I0927 02:02:54.848636   77529 main.go:141] libmachine: (kindnet-782846)       <model type='virtio'/>
	I0927 02:02:54.848643   77529 main.go:141] libmachine: (kindnet-782846)     </interface>
	I0927 02:02:54.848647   77529 main.go:141] libmachine: (kindnet-782846)     <serial type='pty'>
	I0927 02:02:54.848657   77529 main.go:141] libmachine: (kindnet-782846)       <target port='0'/>
	I0927 02:02:54.848666   77529 main.go:141] libmachine: (kindnet-782846)     </serial>
	I0927 02:02:54.848676   77529 main.go:141] libmachine: (kindnet-782846)     <console type='pty'>
	I0927 02:02:54.848686   77529 main.go:141] libmachine: (kindnet-782846)       <target type='serial' port='0'/>
	I0927 02:02:54.848694   77529 main.go:141] libmachine: (kindnet-782846)     </console>
	I0927 02:02:54.848704   77529 main.go:141] libmachine: (kindnet-782846)     <rng model='virtio'>
	I0927 02:02:54.848723   77529 main.go:141] libmachine: (kindnet-782846)       <backend model='random'>/dev/random</backend>
	I0927 02:02:54.848735   77529 main.go:141] libmachine: (kindnet-782846)     </rng>
	I0927 02:02:54.848745   77529 main.go:141] libmachine: (kindnet-782846)     
	I0927 02:02:54.848751   77529 main.go:141] libmachine: (kindnet-782846)     
	I0927 02:02:54.848760   77529 main.go:141] libmachine: (kindnet-782846)   </devices>
	I0927 02:02:54.848766   77529 main.go:141] libmachine: (kindnet-782846) </domain>
	I0927 02:02:54.848775   77529 main.go:141] libmachine: (kindnet-782846) 
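The XML echoed above is the libvirt domain definition for the new VM: 3072 MiB of memory, 2 vCPUs, the boot2docker ISO attached as a CD-ROM, the raw disk, and two virtio NICs on the default and mk-kindnet-782846 networks. A hedged sketch of rendering a trimmed-down version of such a definition with text/template follows; the template and the ISO/disk paths are illustrative, not the driver's actual template.

```go
// Hedged sketch: rendering a trimmed-down libvirt domain XML with text/template.
// The template body and the path values are placeholders, not minikube's template.
package main

import (
	"log"
	"os"
	"text/template"
)

const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='cdrom'>
      <source file='{{.ISOPath}}'/>
      <target dev='hdc' bus='scsi'/>
      <readonly/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads'/>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

type domainParams struct {
	Name      string
	MemoryMiB int
	CPUs      int
	ISOPath   string
	DiskPath  string
	Network   string
}

func main() {
	tmpl := template.Must(template.New("domain").Parse(domainTmpl))
	// Values taken from the log above where available; paths are placeholders.
	params := domainParams{
		Name:      "kindnet-782846",
		MemoryMiB: 3072,
		CPUs:      2,
		ISOPath:   "/path/to/boot2docker.iso",
		DiskPath:  "/path/to/kindnet-782846.rawdisk",
		Network:   "mk-kindnet-782846",
	}
	if err := tmpl.Execute(os.Stdout, params); err != nil {
		log.Fatal(err)
	}
}
```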
	I0927 02:02:54.852972   77529 main.go:141] libmachine: (kindnet-782846) DBG | domain kindnet-782846 has defined MAC address 52:54:00:69:59:f6 in network default
	I0927 02:02:54.853489   77529 main.go:141] libmachine: (kindnet-782846) DBG | domain kindnet-782846 has defined MAC address 52:54:00:f4:d8:fc in network mk-kindnet-782846
	I0927 02:02:54.853511   77529 main.go:141] libmachine: (kindnet-782846) Ensuring networks are active...
	I0927 02:02:54.854128   77529 main.go:141] libmachine: (kindnet-782846) Ensuring network default is active
	I0927 02:02:54.854485   77529 main.go:141] libmachine: (kindnet-782846) Ensuring network mk-kindnet-782846 is active
	I0927 02:02:54.854966   77529 main.go:141] libmachine: (kindnet-782846) Getting domain xml...
	I0927 02:02:54.855717   77529 main.go:141] libmachine: (kindnet-782846) Creating domain...
	I0927 02:02:56.113265   77529 main.go:141] libmachine: (kindnet-782846) Waiting to get IP...
	I0927 02:02:56.114046   77529 main.go:141] libmachine: (kindnet-782846) DBG | domain kindnet-782846 has defined MAC address 52:54:00:f4:d8:fc in network mk-kindnet-782846
	I0927 02:02:56.114437   77529 main.go:141] libmachine: (kindnet-782846) DBG | unable to find current IP address of domain kindnet-782846 in network mk-kindnet-782846
	I0927 02:02:56.114460   77529 main.go:141] libmachine: (kindnet-782846) DBG | I0927 02:02:56.114406   77569 retry.go:31] will retry after 300.566239ms: waiting for machine to come up
	I0927 02:02:56.416897   77529 main.go:141] libmachine: (kindnet-782846) DBG | domain kindnet-782846 has defined MAC address 52:54:00:f4:d8:fc in network mk-kindnet-782846
	I0927 02:02:56.417420   77529 main.go:141] libmachine: (kindnet-782846) DBG | unable to find current IP address of domain kindnet-782846 in network mk-kindnet-782846
	I0927 02:02:56.417445   77529 main.go:141] libmachine: (kindnet-782846) DBG | I0927 02:02:56.417393   77569 retry.go:31] will retry after 305.815495ms: waiting for machine to come up
	I0927 02:02:56.724825   77529 main.go:141] libmachine: (kindnet-782846) DBG | domain kindnet-782846 has defined MAC address 52:54:00:f4:d8:fc in network mk-kindnet-782846
	I0927 02:02:56.725306   77529 main.go:141] libmachine: (kindnet-782846) DBG | unable to find current IP address of domain kindnet-782846 in network mk-kindnet-782846
	I0927 02:02:56.725335   77529 main.go:141] libmachine: (kindnet-782846) DBG | I0927 02:02:56.725255   77569 retry.go:31] will retry after 391.003834ms: waiting for machine to come up
	I0927 02:02:57.117289   77529 main.go:141] libmachine: (kindnet-782846) DBG | domain kindnet-782846 has defined MAC address 52:54:00:f4:d8:fc in network mk-kindnet-782846
	I0927 02:02:57.117746   77529 main.go:141] libmachine: (kindnet-782846) DBG | unable to find current IP address of domain kindnet-782846 in network mk-kindnet-782846
	I0927 02:02:57.117770   77529 main.go:141] libmachine: (kindnet-782846) DBG | I0927 02:02:57.117709   77569 retry.go:31] will retry after 557.665658ms: waiting for machine to come up
	I0927 02:02:57.677499   77529 main.go:141] libmachine: (kindnet-782846) DBG | domain kindnet-782846 has defined MAC address 52:54:00:f4:d8:fc in network mk-kindnet-782846
	I0927 02:02:57.677925   77529 main.go:141] libmachine: (kindnet-782846) DBG | unable to find current IP address of domain kindnet-782846 in network mk-kindnet-782846
	I0927 02:02:57.677949   77529 main.go:141] libmachine: (kindnet-782846) DBG | I0927 02:02:57.677888   77569 retry.go:31] will retry after 478.856067ms: waiting for machine to come up
	I0927 02:02:58.158523   77529 main.go:141] libmachine: (kindnet-782846) DBG | domain kindnet-782846 has defined MAC address 52:54:00:f4:d8:fc in network mk-kindnet-782846
	I0927 02:02:58.158985   77529 main.go:141] libmachine: (kindnet-782846) DBG | unable to find current IP address of domain kindnet-782846 in network mk-kindnet-782846
	I0927 02:02:58.159029   77529 main.go:141] libmachine: (kindnet-782846) DBG | I0927 02:02:58.158954   77569 retry.go:31] will retry after 660.159475ms: waiting for machine to come up
	I0927 02:02:58.820323   77529 main.go:141] libmachine: (kindnet-782846) DBG | domain kindnet-782846 has defined MAC address 52:54:00:f4:d8:fc in network mk-kindnet-782846
	I0927 02:02:58.820882   77529 main.go:141] libmachine: (kindnet-782846) DBG | unable to find current IP address of domain kindnet-782846 in network mk-kindnet-782846
	I0927 02:02:58.820907   77529 main.go:141] libmachine: (kindnet-782846) DBG | I0927 02:02:58.820836   77569 retry.go:31] will retry after 878.807736ms: waiting for machine to come up
	I0927 02:02:59.701515   77529 main.go:141] libmachine: (kindnet-782846) DBG | domain kindnet-782846 has defined MAC address 52:54:00:f4:d8:fc in network mk-kindnet-782846
	I0927 02:02:59.702055   77529 main.go:141] libmachine: (kindnet-782846) DBG | unable to find current IP address of domain kindnet-782846 in network mk-kindnet-782846
	I0927 02:02:59.702096   77529 main.go:141] libmachine: (kindnet-782846) DBG | I0927 02:02:59.702006   77569 retry.go:31] will retry after 1.096661181s: waiting for machine to come up
	I0927 02:03:00.799754   77529 main.go:141] libmachine: (kindnet-782846) DBG | domain kindnet-782846 has defined MAC address 52:54:00:f4:d8:fc in network mk-kindnet-782846
	I0927 02:03:00.800316   77529 main.go:141] libmachine: (kindnet-782846) DBG | unable to find current IP address of domain kindnet-782846 in network mk-kindnet-782846
	I0927 02:03:00.800346   77529 main.go:141] libmachine: (kindnet-782846) DBG | I0927 02:03:00.800266   77569 retry.go:31] will retry after 1.465726332s: waiting for machine to come up
	I0927 02:03:02.267931   77529 main.go:141] libmachine: (kindnet-782846) DBG | domain kindnet-782846 has defined MAC address 52:54:00:f4:d8:fc in network mk-kindnet-782846
	I0927 02:03:02.268435   77529 main.go:141] libmachine: (kindnet-782846) DBG | unable to find current IP address of domain kindnet-782846 in network mk-kindnet-782846
	I0927 02:03:02.268461   77529 main.go:141] libmachine: (kindnet-782846) DBG | I0927 02:03:02.268380   77569 retry.go:31] will retry after 1.812377376s: waiting for machine to come up
	I0927 02:03:04.083396   77529 main.go:141] libmachine: (kindnet-782846) DBG | domain kindnet-782846 has defined MAC address 52:54:00:f4:d8:fc in network mk-kindnet-782846
	I0927 02:03:04.083861   77529 main.go:141] libmachine: (kindnet-782846) DBG | unable to find current IP address of domain kindnet-782846 in network mk-kindnet-782846
	I0927 02:03:04.083888   77529 main.go:141] libmachine: (kindnet-782846) DBG | I0927 02:03:04.083835   77569 retry.go:31] will retry after 2.1800619s: waiting for machine to come up
	I0927 02:03:06.266156   77529 main.go:141] libmachine: (kindnet-782846) DBG | domain kindnet-782846 has defined MAC address 52:54:00:f4:d8:fc in network mk-kindnet-782846
	I0927 02:03:06.266648   77529 main.go:141] libmachine: (kindnet-782846) DBG | unable to find current IP address of domain kindnet-782846 in network mk-kindnet-782846
	I0927 02:03:06.266666   77529 main.go:141] libmachine: (kindnet-782846) DBG | I0927 02:03:06.266615   77569 retry.go:31] will retry after 3.533213926s: waiting for machine to come up
	I0927 02:03:09.801200   77529 main.go:141] libmachine: (kindnet-782846) DBG | domain kindnet-782846 has defined MAC address 52:54:00:f4:d8:fc in network mk-kindnet-782846
	I0927 02:03:09.801658   77529 main.go:141] libmachine: (kindnet-782846) DBG | unable to find current IP address of domain kindnet-782846 in network mk-kindnet-782846
	I0927 02:03:09.801686   77529 main.go:141] libmachine: (kindnet-782846) DBG | I0927 02:03:09.801607   77569 retry.go:31] will retry after 3.491191772s: waiting for machine to come up
	I0927 02:03:13.296214   77529 main.go:141] libmachine: (kindnet-782846) DBG | domain kindnet-782846 has defined MAC address 52:54:00:f4:d8:fc in network mk-kindnet-782846
	I0927 02:03:13.296596   77529 main.go:141] libmachine: (kindnet-782846) DBG | unable to find current IP address of domain kindnet-782846 in network mk-kindnet-782846
	I0927 02:03:13.296618   77529 main.go:141] libmachine: (kindnet-782846) DBG | I0927 02:03:13.296554   77569 retry.go:31] will retry after 5.668616798s: waiting for machine to come up
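Each "will retry after ..." line above comes from a polling loop that repeatedly checks whether the new domain has obtained an IP on the mk-kindnet-782846 network, backing off between attempts. A minimal sketch of that retry-with-backoff pattern is below; the initial delay, growth factor, jitter, and cap are assumptions for illustration, not the values used by retry.go.

```go
// Hedged sketch of a retry-with-backoff loop like the "will retry after ..."
// lines above. The growth factor, jitter, and cap are assumptions.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling fn until it succeeds or the deadline passes,
// sleeping a growing, jittered delay between attempts.
func retryWithBackoff(deadline time.Duration, fn func() error) error {
	start := time.Now()
	wait := 300 * time.Millisecond
	for attempt := 1; ; attempt++ {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("gave up after %d attempts: %w", attempt, err)
		}
		// Add jitter and grow the delay, capped at a few seconds.
		sleep := wait + time.Duration(rand.Int63n(int64(wait/2)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		if wait < 4*time.Second {
			wait = wait * 3 / 2
		}
	}
}

func main() {
	tries := 0
	err := retryWithBackoff(30*time.Second, func() error {
		tries++
		if tries < 5 {
			return errors.New("waiting for machine to come up")
		}
		return nil // pretend the domain reported an IP
	})
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("machine is up after", tries, "attempts")
}
```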
	
	
	==> CRI-O <==
	Sep 27 02:03:17 embed-certs-245911 crio[720]: time="2024-09-27 02:03:17.418729324Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402597418703824,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=48798a48-e800-457a-81f2-163747c574aa name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 02:03:17 embed-certs-245911 crio[720]: time="2024-09-27 02:03:17.419465112Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cf9e9b75-939e-473d-9fe2-d61828eab00f name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 02:03:17 embed-certs-245911 crio[720]: time="2024-09-27 02:03:17.419519359Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cf9e9b75-939e-473d-9fe2-d61828eab00f name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 02:03:17 embed-certs-245911 crio[720]: time="2024-09-27 02:03:17.419716139Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ef3e7f4404a3bb5acd2a338f08e2d0ed91b3e70f1c11bfe6552bce8d73f93484,PodSandboxId:c7c1cbc5465bde39a0e13976fff50b102221c9e421bc2a7d170b15ceb86d5a24,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727401555904498416,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c48d125-370c-44a1-9ede-536881b40d57,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:448ea5668bbfde4c622ff366c9a4d879ed6fd522a860d8af0b9a0b81d0684ad7,PodSandboxId:e011a1aa370b8ae5fc35367eb3d4d947d070a20bb903edda1609fa74e0eb4c3d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727401555171115945,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-t4mxw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3f9faa4-be80-40bf-9080-363fcbf3f084,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86de3893b7ac9b71b09711ecfeea13b6d675dca6289b919969de5863d2baaa81,PodSandboxId:7729e650d0540b2d4b96d124755e986a31f237f764377fdcb746d60d4e8a7044,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727401555000829817,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zp5f2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
829b4a4-1686-4f22-8368-65e3897604b0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb008875dc5bc9086f76dae6a7603b058093c268af6fcf781aa93354d58a1164,PodSandboxId:f1895faff7bcd580df61623c23992b57646fef48d3f42d6903a7b92cec910e3a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1727401553830730903,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5l299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 768ae3f5-2ebd-4db7-aa36-81c4f033d685,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b98cc8aef9ef9d1c5f966eaf9e96f7ac4dc44aeec30da310fc896f89109af031,PodSandboxId:516a881b4869ebd8057279ab3fa16696c248c545290ab82b3a17ac04ff25b036,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727401543255398383,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-245911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd6a56e9e78ce33082942c2c1324708a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eac68ab94f64bf4f7a61126f5c8ce7bc91c26bbe5cafd0b4a840af679634ef90,PodSandboxId:c4155021ba88d90e786ef042b2b1a165c27679f97053e165a756962af193e463,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727401543262017623,Labels:map[string]string{io.kubernetes.container.n
ame: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-245911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 796dc376a570d5cfc3042ada17f81999,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a167cf0b875d40ea961bba6c611a013cee45acf40487664e093f1547e58157c8,PodSandboxId:4b00ff761c9ab853fe8d78a46d260f175c66a3d3762a0862958bd74a86c99336,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727401543253941935,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-245911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1e3ed7727ff9ba05d6eacb60c9f5ea6,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd11dd4f21927105875d289cc53f834657d3675d809f31b5575db66681be1a7b,PodSandboxId:c89eb9ddb497f61bc8fc4315545b5c1409a54a7f104f0b3533a7e449f34f4bc0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727401543226505209,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-245911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566665b3d67646253c5c4233f0432cee,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:616b6473dde776c7a9297e486ba7905ee3e75b966feb8ba98ca7279d8d74b53d,PodSandboxId:37281392c0f0c6b827059cc365c4a21d5287ae69c95a895ecf4e043d61e23dc4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727401261835523688,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-245911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 796dc376a570d5cfc3042ada17f81999,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cf9e9b75-939e-473d-9fe2-d61828eab00f name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 02:03:17 embed-certs-245911 crio[720]: time="2024-09-27 02:03:17.463923662Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d9f88601-01ca-4c99-99b1-1cea9497e78a name=/runtime.v1.RuntimeService/Version
	Sep 27 02:03:17 embed-certs-245911 crio[720]: time="2024-09-27 02:03:17.464055074Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d9f88601-01ca-4c99-99b1-1cea9497e78a name=/runtime.v1.RuntimeService/Version
	Sep 27 02:03:17 embed-certs-245911 crio[720]: time="2024-09-27 02:03:17.465473998Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d2b4cb7c-0edc-4f57-ba94-26dcfc1b28a6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 02:03:17 embed-certs-245911 crio[720]: time="2024-09-27 02:03:17.466125682Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402597466092814,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d2b4cb7c-0edc-4f57-ba94-26dcfc1b28a6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 02:03:17 embed-certs-245911 crio[720]: time="2024-09-27 02:03:17.468211091Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a4eef6f7-2e08-4b10-83df-f768f7cba437 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 02:03:17 embed-certs-245911 crio[720]: time="2024-09-27 02:03:17.468310624Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a4eef6f7-2e08-4b10-83df-f768f7cba437 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 02:03:17 embed-certs-245911 crio[720]: time="2024-09-27 02:03:17.468705914Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ef3e7f4404a3bb5acd2a338f08e2d0ed91b3e70f1c11bfe6552bce8d73f93484,PodSandboxId:c7c1cbc5465bde39a0e13976fff50b102221c9e421bc2a7d170b15ceb86d5a24,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727401555904498416,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c48d125-370c-44a1-9ede-536881b40d57,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:448ea5668bbfde4c622ff366c9a4d879ed6fd522a860d8af0b9a0b81d0684ad7,PodSandboxId:e011a1aa370b8ae5fc35367eb3d4d947d070a20bb903edda1609fa74e0eb4c3d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727401555171115945,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-t4mxw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3f9faa4-be80-40bf-9080-363fcbf3f084,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86de3893b7ac9b71b09711ecfeea13b6d675dca6289b919969de5863d2baaa81,PodSandboxId:7729e650d0540b2d4b96d124755e986a31f237f764377fdcb746d60d4e8a7044,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727401555000829817,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zp5f2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
829b4a4-1686-4f22-8368-65e3897604b0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb008875dc5bc9086f76dae6a7603b058093c268af6fcf781aa93354d58a1164,PodSandboxId:f1895faff7bcd580df61623c23992b57646fef48d3f42d6903a7b92cec910e3a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1727401553830730903,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5l299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 768ae3f5-2ebd-4db7-aa36-81c4f033d685,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b98cc8aef9ef9d1c5f966eaf9e96f7ac4dc44aeec30da310fc896f89109af031,PodSandboxId:516a881b4869ebd8057279ab3fa16696c248c545290ab82b3a17ac04ff25b036,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727401543255398383,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-245911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd6a56e9e78ce33082942c2c1324708a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eac68ab94f64bf4f7a61126f5c8ce7bc91c26bbe5cafd0b4a840af679634ef90,PodSandboxId:c4155021ba88d90e786ef042b2b1a165c27679f97053e165a756962af193e463,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727401543262017623,Labels:map[string]string{io.kubernetes.container.n
ame: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-245911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 796dc376a570d5cfc3042ada17f81999,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a167cf0b875d40ea961bba6c611a013cee45acf40487664e093f1547e58157c8,PodSandboxId:4b00ff761c9ab853fe8d78a46d260f175c66a3d3762a0862958bd74a86c99336,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727401543253941935,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-245911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1e3ed7727ff9ba05d6eacb60c9f5ea6,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd11dd4f21927105875d289cc53f834657d3675d809f31b5575db66681be1a7b,PodSandboxId:c89eb9ddb497f61bc8fc4315545b5c1409a54a7f104f0b3533a7e449f34f4bc0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727401543226505209,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-245911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566665b3d67646253c5c4233f0432cee,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:616b6473dde776c7a9297e486ba7905ee3e75b966feb8ba98ca7279d8d74b53d,PodSandboxId:37281392c0f0c6b827059cc365c4a21d5287ae69c95a895ecf4e043d61e23dc4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727401261835523688,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-245911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 796dc376a570d5cfc3042ada17f81999,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a4eef6f7-2e08-4b10-83df-f768f7cba437 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 02:03:17 embed-certs-245911 crio[720]: time="2024-09-27 02:03:17.507567657Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1e566d9b-19f7-45c0-905b-b9ceda7a5fcc name=/runtime.v1.RuntimeService/Version
	Sep 27 02:03:17 embed-certs-245911 crio[720]: time="2024-09-27 02:03:17.507640830Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1e566d9b-19f7-45c0-905b-b9ceda7a5fcc name=/runtime.v1.RuntimeService/Version
	Sep 27 02:03:17 embed-certs-245911 crio[720]: time="2024-09-27 02:03:17.509456331Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b1622e35-1fe4-4ef3-89e4-bdc840c20e01 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 02:03:17 embed-certs-245911 crio[720]: time="2024-09-27 02:03:17.509878802Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402597509854302,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b1622e35-1fe4-4ef3-89e4-bdc840c20e01 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 02:03:17 embed-certs-245911 crio[720]: time="2024-09-27 02:03:17.510398815Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fab697e0-9662-414c-b948-a49487ee203a name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 02:03:17 embed-certs-245911 crio[720]: time="2024-09-27 02:03:17.510469307Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fab697e0-9662-414c-b948-a49487ee203a name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 02:03:17 embed-certs-245911 crio[720]: time="2024-09-27 02:03:17.510688650Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ef3e7f4404a3bb5acd2a338f08e2d0ed91b3e70f1c11bfe6552bce8d73f93484,PodSandboxId:c7c1cbc5465bde39a0e13976fff50b102221c9e421bc2a7d170b15ceb86d5a24,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727401555904498416,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c48d125-370c-44a1-9ede-536881b40d57,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:448ea5668bbfde4c622ff366c9a4d879ed6fd522a860d8af0b9a0b81d0684ad7,PodSandboxId:e011a1aa370b8ae5fc35367eb3d4d947d070a20bb903edda1609fa74e0eb4c3d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727401555171115945,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-t4mxw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3f9faa4-be80-40bf-9080-363fcbf3f084,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86de3893b7ac9b71b09711ecfeea13b6d675dca6289b919969de5863d2baaa81,PodSandboxId:7729e650d0540b2d4b96d124755e986a31f237f764377fdcb746d60d4e8a7044,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727401555000829817,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zp5f2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
829b4a4-1686-4f22-8368-65e3897604b0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb008875dc5bc9086f76dae6a7603b058093c268af6fcf781aa93354d58a1164,PodSandboxId:f1895faff7bcd580df61623c23992b57646fef48d3f42d6903a7b92cec910e3a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1727401553830730903,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5l299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 768ae3f5-2ebd-4db7-aa36-81c4f033d685,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b98cc8aef9ef9d1c5f966eaf9e96f7ac4dc44aeec30da310fc896f89109af031,PodSandboxId:516a881b4869ebd8057279ab3fa16696c248c545290ab82b3a17ac04ff25b036,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727401543255398383,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-245911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd6a56e9e78ce33082942c2c1324708a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eac68ab94f64bf4f7a61126f5c8ce7bc91c26bbe5cafd0b4a840af679634ef90,PodSandboxId:c4155021ba88d90e786ef042b2b1a165c27679f97053e165a756962af193e463,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727401543262017623,Labels:map[string]string{io.kubernetes.container.n
ame: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-245911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 796dc376a570d5cfc3042ada17f81999,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a167cf0b875d40ea961bba6c611a013cee45acf40487664e093f1547e58157c8,PodSandboxId:4b00ff761c9ab853fe8d78a46d260f175c66a3d3762a0862958bd74a86c99336,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727401543253941935,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-245911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1e3ed7727ff9ba05d6eacb60c9f5ea6,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd11dd4f21927105875d289cc53f834657d3675d809f31b5575db66681be1a7b,PodSandboxId:c89eb9ddb497f61bc8fc4315545b5c1409a54a7f104f0b3533a7e449f34f4bc0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727401543226505209,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-245911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566665b3d67646253c5c4233f0432cee,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:616b6473dde776c7a9297e486ba7905ee3e75b966feb8ba98ca7279d8d74b53d,PodSandboxId:37281392c0f0c6b827059cc365c4a21d5287ae69c95a895ecf4e043d61e23dc4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727401261835523688,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-245911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 796dc376a570d5cfc3042ada17f81999,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fab697e0-9662-414c-b948-a49487ee203a name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 02:03:17 embed-certs-245911 crio[720]: time="2024-09-27 02:03:17.547509128Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c9efadbf-8277-46e4-bfd1-8f95eeba393b name=/runtime.v1.RuntimeService/Version
	Sep 27 02:03:17 embed-certs-245911 crio[720]: time="2024-09-27 02:03:17.547612973Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c9efadbf-8277-46e4-bfd1-8f95eeba393b name=/runtime.v1.RuntimeService/Version
	Sep 27 02:03:17 embed-certs-245911 crio[720]: time="2024-09-27 02:03:17.549188223Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3bd0e8d4-7e7a-49ab-9f12-58e44743b099 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 02:03:17 embed-certs-245911 crio[720]: time="2024-09-27 02:03:17.549928302Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402597549893792,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3bd0e8d4-7e7a-49ab-9f12-58e44743b099 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 02:03:17 embed-certs-245911 crio[720]: time="2024-09-27 02:03:17.550921736Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b3a42c20-e53c-47f7-890b-d21a04f18590 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 02:03:17 embed-certs-245911 crio[720]: time="2024-09-27 02:03:17.551014428Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b3a42c20-e53c-47f7-890b-d21a04f18590 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 02:03:17 embed-certs-245911 crio[720]: time="2024-09-27 02:03:17.551385141Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ef3e7f4404a3bb5acd2a338f08e2d0ed91b3e70f1c11bfe6552bce8d73f93484,PodSandboxId:c7c1cbc5465bde39a0e13976fff50b102221c9e421bc2a7d170b15ceb86d5a24,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727401555904498416,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c48d125-370c-44a1-9ede-536881b40d57,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:448ea5668bbfde4c622ff366c9a4d879ed6fd522a860d8af0b9a0b81d0684ad7,PodSandboxId:e011a1aa370b8ae5fc35367eb3d4d947d070a20bb903edda1609fa74e0eb4c3d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727401555171115945,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-t4mxw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3f9faa4-be80-40bf-9080-363fcbf3f084,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86de3893b7ac9b71b09711ecfeea13b6d675dca6289b919969de5863d2baaa81,PodSandboxId:7729e650d0540b2d4b96d124755e986a31f237f764377fdcb746d60d4e8a7044,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727401555000829817,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zp5f2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
829b4a4-1686-4f22-8368-65e3897604b0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb008875dc5bc9086f76dae6a7603b058093c268af6fcf781aa93354d58a1164,PodSandboxId:f1895faff7bcd580df61623c23992b57646fef48d3f42d6903a7b92cec910e3a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1727401553830730903,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5l299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 768ae3f5-2ebd-4db7-aa36-81c4f033d685,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b98cc8aef9ef9d1c5f966eaf9e96f7ac4dc44aeec30da310fc896f89109af031,PodSandboxId:516a881b4869ebd8057279ab3fa16696c248c545290ab82b3a17ac04ff25b036,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727401543255398383,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-245911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd6a56e9e78ce33082942c2c1324708a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eac68ab94f64bf4f7a61126f5c8ce7bc91c26bbe5cafd0b4a840af679634ef90,PodSandboxId:c4155021ba88d90e786ef042b2b1a165c27679f97053e165a756962af193e463,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727401543262017623,Labels:map[string]string{io.kubernetes.container.n
ame: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-245911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 796dc376a570d5cfc3042ada17f81999,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a167cf0b875d40ea961bba6c611a013cee45acf40487664e093f1547e58157c8,PodSandboxId:4b00ff761c9ab853fe8d78a46d260f175c66a3d3762a0862958bd74a86c99336,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727401543253941935,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-245911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1e3ed7727ff9ba05d6eacb60c9f5ea6,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd11dd4f21927105875d289cc53f834657d3675d809f31b5575db66681be1a7b,PodSandboxId:c89eb9ddb497f61bc8fc4315545b5c1409a54a7f104f0b3533a7e449f34f4bc0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727401543226505209,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-245911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566665b3d67646253c5c4233f0432cee,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:616b6473dde776c7a9297e486ba7905ee3e75b966feb8ba98ca7279d8d74b53d,PodSandboxId:37281392c0f0c6b827059cc365c4a21d5287ae69c95a895ecf4e043d61e23dc4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727401261835523688,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-245911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 796dc376a570d5cfc3042ada17f81999,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b3a42c20-e53c-47f7-890b-d21a04f18590 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ef3e7f4404a3b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   17 minutes ago      Running             storage-provisioner       0                   c7c1cbc5465bd       storage-provisioner
	448ea5668bbfd       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   17 minutes ago      Running             coredns                   0                   e011a1aa370b8       coredns-7c65d6cfc9-t4mxw
	86de3893b7ac9       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   17 minutes ago      Running             coredns                   0                   7729e650d0540       coredns-7c65d6cfc9-zp5f2
	fb008875dc5bc       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   17 minutes ago      Running             kube-proxy                0                   f1895faff7bcd       kube-proxy-5l299
	eac68ab94f64b       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   17 minutes ago      Running             kube-apiserver            2                   c4155021ba88d       kube-apiserver-embed-certs-245911
	b98cc8aef9ef9       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   17 minutes ago      Running             etcd                      2                   516a881b4869e       etcd-embed-certs-245911
	a167cf0b875d4       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   17 minutes ago      Running             kube-controller-manager   2                   4b00ff761c9ab       kube-controller-manager-embed-certs-245911
	fd11dd4f21927       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   17 minutes ago      Running             kube-scheduler            2                   c89eb9ddb497f       kube-scheduler-embed-certs-245911
	616b6473dde77       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   22 minutes ago      Exited              kube-apiserver            1                   37281392c0f0c       kube-apiserver-embed-certs-245911
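The container status table above is CRI state captured on the node. If it needs to be reproduced while debugging, the usual route (suggested commands, not part of the captured log; the profile name is taken from the node name below) is:

    $ minikube ssh -p embed-certs-245911
    $ sudo crictl ps -a                   # all containers, including exited ones
    $ sudo crictl inspect <container-id>  # full metadata/annotations for a single container

(<container-id> is a placeholder for one of the IDs in the first column.)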
	
	
	==> coredns [448ea5668bbfde4c622ff366c9a4d879ed6fd522a860d8af0b9a0b81d0684ad7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [86de3893b7ac9b71b09711ecfeea13b6d675dca6289b919969de5863d2baaa81] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-245911
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-245911
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=embed-certs-245911
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_27T01_45_49_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 01:45:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-245911
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 02:03:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 02:01:18 +0000   Fri, 27 Sep 2024 01:45:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 02:01:18 +0000   Fri, 27 Sep 2024 01:45:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 02:01:18 +0000   Fri, 27 Sep 2024 01:45:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 02:01:18 +0000   Fri, 27 Sep 2024 01:45:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.158
	  Hostname:    embed-certs-245911
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7110e728e2604f3689de21f5a2c2cd24
	  System UUID:                7110e728-e260-4f36-89de-21f5a2c2cd24
	  Boot ID:                    f8d88b27-0ecd-4578-9907-8f602caafdb0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-t4mxw                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 coredns-7c65d6cfc9-zp5f2                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 etcd-embed-certs-245911                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kube-apiserver-embed-certs-245911             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-embed-certs-245911    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-5l299                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-embed-certs-245911             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 metrics-server-6867b74b74-k28wz               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         17m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 17m   kube-proxy       
	  Normal  Starting                 17m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  17m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  17m   kubelet          Node embed-certs-245911 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m   kubelet          Node embed-certs-245911 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m   kubelet          Node embed-certs-245911 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           17m   node-controller  Node embed-certs-245911 event: Registered Node embed-certs-245911 in Controller
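The node description and events above can be cross-checked against the live API (suggested command, not part of the captured log; the kubectl context is assumed to match the minikube profile name):

    $ kubectl --context embed-certs-245911 describe node embed-certs-245911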
	
	
	==> dmesg <==
	[  +0.040104] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.779433] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.425135] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.575703] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.513015] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.062208] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065732] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +0.182922] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +0.135628] systemd-fstab-generator[681]: Ignoring "noauto" option for root device
	[  +0.297034] systemd-fstab-generator[710]: Ignoring "noauto" option for root device
	[  +4.108190] systemd-fstab-generator[801]: Ignoring "noauto" option for root device
	[  +1.836552] systemd-fstab-generator[924]: Ignoring "noauto" option for root device
	[  +0.059844] kauditd_printk_skb: 158 callbacks suppressed
	[Sep27 01:41] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.112314] kauditd_printk_skb: 50 callbacks suppressed
	[  +6.141705] kauditd_printk_skb: 30 callbacks suppressed
	[Sep27 01:45] kauditd_printk_skb: 7 callbacks suppressed
	[  +0.864571] systemd-fstab-generator[2561]: Ignoring "noauto" option for root device
	[  +4.546286] kauditd_printk_skb: 54 callbacks suppressed
	[  +2.022854] systemd-fstab-generator[2884]: Ignoring "noauto" option for root device
	[  +5.109759] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.416966] systemd-fstab-generator[3069]: Ignoring "noauto" option for root device
	[Sep27 01:46] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [b98cc8aef9ef9d1c5f966eaf9e96f7ac4dc44aeec30da310fc896f89109af031] <==
	{"level":"info","ts":"2024-09-27T01:45:44.005506Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"632f2ed81879f448","local-member-id":"c2e3bdcd19c3f485","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T01:45:44.005623Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T01:45:44.005711Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T01:55:44.590208Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":683}
	{"level":"info","ts":"2024-09-27T01:55:44.600437Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":683,"took":"9.788663ms","hash":3631255844,"current-db-size-bytes":2301952,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":2301952,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-09-27T01:55:44.600501Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3631255844,"revision":683,"compact-revision":-1}
	{"level":"info","ts":"2024-09-27T02:00:44.600048Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":926}
	{"level":"info","ts":"2024-09-27T02:00:44.604641Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":926,"took":"3.759888ms","hash":1879656114,"current-db-size-bytes":2301952,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":1576960,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-09-27T02:00:44.604736Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1879656114,"revision":926,"compact-revision":683}
	{"level":"info","ts":"2024-09-27T02:01:41.163469Z","caller":"traceutil/trace.go:171","msg":"trace[1345804640] transaction","detail":"{read_only:false; response_revision:1218; number_of_response:1; }","duration":"109.599678ms","start":"2024-09-27T02:01:41.053840Z","end":"2024-09-27T02:01:41.163439Z","steps":["trace[1345804640] 'process raft request'  (duration: 109.156786ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T02:01:41.439629Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.929902ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/certificatesigningrequests/\" range_end:\"/registry/certificatesigningrequests0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-09-27T02:01:41.439739Z","caller":"traceutil/trace.go:171","msg":"trace[1052942207] range","detail":"{range_begin:/registry/certificatesigningrequests/; range_end:/registry/certificatesigningrequests0; response_count:0; response_revision:1218; }","duration":"122.129308ms","start":"2024-09-27T02:01:41.317592Z","end":"2024-09-27T02:01:41.439721Z","steps":["trace[1052942207] 'count revisions from in-memory index tree'  (duration: 121.865805ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T02:02:18.159827Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.590802ms","expected-duration":"100ms","prefix":"","request":"header:<ID:17619649856975409679 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.39.158\" mod_revision:1239 > success:<request_put:<key:\"/registry/masterleases/192.168.39.158\" value_size:67 lease:8396277820120633868 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.158\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-09-27T02:02:18.160137Z","caller":"traceutil/trace.go:171","msg":"trace[680011429] linearizableReadLoop","detail":"{readStateIndex:1465; appliedIndex:1464; }","duration":"168.540293ms","start":"2024-09-27T02:02:17.991577Z","end":"2024-09-27T02:02:18.160118Z","steps":["trace[680011429] 'read index received'  (duration: 34.850339ms)","trace[680011429] 'applied index is now lower than readState.Index'  (duration: 133.688562ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-27T02:02:18.160728Z","caller":"traceutil/trace.go:171","msg":"trace[579584472] transaction","detail":"{read_only:false; response_revision:1248; number_of_response:1; }","duration":"263.084649ms","start":"2024-09-27T02:02:17.897613Z","end":"2024-09-27T02:02:18.160698Z","steps":["trace[579584472] 'process raft request'  (duration: 128.82451ms)","trace[579584472] 'compare'  (duration: 132.444521ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-27T02:02:18.160911Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"169.318178ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-27T02:02:18.160985Z","caller":"traceutil/trace.go:171","msg":"trace[768590302] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1248; }","duration":"169.394056ms","start":"2024-09-27T02:02:17.991573Z","end":"2024-09-27T02:02:18.160967Z","steps":["trace[768590302] 'agreement among raft nodes before linearized reading'  (duration: 169.181837ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T02:02:40.628999Z","caller":"traceutil/trace.go:171","msg":"trace[77933869] transaction","detail":"{read_only:false; response_revision:1267; number_of_response:1; }","duration":"523.917725ms","start":"2024-09-27T02:02:40.105062Z","end":"2024-09-27T02:02:40.628980Z","steps":["trace[77933869] 'process raft request'  (duration: 523.580346ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T02:02:40.629281Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-27T02:02:40.105035Z","time spent":"524.170845ms","remote":"127.0.0.1:58802","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":560,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/embed-certs-245911\" mod_revision:1259 > success:<request_put:<key:\"/registry/leases/kube-node-lease/embed-certs-245911\" value_size:501 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/embed-certs-245911\" > >"}
	{"level":"info","ts":"2024-09-27T02:02:41.669619Z","caller":"traceutil/trace.go:171","msg":"trace[659669377] linearizableReadLoop","detail":"{readStateIndex:1489; appliedIndex:1488; }","duration":"101.017748ms","start":"2024-09-27T02:02:41.568578Z","end":"2024-09-27T02:02:41.669596Z","steps":["trace[659669377] 'read index received'  (duration: 100.804488ms)","trace[659669377] 'applied index is now lower than readState.Index'  (duration: 212.668µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-27T02:02:41.669750Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.146044ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-27T02:02:41.669780Z","caller":"traceutil/trace.go:171","msg":"trace[780121794] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1268; }","duration":"101.196751ms","start":"2024-09-27T02:02:41.568575Z","end":"2024-09-27T02:02:41.669771Z","steps":["trace[780121794] 'agreement among raft nodes before linearized reading'  (duration: 101.124567ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T02:02:41.670027Z","caller":"traceutil/trace.go:171","msg":"trace[103430752] transaction","detail":"{read_only:false; response_revision:1268; number_of_response:1; }","duration":"113.762684ms","start":"2024-09-27T02:02:41.556250Z","end":"2024-09-27T02:02:41.670013Z","steps":["trace[103430752] 'process raft request'  (duration: 113.202763ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T02:02:42.011876Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"133.878925ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-27T02:02:42.011953Z","caller":"traceutil/trace.go:171","msg":"trace[1740197066] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1268; }","duration":"134.015043ms","start":"2024-09-27T02:02:41.877912Z","end":"2024-09-27T02:02:42.011927Z","steps":["trace[1740197066] 'range keys from in-memory index tree'  (duration: 133.856566ms)"],"step_count":1}
	
	
	==> kernel <==
	 02:03:17 up 22 min,  0 users,  load average: 0.52, 0.25, 0.15
	Linux embed-certs-245911 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [616b6473dde776c7a9297e486ba7905ee3e75b966feb8ba98ca7279d8d74b53d] <==
	W0927 01:45:37.967836       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:45:37.967836       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:45:38.010150       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:45:38.011671       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:45:38.012956       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:45:38.137677       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:45:38.165869       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:45:38.175641       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:45:38.218837       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:45:38.220108       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:45:38.221671       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:45:38.256106       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:45:38.302213       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:45:38.364769       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:45:40.414078       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:45:40.609656       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:45:40.658270       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:45:40.725974       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:45:40.788893       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:45:40.841199       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:45:40.983276       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:45:41.029570       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:45:41.047466       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:45:41.128567       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:45:41.164496       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [eac68ab94f64bf4f7a61126f5c8ce7bc91c26bbe5cafd0b4a840af679634ef90] <==
	I0927 01:58:47.110915       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0927 01:58:47.112015       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0927 02:00:46.107913       1 handler_proxy.go:99] no RequestInfo found in the context
	E0927 02:00:46.108264       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0927 02:00:47.109790       1 handler_proxy.go:99] no RequestInfo found in the context
	E0927 02:00:47.109934       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0927 02:00:47.109840       1 handler_proxy.go:99] no RequestInfo found in the context
	E0927 02:00:47.110071       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0927 02:00:47.111152       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0927 02:00:47.111184       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0927 02:01:47.111536       1 handler_proxy.go:99] no RequestInfo found in the context
	E0927 02:01:47.111799       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0927 02:01:47.111562       1 handler_proxy.go:99] no RequestInfo found in the context
	E0927 02:01:47.111992       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0927 02:01:47.113098       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0927 02:01:47.113173       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
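The repeated 503s above mean the aggregated metrics API (v1beta1.metrics.k8s.io) never became reachable, which is consistent with the metrics-server ImagePullBackOff shown in the kubelet log further below. A quick way to confirm the aggregation state (suggested commands, not part of the captured log; the k8s-app=metrics-server label is the addon's usual label and is assumed here):

    $ kubectl --context embed-certs-245911 get apiservice v1beta1.metrics.k8s.io
    $ kubectl --context embed-certs-245911 -n kube-system get pods -l k8s-app=metrics-server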
	
	
	==> kube-controller-manager [a167cf0b875d40ea961bba6c611a013cee45acf40487664e093f1547e58157c8] <==
	E0927 01:57:53.202562       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 01:57:53.655822       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0927 01:58:23.210564       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 01:58:23.664879       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0927 01:58:53.217224       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 01:58:53.675367       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0927 01:59:23.222998       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 01:59:23.683745       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0927 01:59:53.229527       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 01:59:53.690856       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0927 02:00:23.236403       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 02:00:23.698592       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0927 02:00:53.242972       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 02:00:53.706073       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0927 02:01:18.534188       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-245911"
	E0927 02:01:23.249872       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 02:01:23.714682       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0927 02:01:53.257786       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 02:01:53.724749       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0927 02:02:13.984959       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="311.287µs"
	E0927 02:02:23.266161       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 02:02:23.732730       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0927 02:02:25.977541       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="121.418µs"
	E0927 02:02:53.273770       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 02:02:53.742840       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [fb008875dc5bc9086f76dae6a7603b058093c268af6fcf781aa93354d58a1164] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0927 01:45:54.310256       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0927 01:45:54.325397       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.158"]
	E0927 01:45:54.325689       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0927 01:45:54.435731       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0927 01:45:54.435769       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0927 01:45:54.435792       1 server_linux.go:169] "Using iptables Proxier"
	I0927 01:45:54.439450       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0927 01:45:54.439811       1 server.go:483] "Version info" version="v1.31.1"
	I0927 01:45:54.439844       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 01:45:54.443057       1 config.go:199] "Starting service config controller"
	I0927 01:45:54.443114       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0927 01:45:54.443151       1 config.go:105] "Starting endpoint slice config controller"
	I0927 01:45:54.443155       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0927 01:45:54.444729       1 config.go:328] "Starting node config controller"
	I0927 01:45:54.444764       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0927 01:45:54.543579       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0927 01:45:54.543667       1 shared_informer.go:320] Caches are synced for service config
	I0927 01:45:54.545410       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [fd11dd4f21927105875d289cc53f834657d3675d809f31b5575db66681be1a7b] <==
	W0927 01:45:47.077962       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0927 01:45:47.078049       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 01:45:47.081310       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0927 01:45:47.081386       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 01:45:47.108256       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0927 01:45:47.108306       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 01:45:47.127942       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0927 01:45:47.128100       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 01:45:47.145761       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0927 01:45:47.146317       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 01:45:47.166809       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0927 01:45:47.166905       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0927 01:45:47.222598       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0927 01:45:47.222691       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0927 01:45:47.342037       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0927 01:45:47.342301       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 01:45:47.410640       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0927 01:45:47.411116       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 01:45:47.438059       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0927 01:45:47.438125       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 01:45:47.446396       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0927 01:45:47.446450       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 01:45:47.533232       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0927 01:45:47.533688       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0927 01:45:50.220173       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 27 02:02:09 embed-certs-245911 kubelet[2891]: E0927 02:02:09.265450    2891 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402529265040874,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 02:02:09 embed-certs-245911 kubelet[2891]: E0927 02:02:09.265475    2891 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402529265040874,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 02:02:13 embed-certs-245911 kubelet[2891]: E0927 02:02:13.962302    2891 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-k28wz" podUID="1d369542-c088-4099-aa6f-9d3158f78f25"
	Sep 27 02:02:19 embed-certs-245911 kubelet[2891]: E0927 02:02:19.268613    2891 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402539267966706,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 02:02:19 embed-certs-245911 kubelet[2891]: E0927 02:02:19.268671    2891 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402539267966706,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 02:02:25 embed-certs-245911 kubelet[2891]: E0927 02:02:25.960772    2891 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-k28wz" podUID="1d369542-c088-4099-aa6f-9d3158f78f25"
	Sep 27 02:02:29 embed-certs-245911 kubelet[2891]: E0927 02:02:29.270437    2891 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402549269816536,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 02:02:29 embed-certs-245911 kubelet[2891]: E0927 02:02:29.270838    2891 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402549269816536,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 02:02:39 embed-certs-245911 kubelet[2891]: E0927 02:02:39.272494    2891 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402559271994936,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 02:02:39 embed-certs-245911 kubelet[2891]: E0927 02:02:39.272542    2891 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402559271994936,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 02:02:39 embed-certs-245911 kubelet[2891]: E0927 02:02:39.961000    2891 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-k28wz" podUID="1d369542-c088-4099-aa6f-9d3158f78f25"
	Sep 27 02:02:48 embed-certs-245911 kubelet[2891]: E0927 02:02:48.988560    2891 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 27 02:02:48 embed-certs-245911 kubelet[2891]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 27 02:02:48 embed-certs-245911 kubelet[2891]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 27 02:02:48 embed-certs-245911 kubelet[2891]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 27 02:02:48 embed-certs-245911 kubelet[2891]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 27 02:02:49 embed-certs-245911 kubelet[2891]: E0927 02:02:49.274061    2891 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402569273671364,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 02:02:49 embed-certs-245911 kubelet[2891]: E0927 02:02:49.274099    2891 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402569273671364,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 02:02:53 embed-certs-245911 kubelet[2891]: E0927 02:02:53.961043    2891 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-k28wz" podUID="1d369542-c088-4099-aa6f-9d3158f78f25"
	Sep 27 02:02:59 embed-certs-245911 kubelet[2891]: E0927 02:02:59.277777    2891 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402579277236613,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 02:02:59 embed-certs-245911 kubelet[2891]: E0927 02:02:59.277833    2891 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402579277236613,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 02:03:04 embed-certs-245911 kubelet[2891]: E0927 02:03:04.962616    2891 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-k28wz" podUID="1d369542-c088-4099-aa6f-9d3158f78f25"
	Sep 27 02:03:09 embed-certs-245911 kubelet[2891]: E0927 02:03:09.279401    2891 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402589279059308,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 02:03:09 embed-certs-245911 kubelet[2891]: E0927 02:03:09.279782    2891 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402589279059308,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 02:03:16 embed-certs-245911 kubelet[2891]: E0927 02:03:16.961970    2891 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-k28wz" podUID="1d369542-c088-4099-aa6f-9d3158f78f25"
	
	
	==> storage-provisioner [ef3e7f4404a3bb5acd2a338f08e2d0ed91b3e70f1c11bfe6552bce8d73f93484] <==
	I0927 01:45:56.003382       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0927 01:45:56.016609       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0927 01:45:56.016684       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0927 01:45:56.091633       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0927 01:45:56.093756       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-245911_c80d10c5-20f6-40f5-bd39-048655b6a15e!
	I0927 01:45:56.103058       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"30015e7b-faab-4daf-b5dd-99a7fbb5b2f6", APIVersion:"v1", ResourceVersion:"390", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-245911_c80d10c5-20f6-40f5-bd39-048655b6a15e became leader
	I0927 01:45:56.201836       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-245911_c80d10c5-20f6-40f5-bd39-048655b6a15e!
	

-- /stdout --
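
Note on the kubelet log above: the repeated "Could not set up iptables canary ... can't initialize ip6tables table `nat'" message usually means the guest kernel has no IPv6 NAT table available (the ip6table_nat module is not loaded); it is noisy but by itself does not fail this check. A minimal sketch for confirming that from the host, assuming the embed-certs-245911 VM is still up and reachable via minikube ssh (the module name ip6table_nat is an assumption about the guest kernel build):

	# list loaded ip6tables-related modules in the guest (pipe runs on the host)
	out/minikube-linux-amd64 -p embed-certs-245911 ssh -- lsmod | grep -i ip6table
	# if nothing is listed, loading the module should silence the canary error
	out/minikube-linux-amd64 -p embed-certs-245911 ssh -- sudo modprobe ip6table_nat
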
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-245911 -n embed-certs-245911
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-245911 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-k28wz
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-245911 describe pod metrics-server-6867b74b74-k28wz
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-245911 describe pod metrics-server-6867b74b74-k28wz: exit status 1 (61.987679ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-k28wz" not found

** /stderr **
helpers_test.go:279: kubectl --context embed-certs-245911 describe pod metrics-server-6867b74b74-k28wz: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (488.74s)
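
The embed-certs post-mortem above lists metrics-server-6867b74b74-k28wz as the only non-running pod, and the kubelet log shows it in ImagePullBackOff for fake.domain/registry.k8s.io/echoserver:1.4. That pull failure is expected in this suite: the Audit log below records that the metrics-server addon was enabled with --registries=MetricsServer=fake.domain, an intentionally unreachable registry. A minimal sketch for confirming the configured image and the back-off on a live profile (the Deployment name metrics-server and the label k8s-app=metrics-server in kube-system are assumptions inferred from the pod name above):

	# show the image the addon deployment was configured with
	kubectl --context embed-certs-245911 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'
	# show the pull back-off in the pod events
	kubectl --context embed-certs-245911 -n kube-system describe pod -l k8s-app=metrics-server
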

x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (354.05s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-521072 -n no-preload-521072
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-09-27 02:01:39.188565786 +0000 UTC m=+6415.374174038
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-521072 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-521072 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.879µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-521072 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
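For context, the AddonExistsAfterStop check is essentially the two probes above: wait for pods matching the k8s-app=kubernetes-dashboard label in the kubernetes-dashboard namespace, then verify that the dashboard-metrics-scraper Deployment references registry.k8s.io/echoserver:1.4 (the image the dashboard addon was enabled with in the Audit log below). A hedged manual reproduction against the same profile, assuming the cluster is still up; the describe call is the same one the test timed out on:

	# the pods the test waited 9m0s for
	kubectl --context no-preload-521072 -n kubernetes-dashboard get pods \
	  -l k8s-app=kubernetes-dashboard
	# the deployment whose image the test inspects
	kubectl --context no-preload-521072 -n kubernetes-dashboard describe \
	  deploy/dashboard-metrics-scraper
	# or just the container image(s) directly
	kubectl --context no-preload-521072 -n kubernetes-dashboard get deploy \
	  dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'
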
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-521072 -n no-preload-521072
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-521072 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-521072 logs -n 25: (3.72204172s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p NoKubernetes-719096                                 | NoKubernetes-719096          | jenkins | v1.34.0 | 27 Sep 24 01:32 UTC | 27 Sep 24 01:33 UTC |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| ssh     | -p NoKubernetes-719096 sudo                            | NoKubernetes-719096          | jenkins | v1.34.0 | 27 Sep 24 01:33 UTC |                     |
	|         | systemctl is-active --quiet                            |                              |         |         |                     |                     |
	|         | service kubelet                                        |                              |         |         |                     |                     |
	| delete  | -p NoKubernetes-719096                                 | NoKubernetes-719096          | jenkins | v1.34.0 | 27 Sep 24 01:33 UTC | 27 Sep 24 01:33 UTC |
	| start   | -p embed-certs-245911                                  | embed-certs-245911           | jenkins | v1.34.0 | 27 Sep 24 01:33 UTC | 27 Sep 24 01:34 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-521072             | no-preload-521072            | jenkins | v1.34.0 | 27 Sep 24 01:33 UTC | 27 Sep 24 01:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-521072                                   | no-preload-521072            | jenkins | v1.34.0 | 27 Sep 24 01:33 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-595331                              | cert-expiration-595331       | jenkins | v1.34.0 | 27 Sep 24 01:33 UTC | 27 Sep 24 01:33 UTC |
	| delete  | -p                                                     | disable-driver-mounts-630210 | jenkins | v1.34.0 | 27 Sep 24 01:33 UTC | 27 Sep 24 01:33 UTC |
	|         | disable-driver-mounts-630210                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-368295 | jenkins | v1.34.0 | 27 Sep 24 01:33 UTC | 27 Sep 24 01:35 UTC |
	|         | default-k8s-diff-port-368295                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-245911            | embed-certs-245911           | jenkins | v1.34.0 | 27 Sep 24 01:34 UTC | 27 Sep 24 01:34 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-245911                                  | embed-certs-245911           | jenkins | v1.34.0 | 27 Sep 24 01:34 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-368295  | default-k8s-diff-port-368295 | jenkins | v1.34.0 | 27 Sep 24 01:35 UTC | 27 Sep 24 01:35 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-368295 | jenkins | v1.34.0 | 27 Sep 24 01:35 UTC |                     |
	|         | default-k8s-diff-port-368295                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-521072                  | no-preload-521072            | jenkins | v1.34.0 | 27 Sep 24 01:35 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-612261        | old-k8s-version-612261       | jenkins | v1.34.0 | 27 Sep 24 01:35 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p no-preload-521072                                   | no-preload-521072            | jenkins | v1.34.0 | 27 Sep 24 01:35 UTC | 27 Sep 24 01:46 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-245911                 | embed-certs-245911           | jenkins | v1.34.0 | 27 Sep 24 01:37 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-612261                              | old-k8s-version-612261       | jenkins | v1.34.0 | 27 Sep 24 01:37 UTC | 27 Sep 24 01:37 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-245911                                  | embed-certs-245911           | jenkins | v1.34.0 | 27 Sep 24 01:37 UTC | 27 Sep 24 01:46 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-612261             | old-k8s-version-612261       | jenkins | v1.34.0 | 27 Sep 24 01:37 UTC | 27 Sep 24 01:37 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-612261                              | old-k8s-version-612261       | jenkins | v1.34.0 | 27 Sep 24 01:37 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-368295       | default-k8s-diff-port-368295 | jenkins | v1.34.0 | 27 Sep 24 01:37 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-368295 | jenkins | v1.34.0 | 27 Sep 24 01:37 UTC | 27 Sep 24 01:46 UTC |
	|         | default-k8s-diff-port-368295                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-612261                              | old-k8s-version-612261       | jenkins | v1.34.0 | 27 Sep 24 02:01 UTC | 27 Sep 24 02:01 UTC |
	| start   | -p newest-cni-223910 --memory=2200 --alsologtostderr   | newest-cni-223910            | jenkins | v1.34.0 | 27 Sep 24 02:01 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 02:01:08
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 02:01:08.189774   75731 out.go:345] Setting OutFile to fd 1 ...
	I0927 02:01:08.189893   75731 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 02:01:08.189903   75731 out.go:358] Setting ErrFile to fd 2...
	I0927 02:01:08.189907   75731 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 02:01:08.190082   75731 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-14935/.minikube/bin
	I0927 02:01:08.190680   75731 out.go:352] Setting JSON to false
	I0927 02:01:08.191639   75731 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9813,"bootTime":1727392655,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 02:01:08.191727   75731 start.go:139] virtualization: kvm guest
	I0927 02:01:08.194215   75731 out.go:177] * [newest-cni-223910] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0927 02:01:08.195849   75731 notify.go:220] Checking for updates...
	I0927 02:01:08.195888   75731 out.go:177]   - MINIKUBE_LOCATION=19711
	I0927 02:01:08.197464   75731 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 02:01:08.198767   75731 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 02:01:08.199969   75731 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-14935/.minikube
	I0927 02:01:08.201237   75731 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0927 02:01:08.202343   75731 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 02:01:08.204048   75731 config.go:182] Loaded profile config "default-k8s-diff-port-368295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 02:01:08.204174   75731 config.go:182] Loaded profile config "embed-certs-245911": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 02:01:08.204259   75731 config.go:182] Loaded profile config "no-preload-521072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 02:01:08.204363   75731 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 02:01:08.242253   75731 out.go:177] * Using the kvm2 driver based on user configuration
	I0927 02:01:08.243555   75731 start.go:297] selected driver: kvm2
	I0927 02:01:08.243569   75731 start.go:901] validating driver "kvm2" against <nil>
	I0927 02:01:08.243580   75731 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 02:01:08.244285   75731 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 02:01:08.244365   75731 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19711-14935/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0927 02:01:08.260611   75731 install.go:137] /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I0927 02:01:08.260671   75731 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0927 02:01:08.260742   75731 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0927 02:01:08.261015   75731 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0927 02:01:08.261060   75731 cni.go:84] Creating CNI manager for ""
	I0927 02:01:08.261124   75731 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 02:01:08.261137   75731 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0927 02:01:08.261206   75731 start.go:340] cluster config:
	{Name:newest-cni-223910 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-223910 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 02:01:08.261306   75731 iso.go:125] acquiring lock: {Name:mkc202a14fbe20838e31e7efc444c4f65351f9ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 02:01:08.263459   75731 out.go:177] * Starting "newest-cni-223910" primary control-plane node in "newest-cni-223910" cluster
	I0927 02:01:08.264724   75731 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 02:01:08.264766   75731 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0927 02:01:08.264780   75731 cache.go:56] Caching tarball of preloaded images
	I0927 02:01:08.264863   75731 preload.go:172] Found /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0927 02:01:08.264877   75731 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0927 02:01:08.264982   75731 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/newest-cni-223910/config.json ...
	I0927 02:01:08.265006   75731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/newest-cni-223910/config.json: {Name:mk6a84757182501441e5972119380ea36e09728a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 02:01:08.265187   75731 start.go:360] acquireMachinesLock for newest-cni-223910: {Name:mkef866a3f2eb57063758ef06140cca3a2e40b18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 02:01:08.265219   75731 start.go:364] duration metric: took 18.312µs to acquireMachinesLock for "newest-cni-223910"
	I0927 02:01:08.265242   75731 start.go:93] Provisioning new machine with config: &{Name:newest-cni-223910 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.1 ClusterName:newest-cni-223910 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 02:01:08.265325   75731 start.go:125] createHost starting for "" (driver="kvm2")
	I0927 02:01:08.266993   75731 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0927 02:01:08.267130   75731 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 02:01:08.267177   75731 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 02:01:08.282579   75731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32991
	I0927 02:01:08.283037   75731 main.go:141] libmachine: () Calling .GetVersion
	I0927 02:01:08.283560   75731 main.go:141] libmachine: Using API Version  1
	I0927 02:01:08.283579   75731 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 02:01:08.283976   75731 main.go:141] libmachine: () Calling .GetMachineName
	I0927 02:01:08.284201   75731 main.go:141] libmachine: (newest-cni-223910) Calling .GetMachineName
	I0927 02:01:08.284377   75731 main.go:141] libmachine: (newest-cni-223910) Calling .DriverName
	I0927 02:01:08.284560   75731 start.go:159] libmachine.API.Create for "newest-cni-223910" (driver="kvm2")
	I0927 02:01:08.284589   75731 client.go:168] LocalClient.Create starting
	I0927 02:01:08.284616   75731 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem
	I0927 02:01:08.284645   75731 main.go:141] libmachine: Decoding PEM data...
	I0927 02:01:08.284658   75731 main.go:141] libmachine: Parsing certificate...
	I0927 02:01:08.284705   75731 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem
	I0927 02:01:08.284723   75731 main.go:141] libmachine: Decoding PEM data...
	I0927 02:01:08.284736   75731 main.go:141] libmachine: Parsing certificate...
	I0927 02:01:08.284751   75731 main.go:141] libmachine: Running pre-create checks...
	I0927 02:01:08.284759   75731 main.go:141] libmachine: (newest-cni-223910) Calling .PreCreateCheck
	I0927 02:01:08.285127   75731 main.go:141] libmachine: (newest-cni-223910) Calling .GetConfigRaw
	I0927 02:01:08.285583   75731 main.go:141] libmachine: Creating machine...
	I0927 02:01:08.285600   75731 main.go:141] libmachine: (newest-cni-223910) Calling .Create
	I0927 02:01:08.285736   75731 main.go:141] libmachine: (newest-cni-223910) Creating KVM machine...
	I0927 02:01:08.286832   75731 main.go:141] libmachine: (newest-cni-223910) DBG | found existing default KVM network
	I0927 02:01:08.288115   75731 main.go:141] libmachine: (newest-cni-223910) DBG | I0927 02:01:08.287949   75754 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:66:3a:58} reservation:<nil>}
	I0927 02:01:08.288911   75731 main.go:141] libmachine: (newest-cni-223910) DBG | I0927 02:01:08.288843   75754 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:04:9a:9d} reservation:<nil>}
	I0927 02:01:08.289699   75731 main.go:141] libmachine: (newest-cni-223910) DBG | I0927 02:01:08.289634   75754 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:b0:e1:14} reservation:<nil>}
	I0927 02:01:08.290697   75731 main.go:141] libmachine: (newest-cni-223910) DBG | I0927 02:01:08.290639   75754 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00028b8a0}
	I0927 02:01:08.290744   75731 main.go:141] libmachine: (newest-cni-223910) DBG | created network xml: 
	I0927 02:01:08.290763   75731 main.go:141] libmachine: (newest-cni-223910) DBG | <network>
	I0927 02:01:08.290771   75731 main.go:141] libmachine: (newest-cni-223910) DBG |   <name>mk-newest-cni-223910</name>
	I0927 02:01:08.290778   75731 main.go:141] libmachine: (newest-cni-223910) DBG |   <dns enable='no'/>
	I0927 02:01:08.290784   75731 main.go:141] libmachine: (newest-cni-223910) DBG |   
	I0927 02:01:08.290806   75731 main.go:141] libmachine: (newest-cni-223910) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0927 02:01:08.290817   75731 main.go:141] libmachine: (newest-cni-223910) DBG |     <dhcp>
	I0927 02:01:08.290822   75731 main.go:141] libmachine: (newest-cni-223910) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0927 02:01:08.290831   75731 main.go:141] libmachine: (newest-cni-223910) DBG |     </dhcp>
	I0927 02:01:08.290835   75731 main.go:141] libmachine: (newest-cni-223910) DBG |   </ip>
	I0927 02:01:08.290839   75731 main.go:141] libmachine: (newest-cni-223910) DBG |   
	I0927 02:01:08.290847   75731 main.go:141] libmachine: (newest-cni-223910) DBG | </network>
	I0927 02:01:08.290853   75731 main.go:141] libmachine: (newest-cni-223910) DBG | 
	I0927 02:01:08.296375   75731 main.go:141] libmachine: (newest-cni-223910) DBG | trying to create private KVM network mk-newest-cni-223910 192.168.72.0/24...
	I0927 02:01:08.363747   75731 main.go:141] libmachine: (newest-cni-223910) DBG | private KVM network mk-newest-cni-223910 192.168.72.0/24 created
	I0927 02:01:08.363833   75731 main.go:141] libmachine: (newest-cni-223910) DBG | I0927 02:01:08.363726   75754 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19711-14935/.minikube
	I0927 02:01:08.364024   75731 main.go:141] libmachine: (newest-cni-223910) Setting up store path in /home/jenkins/minikube-integration/19711-14935/.minikube/machines/newest-cni-223910 ...
	I0927 02:01:08.364043   75731 main.go:141] libmachine: (newest-cni-223910) Building disk image from file:///home/jenkins/minikube-integration/19711-14935/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0927 02:01:08.364053   75731 main.go:141] libmachine: (newest-cni-223910) Downloading /home/jenkins/minikube-integration/19711-14935/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19711-14935/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0927 02:01:08.612037   75731 main.go:141] libmachine: (newest-cni-223910) DBG | I0927 02:01:08.611916   75754 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/newest-cni-223910/id_rsa...
	I0927 02:01:08.896325   75731 main.go:141] libmachine: (newest-cni-223910) DBG | I0927 02:01:08.896212   75754 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/newest-cni-223910/newest-cni-223910.rawdisk...
	I0927 02:01:08.896352   75731 main.go:141] libmachine: (newest-cni-223910) DBG | Writing magic tar header
	I0927 02:01:08.896368   75731 main.go:141] libmachine: (newest-cni-223910) DBG | Writing SSH key tar header
	I0927 02:01:08.896470   75731 main.go:141] libmachine: (newest-cni-223910) DBG | I0927 02:01:08.896413   75754 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19711-14935/.minikube/machines/newest-cni-223910 ...
	I0927 02:01:08.896536   75731 main.go:141] libmachine: (newest-cni-223910) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/newest-cni-223910
	I0927 02:01:08.896600   75731 main.go:141] libmachine: (newest-cni-223910) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935/.minikube/machines/newest-cni-223910 (perms=drwx------)
	I0927 02:01:08.896624   75731 main.go:141] libmachine: (newest-cni-223910) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935/.minikube/machines (perms=drwxr-xr-x)
	I0927 02:01:08.896646   75731 main.go:141] libmachine: (newest-cni-223910) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935/.minikube/machines
	I0927 02:01:08.896661   75731 main.go:141] libmachine: (newest-cni-223910) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935/.minikube (perms=drwxr-xr-x)
	I0927 02:01:08.896675   75731 main.go:141] libmachine: (newest-cni-223910) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935 (perms=drwxrwxr-x)
	I0927 02:01:08.896687   75731 main.go:141] libmachine: (newest-cni-223910) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0927 02:01:08.896698   75731 main.go:141] libmachine: (newest-cni-223910) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0927 02:01:08.896720   75731 main.go:141] libmachine: (newest-cni-223910) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935/.minikube
	I0927 02:01:08.896733   75731 main.go:141] libmachine: (newest-cni-223910) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935
	I0927 02:01:08.896742   75731 main.go:141] libmachine: (newest-cni-223910) Creating domain...
	I0927 02:01:08.896758   75731 main.go:141] libmachine: (newest-cni-223910) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0927 02:01:08.896768   75731 main.go:141] libmachine: (newest-cni-223910) DBG | Checking permissions on dir: /home/jenkins
	I0927 02:01:08.896785   75731 main.go:141] libmachine: (newest-cni-223910) DBG | Checking permissions on dir: /home
	I0927 02:01:08.896795   75731 main.go:141] libmachine: (newest-cni-223910) DBG | Skipping /home - not owner
	I0927 02:01:08.897890   75731 main.go:141] libmachine: (newest-cni-223910) define libvirt domain using xml: 
	I0927 02:01:08.897909   75731 main.go:141] libmachine: (newest-cni-223910) <domain type='kvm'>
	I0927 02:01:08.897916   75731 main.go:141] libmachine: (newest-cni-223910)   <name>newest-cni-223910</name>
	I0927 02:01:08.897922   75731 main.go:141] libmachine: (newest-cni-223910)   <memory unit='MiB'>2200</memory>
	I0927 02:01:08.897927   75731 main.go:141] libmachine: (newest-cni-223910)   <vcpu>2</vcpu>
	I0927 02:01:08.897932   75731 main.go:141] libmachine: (newest-cni-223910)   <features>
	I0927 02:01:08.897944   75731 main.go:141] libmachine: (newest-cni-223910)     <acpi/>
	I0927 02:01:08.897951   75731 main.go:141] libmachine: (newest-cni-223910)     <apic/>
	I0927 02:01:08.897956   75731 main.go:141] libmachine: (newest-cni-223910)     <pae/>
	I0927 02:01:08.897961   75731 main.go:141] libmachine: (newest-cni-223910)     
	I0927 02:01:08.897966   75731 main.go:141] libmachine: (newest-cni-223910)   </features>
	I0927 02:01:08.897973   75731 main.go:141] libmachine: (newest-cni-223910)   <cpu mode='host-passthrough'>
	I0927 02:01:08.897978   75731 main.go:141] libmachine: (newest-cni-223910)   
	I0927 02:01:08.897987   75731 main.go:141] libmachine: (newest-cni-223910)   </cpu>
	I0927 02:01:08.897992   75731 main.go:141] libmachine: (newest-cni-223910)   <os>
	I0927 02:01:08.898002   75731 main.go:141] libmachine: (newest-cni-223910)     <type>hvm</type>
	I0927 02:01:08.898007   75731 main.go:141] libmachine: (newest-cni-223910)     <boot dev='cdrom'/>
	I0927 02:01:08.898017   75731 main.go:141] libmachine: (newest-cni-223910)     <boot dev='hd'/>
	I0927 02:01:08.898023   75731 main.go:141] libmachine: (newest-cni-223910)     <bootmenu enable='no'/>
	I0927 02:01:08.898027   75731 main.go:141] libmachine: (newest-cni-223910)   </os>
	I0927 02:01:08.898032   75731 main.go:141] libmachine: (newest-cni-223910)   <devices>
	I0927 02:01:08.898039   75731 main.go:141] libmachine: (newest-cni-223910)     <disk type='file' device='cdrom'>
	I0927 02:01:08.898052   75731 main.go:141] libmachine: (newest-cni-223910)       <source file='/home/jenkins/minikube-integration/19711-14935/.minikube/machines/newest-cni-223910/boot2docker.iso'/>
	I0927 02:01:08.898070   75731 main.go:141] libmachine: (newest-cni-223910)       <target dev='hdc' bus='scsi'/>
	I0927 02:01:08.898075   75731 main.go:141] libmachine: (newest-cni-223910)       <readonly/>
	I0927 02:01:08.898079   75731 main.go:141] libmachine: (newest-cni-223910)     </disk>
	I0927 02:01:08.898085   75731 main.go:141] libmachine: (newest-cni-223910)     <disk type='file' device='disk'>
	I0927 02:01:08.898093   75731 main.go:141] libmachine: (newest-cni-223910)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0927 02:01:08.898187   75731 main.go:141] libmachine: (newest-cni-223910)       <source file='/home/jenkins/minikube-integration/19711-14935/.minikube/machines/newest-cni-223910/newest-cni-223910.rawdisk'/>
	I0927 02:01:08.898248   75731 main.go:141] libmachine: (newest-cni-223910)       <target dev='hda' bus='virtio'/>
	I0927 02:01:08.898279   75731 main.go:141] libmachine: (newest-cni-223910)     </disk>
	I0927 02:01:08.898301   75731 main.go:141] libmachine: (newest-cni-223910)     <interface type='network'>
	I0927 02:01:08.898312   75731 main.go:141] libmachine: (newest-cni-223910)       <source network='mk-newest-cni-223910'/>
	I0927 02:01:08.898321   75731 main.go:141] libmachine: (newest-cni-223910)       <model type='virtio'/>
	I0927 02:01:08.898333   75731 main.go:141] libmachine: (newest-cni-223910)     </interface>
	I0927 02:01:08.898343   75731 main.go:141] libmachine: (newest-cni-223910)     <interface type='network'>
	I0927 02:01:08.898353   75731 main.go:141] libmachine: (newest-cni-223910)       <source network='default'/>
	I0927 02:01:08.898363   75731 main.go:141] libmachine: (newest-cni-223910)       <model type='virtio'/>
	I0927 02:01:08.898377   75731 main.go:141] libmachine: (newest-cni-223910)     </interface>
	I0927 02:01:08.898386   75731 main.go:141] libmachine: (newest-cni-223910)     <serial type='pty'>
	I0927 02:01:08.898398   75731 main.go:141] libmachine: (newest-cni-223910)       <target port='0'/>
	I0927 02:01:08.898404   75731 main.go:141] libmachine: (newest-cni-223910)     </serial>
	I0927 02:01:08.898414   75731 main.go:141] libmachine: (newest-cni-223910)     <console type='pty'>
	I0927 02:01:08.898421   75731 main.go:141] libmachine: (newest-cni-223910)       <target type='serial' port='0'/>
	I0927 02:01:08.898427   75731 main.go:141] libmachine: (newest-cni-223910)     </console>
	I0927 02:01:08.898431   75731 main.go:141] libmachine: (newest-cni-223910)     <rng model='virtio'>
	I0927 02:01:08.898437   75731 main.go:141] libmachine: (newest-cni-223910)       <backend model='random'>/dev/random</backend>
	I0927 02:01:08.898443   75731 main.go:141] libmachine: (newest-cni-223910)     </rng>
	I0927 02:01:08.898448   75731 main.go:141] libmachine: (newest-cni-223910)     
	I0927 02:01:08.898454   75731 main.go:141] libmachine: (newest-cni-223910)     
	I0927 02:01:08.898459   75731 main.go:141] libmachine: (newest-cni-223910)   </devices>
	I0927 02:01:08.898468   75731 main.go:141] libmachine: (newest-cni-223910) </domain>
	I0927 02:01:08.898474   75731 main.go:141] libmachine: (newest-cni-223910) 
	I0927 02:01:08.902741   75731 main.go:141] libmachine: (newest-cni-223910) DBG | domain newest-cni-223910 has defined MAC address 52:54:00:1e:d1:0a in network default
	I0927 02:01:08.903368   75731 main.go:141] libmachine: (newest-cni-223910) DBG | domain newest-cni-223910 has defined MAC address 52:54:00:32:8e:20 in network mk-newest-cni-223910
	I0927 02:01:08.903390   75731 main.go:141] libmachine: (newest-cni-223910) Ensuring networks are active...
	I0927 02:01:08.904006   75731 main.go:141] libmachine: (newest-cni-223910) Ensuring network default is active
	I0927 02:01:08.904338   75731 main.go:141] libmachine: (newest-cni-223910) Ensuring network mk-newest-cni-223910 is active
	I0927 02:01:08.904868   75731 main.go:141] libmachine: (newest-cni-223910) Getting domain xml...
	I0927 02:01:08.905608   75731 main.go:141] libmachine: (newest-cni-223910) Creating domain...
	I0927 02:01:10.175375   75731 main.go:141] libmachine: (newest-cni-223910) Waiting to get IP...
	I0927 02:01:10.176233   75731 main.go:141] libmachine: (newest-cni-223910) DBG | domain newest-cni-223910 has defined MAC address 52:54:00:32:8e:20 in network mk-newest-cni-223910
	I0927 02:01:10.176668   75731 main.go:141] libmachine: (newest-cni-223910) DBG | unable to find current IP address of domain newest-cni-223910 in network mk-newest-cni-223910
	I0927 02:01:10.176704   75731 main.go:141] libmachine: (newest-cni-223910) DBG | I0927 02:01:10.176651   75754 retry.go:31] will retry after 228.431471ms: waiting for machine to come up
	I0927 02:01:10.407261   75731 main.go:141] libmachine: (newest-cni-223910) DBG | domain newest-cni-223910 has defined MAC address 52:54:00:32:8e:20 in network mk-newest-cni-223910
	I0927 02:01:10.407835   75731 main.go:141] libmachine: (newest-cni-223910) DBG | unable to find current IP address of domain newest-cni-223910 in network mk-newest-cni-223910
	I0927 02:01:10.407863   75731 main.go:141] libmachine: (newest-cni-223910) DBG | I0927 02:01:10.407783   75754 retry.go:31] will retry after 384.808084ms: waiting for machine to come up
	I0927 02:01:10.794241   75731 main.go:141] libmachine: (newest-cni-223910) DBG | domain newest-cni-223910 has defined MAC address 52:54:00:32:8e:20 in network mk-newest-cni-223910
	I0927 02:01:10.794603   75731 main.go:141] libmachine: (newest-cni-223910) DBG | unable to find current IP address of domain newest-cni-223910 in network mk-newest-cni-223910
	I0927 02:01:10.794628   75731 main.go:141] libmachine: (newest-cni-223910) DBG | I0927 02:01:10.794556   75754 retry.go:31] will retry after 339.769635ms: waiting for machine to come up
	I0927 02:01:11.136000   75731 main.go:141] libmachine: (newest-cni-223910) DBG | domain newest-cni-223910 has defined MAC address 52:54:00:32:8e:20 in network mk-newest-cni-223910
	I0927 02:01:11.136534   75731 main.go:141] libmachine: (newest-cni-223910) DBG | unable to find current IP address of domain newest-cni-223910 in network mk-newest-cni-223910
	I0927 02:01:11.136564   75731 main.go:141] libmachine: (newest-cni-223910) DBG | I0927 02:01:11.136482   75754 retry.go:31] will retry after 374.894947ms: waiting for machine to come up
	I0927 02:01:11.513080   75731 main.go:141] libmachine: (newest-cni-223910) DBG | domain newest-cni-223910 has defined MAC address 52:54:00:32:8e:20 in network mk-newest-cni-223910
	I0927 02:01:11.513526   75731 main.go:141] libmachine: (newest-cni-223910) DBG | unable to find current IP address of domain newest-cni-223910 in network mk-newest-cni-223910
	I0927 02:01:11.513545   75731 main.go:141] libmachine: (newest-cni-223910) DBG | I0927 02:01:11.513487   75754 retry.go:31] will retry after 492.326003ms: waiting for machine to come up
	I0927 02:01:12.007009   75731 main.go:141] libmachine: (newest-cni-223910) DBG | domain newest-cni-223910 has defined MAC address 52:54:00:32:8e:20 in network mk-newest-cni-223910
	I0927 02:01:12.007501   75731 main.go:141] libmachine: (newest-cni-223910) DBG | unable to find current IP address of domain newest-cni-223910 in network mk-newest-cni-223910
	I0927 02:01:12.007536   75731 main.go:141] libmachine: (newest-cni-223910) DBG | I0927 02:01:12.007461   75754 retry.go:31] will retry after 643.587149ms: waiting for machine to come up
	I0927 02:01:12.652341   75731 main.go:141] libmachine: (newest-cni-223910) DBG | domain newest-cni-223910 has defined MAC address 52:54:00:32:8e:20 in network mk-newest-cni-223910
	I0927 02:01:12.652790   75731 main.go:141] libmachine: (newest-cni-223910) DBG | unable to find current IP address of domain newest-cni-223910 in network mk-newest-cni-223910
	I0927 02:01:12.652819   75731 main.go:141] libmachine: (newest-cni-223910) DBG | I0927 02:01:12.652743   75754 retry.go:31] will retry after 1.039394029s: waiting for machine to come up
	I0927 02:01:13.693948   75731 main.go:141] libmachine: (newest-cni-223910) DBG | domain newest-cni-223910 has defined MAC address 52:54:00:32:8e:20 in network mk-newest-cni-223910
	I0927 02:01:13.694337   75731 main.go:141] libmachine: (newest-cni-223910) DBG | unable to find current IP address of domain newest-cni-223910 in network mk-newest-cni-223910
	I0927 02:01:13.694372   75731 main.go:141] libmachine: (newest-cni-223910) DBG | I0927 02:01:13.694305   75754 retry.go:31] will retry after 925.077615ms: waiting for machine to come up
	I0927 02:01:14.621172   75731 main.go:141] libmachine: (newest-cni-223910) DBG | domain newest-cni-223910 has defined MAC address 52:54:00:32:8e:20 in network mk-newest-cni-223910
	I0927 02:01:14.621635   75731 main.go:141] libmachine: (newest-cni-223910) DBG | unable to find current IP address of domain newest-cni-223910 in network mk-newest-cni-223910
	I0927 02:01:14.621659   75731 main.go:141] libmachine: (newest-cni-223910) DBG | I0927 02:01:14.621591   75754 retry.go:31] will retry after 1.685822936s: waiting for machine to come up
	I0927 02:01:16.308510   75731 main.go:141] libmachine: (newest-cni-223910) DBG | domain newest-cni-223910 has defined MAC address 52:54:00:32:8e:20 in network mk-newest-cni-223910
	I0927 02:01:16.309050   75731 main.go:141] libmachine: (newest-cni-223910) DBG | unable to find current IP address of domain newest-cni-223910 in network mk-newest-cni-223910
	I0927 02:01:16.309074   75731 main.go:141] libmachine: (newest-cni-223910) DBG | I0927 02:01:16.309010   75754 retry.go:31] will retry after 1.898275667s: waiting for machine to come up
	I0927 02:01:18.208549   75731 main.go:141] libmachine: (newest-cni-223910) DBG | domain newest-cni-223910 has defined MAC address 52:54:00:32:8e:20 in network mk-newest-cni-223910
	I0927 02:01:18.209012   75731 main.go:141] libmachine: (newest-cni-223910) DBG | unable to find current IP address of domain newest-cni-223910 in network mk-newest-cni-223910
	I0927 02:01:18.209042   75731 main.go:141] libmachine: (newest-cni-223910) DBG | I0927 02:01:18.208978   75754 retry.go:31] will retry after 2.148167624s: waiting for machine to come up
	I0927 02:01:20.358343   75731 main.go:141] libmachine: (newest-cni-223910) DBG | domain newest-cni-223910 has defined MAC address 52:54:00:32:8e:20 in network mk-newest-cni-223910
	I0927 02:01:20.358796   75731 main.go:141] libmachine: (newest-cni-223910) DBG | unable to find current IP address of domain newest-cni-223910 in network mk-newest-cni-223910
	I0927 02:01:20.358821   75731 main.go:141] libmachine: (newest-cni-223910) DBG | I0927 02:01:20.358752   75754 retry.go:31] will retry after 2.836954346s: waiting for machine to come up
	I0927 02:01:23.197614   75731 main.go:141] libmachine: (newest-cni-223910) DBG | domain newest-cni-223910 has defined MAC address 52:54:00:32:8e:20 in network mk-newest-cni-223910
	I0927 02:01:23.198052   75731 main.go:141] libmachine: (newest-cni-223910) DBG | unable to find current IP address of domain newest-cni-223910 in network mk-newest-cni-223910
	I0927 02:01:23.198073   75731 main.go:141] libmachine: (newest-cni-223910) DBG | I0927 02:01:23.198012   75754 retry.go:31] will retry after 4.152765824s: waiting for machine to come up
	I0927 02:01:27.355712   75731 main.go:141] libmachine: (newest-cni-223910) DBG | domain newest-cni-223910 has defined MAC address 52:54:00:32:8e:20 in network mk-newest-cni-223910
	I0927 02:01:27.356165   75731 main.go:141] libmachine: (newest-cni-223910) DBG | unable to find current IP address of domain newest-cni-223910 in network mk-newest-cni-223910
	I0927 02:01:27.356196   75731 main.go:141] libmachine: (newest-cni-223910) DBG | I0927 02:01:27.356090   75754 retry.go:31] will retry after 4.659504645s: waiting for machine to come up
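	The repeated "will retry after …" lines above are a grow-the-delay polling loop: minikube keeps asking libvirt for the domain's DHCP lease until an address shows up. A minimal Go sketch of that pattern (the delays and jitter factor here are illustrative, not minikube's actual retry.go values):

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry keeps calling fn until it succeeds or maxAttempts is reached,
// sleeping a little longer (with jitter) after each failure.
func retry(maxAttempts int, base time.Duration, fn func() error) error {
	delay := base
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if err := fn(); err == nil {
			return nil
		}
		// add up to 50% jitter so concurrent waiters don't sync up
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)+1))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2 // grow the delay between attempts
	}
	return errors.New("machine did not come up in time")
}

func main() {
	attempts := 0
	_ = retry(10, 300*time.Millisecond, func() error {
		attempts++
		if attempts < 4 {
			return errors.New("no IP yet") // simulate the lease not being ready
		}
		return nil
	})
}
```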
	I0927 02:01:32.019418   75731 main.go:141] libmachine: (newest-cni-223910) DBG | domain newest-cni-223910 has defined MAC address 52:54:00:32:8e:20 in network mk-newest-cni-223910
	I0927 02:01:32.019894   75731 main.go:141] libmachine: (newest-cni-223910) Found IP for machine: 192.168.72.172
	I0927 02:01:32.019934   75731 main.go:141] libmachine: (newest-cni-223910) Reserving static IP address...
	I0927 02:01:32.019954   75731 main.go:141] libmachine: (newest-cni-223910) DBG | domain newest-cni-223910 has current primary IP address 192.168.72.172 and MAC address 52:54:00:32:8e:20 in network mk-newest-cni-223910
	I0927 02:01:32.020309   75731 main.go:141] libmachine: (newest-cni-223910) DBG | unable to find host DHCP lease matching {name: "newest-cni-223910", mac: "52:54:00:32:8e:20", ip: "192.168.72.172"} in network mk-newest-cni-223910
	I0927 02:01:32.097619   75731 main.go:141] libmachine: (newest-cni-223910) DBG | Getting to WaitForSSH function...
	I0927 02:01:32.097649   75731 main.go:141] libmachine: (newest-cni-223910) Reserved static IP address: 192.168.72.172
	I0927 02:01:32.097661   75731 main.go:141] libmachine: (newest-cni-223910) Waiting for SSH to be available...
	I0927 02:01:32.100116   75731 main.go:141] libmachine: (newest-cni-223910) DBG | domain newest-cni-223910 has defined MAC address 52:54:00:32:8e:20 in network mk-newest-cni-223910
	I0927 02:01:32.100496   75731 main.go:141] libmachine: (newest-cni-223910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:8e:20", ip: ""} in network mk-newest-cni-223910: {Iface:virbr1 ExpiryTime:2024-09-27 03:01:23 +0000 UTC Type:0 Mac:52:54:00:32:8e:20 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:minikube Clientid:01:52:54:00:32:8e:20}
	I0927 02:01:32.100525   75731 main.go:141] libmachine: (newest-cni-223910) DBG | domain newest-cni-223910 has defined IP address 192.168.72.172 and MAC address 52:54:00:32:8e:20 in network mk-newest-cni-223910
	I0927 02:01:32.100650   75731 main.go:141] libmachine: (newest-cni-223910) DBG | Using SSH client type: external
	I0927 02:01:32.100674   75731 main.go:141] libmachine: (newest-cni-223910) DBG | Using SSH private key: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/newest-cni-223910/id_rsa (-rw-------)
	I0927 02:01:32.100724   75731 main.go:141] libmachine: (newest-cni-223910) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.172 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19711-14935/.minikube/machines/newest-cni-223910/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 02:01:32.100743   75731 main.go:141] libmachine: (newest-cni-223910) DBG | About to run SSH command:
	I0927 02:01:32.100755   75731 main.go:141] libmachine: (newest-cni-223910) DBG | exit 0
	I0927 02:01:32.223343   75731 main.go:141] libmachine: (newest-cni-223910) DBG | SSH cmd err, output: <nil>: 
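	The `exit 0` run above is only a reachability probe: once the external ssh client can execute it cleanly, the guest's sshd is considered available. A rough sketch of the same probe, reusing options shown in the log; the key path below is a placeholder:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady reports whether "ssh … exit 0" succeeds against the guest.
func sshReady(keyPath, addr string) bool {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", keyPath,
		"docker@"+addr,
		"exit 0")
	return cmd.Run() == nil
}

func main() {
	for i := 0; i < 30; i++ {
		if sshReady("/path/to/id_rsa", "192.168.72.172") {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}
```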
	I0927 02:01:32.223600   75731 main.go:141] libmachine: (newest-cni-223910) KVM machine creation complete!
	I0927 02:01:32.223894   75731 main.go:141] libmachine: (newest-cni-223910) Calling .GetConfigRaw
	I0927 02:01:32.224459   75731 main.go:141] libmachine: (newest-cni-223910) Calling .DriverName
	I0927 02:01:32.224669   75731 main.go:141] libmachine: (newest-cni-223910) Calling .DriverName
	I0927 02:01:32.224811   75731 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0927 02:01:32.224826   75731 main.go:141] libmachine: (newest-cni-223910) Calling .GetState
	I0927 02:01:32.226274   75731 main.go:141] libmachine: Detecting operating system of created instance...
	I0927 02:01:32.226286   75731 main.go:141] libmachine: Waiting for SSH to be available...
	I0927 02:01:32.226291   75731 main.go:141] libmachine: Getting to WaitForSSH function...
	I0927 02:01:32.226296   75731 main.go:141] libmachine: (newest-cni-223910) Calling .GetSSHHostname
	I0927 02:01:32.228737   75731 main.go:141] libmachine: (newest-cni-223910) DBG | domain newest-cni-223910 has defined MAC address 52:54:00:32:8e:20 in network mk-newest-cni-223910
	I0927 02:01:32.229027   75731 main.go:141] libmachine: (newest-cni-223910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:8e:20", ip: ""} in network mk-newest-cni-223910: {Iface:virbr1 ExpiryTime:2024-09-27 03:01:23 +0000 UTC Type:0 Mac:52:54:00:32:8e:20 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:newest-cni-223910 Clientid:01:52:54:00:32:8e:20}
	I0927 02:01:32.229050   75731 main.go:141] libmachine: (newest-cni-223910) DBG | domain newest-cni-223910 has defined IP address 192.168.72.172 and MAC address 52:54:00:32:8e:20 in network mk-newest-cni-223910
	I0927 02:01:32.229152   75731 main.go:141] libmachine: (newest-cni-223910) Calling .GetSSHPort
	I0927 02:01:32.229333   75731 main.go:141] libmachine: (newest-cni-223910) Calling .GetSSHKeyPath
	I0927 02:01:32.229512   75731 main.go:141] libmachine: (newest-cni-223910) Calling .GetSSHKeyPath
	I0927 02:01:32.229645   75731 main.go:141] libmachine: (newest-cni-223910) Calling .GetSSHUsername
	I0927 02:01:32.229816   75731 main.go:141] libmachine: Using SSH client type: native
	I0927 02:01:32.230042   75731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.172 22 <nil> <nil>}
	I0927 02:01:32.230055   75731 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0927 02:01:32.326683   75731 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 02:01:32.326706   75731 main.go:141] libmachine: Detecting the provisioner...
	I0927 02:01:32.326716   75731 main.go:141] libmachine: (newest-cni-223910) Calling .GetSSHHostname
	I0927 02:01:32.329390   75731 main.go:141] libmachine: (newest-cni-223910) DBG | domain newest-cni-223910 has defined MAC address 52:54:00:32:8e:20 in network mk-newest-cni-223910
	I0927 02:01:32.329719   75731 main.go:141] libmachine: (newest-cni-223910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:8e:20", ip: ""} in network mk-newest-cni-223910: {Iface:virbr1 ExpiryTime:2024-09-27 03:01:23 +0000 UTC Type:0 Mac:52:54:00:32:8e:20 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:newest-cni-223910 Clientid:01:52:54:00:32:8e:20}
	I0927 02:01:32.329743   75731 main.go:141] libmachine: (newest-cni-223910) DBG | domain newest-cni-223910 has defined IP address 192.168.72.172 and MAC address 52:54:00:32:8e:20 in network mk-newest-cni-223910
	I0927 02:01:32.329888   75731 main.go:141] libmachine: (newest-cni-223910) Calling .GetSSHPort
	I0927 02:01:32.330064   75731 main.go:141] libmachine: (newest-cni-223910) Calling .GetSSHKeyPath
	I0927 02:01:32.330194   75731 main.go:141] libmachine: (newest-cni-223910) Calling .GetSSHKeyPath
	I0927 02:01:32.330345   75731 main.go:141] libmachine: (newest-cni-223910) Calling .GetSSHUsername
	I0927 02:01:32.330498   75731 main.go:141] libmachine: Using SSH client type: native
	I0927 02:01:32.330677   75731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.172 22 <nil> <nil>}
	I0927 02:01:32.330687   75731 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0927 02:01:32.428393   75731 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0927 02:01:32.428465   75731 main.go:141] libmachine: found compatible host: buildroot
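	Provisioner detection above amounts to reading /etc/os-release over SSH and matching its ID/NAME fields (Buildroot here). An illustrative parser for that key=value format:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease turns /etc/os-release style "KEY=value" lines into a map,
// stripping optional surrounding quotes from the values.
func parseOSRelease(contents string) map[string]string {
	fields := make(map[string]string)
	sc := bufio.NewScanner(strings.NewReader(contents))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		key, value, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		fields[key] = strings.Trim(value, `"`)
	}
	return fields
}

func main() {
	sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	osr := parseOSRelease(sample)
	fmt.Println("detected provisioner:", osr["ID"], osr["VERSION_ID"]) // buildroot 2023.02.9
}
```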
	I0927 02:01:32.428475   75731 main.go:141] libmachine: Provisioning with buildroot...
	I0927 02:01:32.428482   75731 main.go:141] libmachine: (newest-cni-223910) Calling .GetMachineName
	I0927 02:01:32.428722   75731 buildroot.go:166] provisioning hostname "newest-cni-223910"
	I0927 02:01:32.428746   75731 main.go:141] libmachine: (newest-cni-223910) Calling .GetMachineName
	I0927 02:01:32.428946   75731 main.go:141] libmachine: (newest-cni-223910) Calling .GetSSHHostname
	I0927 02:01:32.431451   75731 main.go:141] libmachine: (newest-cni-223910) DBG | domain newest-cni-223910 has defined MAC address 52:54:00:32:8e:20 in network mk-newest-cni-223910
	I0927 02:01:32.431836   75731 main.go:141] libmachine: (newest-cni-223910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:8e:20", ip: ""} in network mk-newest-cni-223910: {Iface:virbr1 ExpiryTime:2024-09-27 03:01:23 +0000 UTC Type:0 Mac:52:54:00:32:8e:20 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:newest-cni-223910 Clientid:01:52:54:00:32:8e:20}
	I0927 02:01:32.431863   75731 main.go:141] libmachine: (newest-cni-223910) DBG | domain newest-cni-223910 has defined IP address 192.168.72.172 and MAC address 52:54:00:32:8e:20 in network mk-newest-cni-223910
	I0927 02:01:32.432043   75731 main.go:141] libmachine: (newest-cni-223910) Calling .GetSSHPort
	I0927 02:01:32.432254   75731 main.go:141] libmachine: (newest-cni-223910) Calling .GetSSHKeyPath
	I0927 02:01:32.432437   75731 main.go:141] libmachine: (newest-cni-223910) Calling .GetSSHKeyPath
	I0927 02:01:32.432569   75731 main.go:141] libmachine: (newest-cni-223910) Calling .GetSSHUsername
	I0927 02:01:32.432721   75731 main.go:141] libmachine: Using SSH client type: native
	I0927 02:01:32.432869   75731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.172 22 <nil> <nil>}
	I0927 02:01:32.432879   75731 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-223910 && echo "newest-cni-223910" | sudo tee /etc/hostname
	I0927 02:01:32.548299   75731 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-223910
	
	I0927 02:01:32.548330   75731 main.go:141] libmachine: (newest-cni-223910) Calling .GetSSHHostname
	I0927 02:01:32.551324   75731 main.go:141] libmachine: (newest-cni-223910) DBG | domain newest-cni-223910 has defined MAC address 52:54:00:32:8e:20 in network mk-newest-cni-223910
	I0927 02:01:32.551676   75731 main.go:141] libmachine: (newest-cni-223910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:8e:20", ip: ""} in network mk-newest-cni-223910: {Iface:virbr1 ExpiryTime:2024-09-27 03:01:23 +0000 UTC Type:0 Mac:52:54:00:32:8e:20 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:newest-cni-223910 Clientid:01:52:54:00:32:8e:20}
	I0927 02:01:32.551703   75731 main.go:141] libmachine: (newest-cni-223910) DBG | domain newest-cni-223910 has defined IP address 192.168.72.172 and MAC address 52:54:00:32:8e:20 in network mk-newest-cni-223910
	I0927 02:01:32.551919   75731 main.go:141] libmachine: (newest-cni-223910) Calling .GetSSHPort
	I0927 02:01:32.552105   75731 main.go:141] libmachine: (newest-cni-223910) Calling .GetSSHKeyPath
	I0927 02:01:32.552265   75731 main.go:141] libmachine: (newest-cni-223910) Calling .GetSSHKeyPath
	I0927 02:01:32.552416   75731 main.go:141] libmachine: (newest-cni-223910) Calling .GetSSHUsername
	I0927 02:01:32.552638   75731 main.go:141] libmachine: Using SSH client type: native
	I0927 02:01:32.552799   75731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.172 22 <nil> <nil>}
	I0927 02:01:32.552815   75731 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-223910' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-223910/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-223910' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 02:01:32.660434   75731 main.go:141] libmachine: SSH cmd err, output: <nil>: 
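	The shell snippet above keeps /etc/hosts pointing 127.0.1.1 at the new hostname without adding duplicate entries. The same check-then-rewrite logic, sketched in Go purely for illustration (the real work is the shell shown in the log):

```go
package main

import (
	"fmt"
	"strings"
)

// ensureHostEntry returns the hosts file contents with "127.0.1.1 <name>"
// present exactly once: an existing 127.0.1.1 line is rewritten, otherwise
// a new line is appended. Pure function so it is easy to test.
func ensureHostEntry(hosts, name string) string {
	if strings.Contains(hosts, name) {
		return hosts // hostname already mapped, leave the file alone
	}
	lines := strings.Split(strings.TrimRight(hosts, "\n"), "\n")
	for i, line := range lines {
		if strings.HasPrefix(line, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name // rewrite the existing loopback alias
			return strings.Join(lines, "\n") + "\n"
		}
	}
	return strings.Join(lines, "\n") + "\n127.0.1.1 " + name + "\n"
}

func main() {
	hosts := "127.0.0.1 localhost\n127.0.1.1 minikube\n"
	fmt.Print(ensureHostEntry(hosts, "newest-cni-223910"))
}
```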
	I0927 02:01:32.660459   75731 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19711-14935/.minikube CaCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19711-14935/.minikube}
	I0927 02:01:32.660534   75731 buildroot.go:174] setting up certificates
	I0927 02:01:32.660551   75731 provision.go:84] configureAuth start
	I0927 02:01:32.660569   75731 main.go:141] libmachine: (newest-cni-223910) Calling .GetMachineName
	I0927 02:01:32.660842   75731 main.go:141] libmachine: (newest-cni-223910) Calling .GetIP
	I0927 02:01:32.663412   75731 main.go:141] libmachine: (newest-cni-223910) DBG | domain newest-cni-223910 has defined MAC address 52:54:00:32:8e:20 in network mk-newest-cni-223910
	I0927 02:01:32.663706   75731 main.go:141] libmachine: (newest-cni-223910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:8e:20", ip: ""} in network mk-newest-cni-223910: {Iface:virbr1 ExpiryTime:2024-09-27 03:01:23 +0000 UTC Type:0 Mac:52:54:00:32:8e:20 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:newest-cni-223910 Clientid:01:52:54:00:32:8e:20}
	I0927 02:01:32.663738   75731 main.go:141] libmachine: (newest-cni-223910) DBG | domain newest-cni-223910 has defined IP address 192.168.72.172 and MAC address 52:54:00:32:8e:20 in network mk-newest-cni-223910
	I0927 02:01:32.663892   75731 main.go:141] libmachine: (newest-cni-223910) Calling .GetSSHHostname
	I0927 02:01:32.666012   75731 main.go:141] libmachine: (newest-cni-223910) DBG | domain newest-cni-223910 has defined MAC address 52:54:00:32:8e:20 in network mk-newest-cni-223910
	I0927 02:01:32.666296   75731 main.go:141] libmachine: (newest-cni-223910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:8e:20", ip: ""} in network mk-newest-cni-223910: {Iface:virbr1 ExpiryTime:2024-09-27 03:01:23 +0000 UTC Type:0 Mac:52:54:00:32:8e:20 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:newest-cni-223910 Clientid:01:52:54:00:32:8e:20}
	I0927 02:01:32.666323   75731 main.go:141] libmachine: (newest-cni-223910) DBG | domain newest-cni-223910 has defined IP address 192.168.72.172 and MAC address 52:54:00:32:8e:20 in network mk-newest-cni-223910
	I0927 02:01:32.666416   75731 provision.go:143] copyHostCerts
	I0927 02:01:32.666485   75731 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem, removing ...
	I0927 02:01:32.666499   75731 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem
	I0927 02:01:32.666586   75731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem (1078 bytes)
	I0927 02:01:32.666717   75731 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem, removing ...
	I0927 02:01:32.666733   75731 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem
	I0927 02:01:32.666764   75731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem (1123 bytes)
	I0927 02:01:32.666831   75731 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem, removing ...
	I0927 02:01:32.666837   75731 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem
	I0927 02:01:32.666859   75731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem (1675 bytes)
	I0927 02:01:32.666918   75731 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem org=jenkins.newest-cni-223910 san=[127.0.0.1 192.168.72.172 localhost minikube newest-cni-223910]
	I0927 02:01:32.745758   75731 provision.go:177] copyRemoteCerts
	I0927 02:01:32.745808   75731 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 02:01:32.745829   75731 main.go:141] libmachine: (newest-cni-223910) Calling .GetSSHHostname
	I0927 02:01:32.748491   75731 main.go:141] libmachine: (newest-cni-223910) DBG | domain newest-cni-223910 has defined MAC address 52:54:00:32:8e:20 in network mk-newest-cni-223910
	I0927 02:01:32.748824   75731 main.go:141] libmachine: (newest-cni-223910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:8e:20", ip: ""} in network mk-newest-cni-223910: {Iface:virbr1 ExpiryTime:2024-09-27 03:01:23 +0000 UTC Type:0 Mac:52:54:00:32:8e:20 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:newest-cni-223910 Clientid:01:52:54:00:32:8e:20}
	I0927 02:01:32.748866   75731 main.go:141] libmachine: (newest-cni-223910) DBG | domain newest-cni-223910 has defined IP address 192.168.72.172 and MAC address 52:54:00:32:8e:20 in network mk-newest-cni-223910
	I0927 02:01:32.749106   75731 main.go:141] libmachine: (newest-cni-223910) Calling .GetSSHPort
	I0927 02:01:32.749311   75731 main.go:141] libmachine: (newest-cni-223910) Calling .GetSSHKeyPath
	I0927 02:01:32.749468   75731 main.go:141] libmachine: (newest-cni-223910) Calling .GetSSHUsername
	I0927 02:01:32.749644   75731 sshutil.go:53] new ssh client: &{IP:192.168.72.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/newest-cni-223910/id_rsa Username:docker}
	I0927 02:01:32.830030   75731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0927 02:01:32.854544   75731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0927 02:01:32.884225   75731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0927 02:01:32.909309   75731 provision.go:87] duration metric: took 248.7431ms to configureAuth
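	configureAuth above copies the host CA material and issues a per-machine server certificate whose SANs cover 127.0.0.1, the VM's IP and the machine's hostnames. A self-contained sketch of issuing such a SAN-bearing certificate with Go's crypto/x509; it self-signs for brevity, whereas minikube signs with its own CA key:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Key pair for the server certificate.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-223910"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches the CertExpiration in the profile
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log: loopback, the VM IP, and the machine's names.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.172")},
		DNSNames:    []string{"localhost", "minikube", "newest-cni-223910"},
	}

	// Self-signed here (template doubles as parent).
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```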
	I0927 02:01:32.909335   75731 buildroot.go:189] setting minikube options for container-runtime
	I0927 02:01:32.909564   75731 config.go:182] Loaded profile config "newest-cni-223910": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 02:01:32.909645   75731 main.go:141] libmachine: (newest-cni-223910) Calling .GetSSHHostname
	I0927 02:01:32.912453   75731 main.go:141] libmachine: (newest-cni-223910) DBG | domain newest-cni-223910 has defined MAC address 52:54:00:32:8e:20 in network mk-newest-cni-223910
	I0927 02:01:32.912773   75731 main.go:141] libmachine: (newest-cni-223910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:8e:20", ip: ""} in network mk-newest-cni-223910: {Iface:virbr1 ExpiryTime:2024-09-27 03:01:23 +0000 UTC Type:0 Mac:52:54:00:32:8e:20 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:newest-cni-223910 Clientid:01:52:54:00:32:8e:20}
	I0927 02:01:32.912798   75731 main.go:141] libmachine: (newest-cni-223910) DBG | domain newest-cni-223910 has defined IP address 192.168.72.172 and MAC address 52:54:00:32:8e:20 in network mk-newest-cni-223910
	I0927 02:01:32.912990   75731 main.go:141] libmachine: (newest-cni-223910) Calling .GetSSHPort
	I0927 02:01:32.913193   75731 main.go:141] libmachine: (newest-cni-223910) Calling .GetSSHKeyPath
	I0927 02:01:32.913486   75731 main.go:141] libmachine: (newest-cni-223910) Calling .GetSSHKeyPath
	I0927 02:01:32.913649   75731 main.go:141] libmachine: (newest-cni-223910) Calling .GetSSHUsername
	I0927 02:01:32.913795   75731 main.go:141] libmachine: Using SSH client type: native
	I0927 02:01:32.914007   75731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.172 22 <nil> <nil>}
	I0927 02:01:32.914023   75731 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 02:01:33.135133   75731 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 02:01:33.135176   75731 main.go:141] libmachine: Checking connection to Docker...
	I0927 02:01:33.135185   75731 main.go:141] libmachine: (newest-cni-223910) Calling .GetURL
	I0927 02:01:33.136383   75731 main.go:141] libmachine: (newest-cni-223910) DBG | Using libvirt version 6000000
	I0927 02:01:33.138734   75731 main.go:141] libmachine: (newest-cni-223910) DBG | domain newest-cni-223910 has defined MAC address 52:54:00:32:8e:20 in network mk-newest-cni-223910
	I0927 02:01:33.139126   75731 main.go:141] libmachine: (newest-cni-223910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:8e:20", ip: ""} in network mk-newest-cni-223910: {Iface:virbr1 ExpiryTime:2024-09-27 03:01:23 +0000 UTC Type:0 Mac:52:54:00:32:8e:20 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:newest-cni-223910 Clientid:01:52:54:00:32:8e:20}
	I0927 02:01:33.139158   75731 main.go:141] libmachine: (newest-cni-223910) DBG | domain newest-cni-223910 has defined IP address 192.168.72.172 and MAC address 52:54:00:32:8e:20 in network mk-newest-cni-223910
	I0927 02:01:33.139376   75731 main.go:141] libmachine: Docker is up and running!
	I0927 02:01:33.139390   75731 main.go:141] libmachine: Reticulating splines...
	I0927 02:01:33.139408   75731 client.go:171] duration metric: took 24.854799078s to LocalClient.Create
	I0927 02:01:33.139429   75731 start.go:167] duration metric: took 24.854872034s to libmachine.API.Create "newest-cni-223910"
	I0927 02:01:33.139438   75731 start.go:293] postStartSetup for "newest-cni-223910" (driver="kvm2")
	I0927 02:01:33.139463   75731 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 02:01:33.139478   75731 main.go:141] libmachine: (newest-cni-223910) Calling .DriverName
	I0927 02:01:33.139684   75731 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 02:01:33.139706   75731 main.go:141] libmachine: (newest-cni-223910) Calling .GetSSHHostname
	I0927 02:01:33.141757   75731 main.go:141] libmachine: (newest-cni-223910) DBG | domain newest-cni-223910 has defined MAC address 52:54:00:32:8e:20 in network mk-newest-cni-223910
	I0927 02:01:33.142133   75731 main.go:141] libmachine: (newest-cni-223910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:8e:20", ip: ""} in network mk-newest-cni-223910: {Iface:virbr1 ExpiryTime:2024-09-27 03:01:23 +0000 UTC Type:0 Mac:52:54:00:32:8e:20 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:newest-cni-223910 Clientid:01:52:54:00:32:8e:20}
	I0927 02:01:33.142161   75731 main.go:141] libmachine: (newest-cni-223910) DBG | domain newest-cni-223910 has defined IP address 192.168.72.172 and MAC address 52:54:00:32:8e:20 in network mk-newest-cni-223910
	I0927 02:01:33.142266   75731 main.go:141] libmachine: (newest-cni-223910) Calling .GetSSHPort
	I0927 02:01:33.142469   75731 main.go:141] libmachine: (newest-cni-223910) Calling .GetSSHKeyPath
	I0927 02:01:33.142628   75731 main.go:141] libmachine: (newest-cni-223910) Calling .GetSSHUsername
	I0927 02:01:33.142775   75731 sshutil.go:53] new ssh client: &{IP:192.168.72.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/newest-cni-223910/id_rsa Username:docker}
	I0927 02:01:33.228879   75731 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 02:01:33.233663   75731 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 02:01:33.233684   75731 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/addons for local assets ...
	I0927 02:01:33.233744   75731 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/files for local assets ...
	I0927 02:01:33.233815   75731 filesync.go:149] local asset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> 221382.pem in /etc/ssl/certs
	I0927 02:01:33.233899   75731 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 02:01:33.246405   75731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /etc/ssl/certs/221382.pem (1708 bytes)
	I0927 02:01:33.272765   75731 start.go:296] duration metric: took 133.310928ms for postStartSetup
	I0927 02:01:33.272818   75731 main.go:141] libmachine: (newest-cni-223910) Calling .GetConfigRaw
	I0927 02:01:33.273574   75731 main.go:141] libmachine: (newest-cni-223910) Calling .GetIP
	I0927 02:01:33.276084   75731 main.go:141] libmachine: (newest-cni-223910) DBG | domain newest-cni-223910 has defined MAC address 52:54:00:32:8e:20 in network mk-newest-cni-223910
	I0927 02:01:33.276437   75731 main.go:141] libmachine: (newest-cni-223910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:8e:20", ip: ""} in network mk-newest-cni-223910: {Iface:virbr1 ExpiryTime:2024-09-27 03:01:23 +0000 UTC Type:0 Mac:52:54:00:32:8e:20 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:newest-cni-223910 Clientid:01:52:54:00:32:8e:20}
	I0927 02:01:33.276466   75731 main.go:141] libmachine: (newest-cni-223910) DBG | domain newest-cni-223910 has defined IP address 192.168.72.172 and MAC address 52:54:00:32:8e:20 in network mk-newest-cni-223910
	I0927 02:01:33.276743   75731 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/newest-cni-223910/config.json ...
	I0927 02:01:33.276922   75731 start.go:128] duration metric: took 25.011583931s to createHost
	I0927 02:01:33.276944   75731 main.go:141] libmachine: (newest-cni-223910) Calling .GetSSHHostname
	I0927 02:01:33.279137   75731 main.go:141] libmachine: (newest-cni-223910) DBG | domain newest-cni-223910 has defined MAC address 52:54:00:32:8e:20 in network mk-newest-cni-223910
	I0927 02:01:33.279514   75731 main.go:141] libmachine: (newest-cni-223910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:8e:20", ip: ""} in network mk-newest-cni-223910: {Iface:virbr1 ExpiryTime:2024-09-27 03:01:23 +0000 UTC Type:0 Mac:52:54:00:32:8e:20 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:newest-cni-223910 Clientid:01:52:54:00:32:8e:20}
	I0927 02:01:33.279547   75731 main.go:141] libmachine: (newest-cni-223910) DBG | domain newest-cni-223910 has defined IP address 192.168.72.172 and MAC address 52:54:00:32:8e:20 in network mk-newest-cni-223910
	I0927 02:01:33.279684   75731 main.go:141] libmachine: (newest-cni-223910) Calling .GetSSHPort
	I0927 02:01:33.279881   75731 main.go:141] libmachine: (newest-cni-223910) Calling .GetSSHKeyPath
	I0927 02:01:33.280072   75731 main.go:141] libmachine: (newest-cni-223910) Calling .GetSSHKeyPath
	I0927 02:01:33.280275   75731 main.go:141] libmachine: (newest-cni-223910) Calling .GetSSHUsername
	I0927 02:01:33.280452   75731 main.go:141] libmachine: Using SSH client type: native
	I0927 02:01:33.280650   75731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.172 22 <nil> <nil>}
	I0927 02:01:33.280660   75731 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 02:01:33.388214   75731 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727402493.364982103
	
	I0927 02:01:33.388236   75731 fix.go:216] guest clock: 1727402493.364982103
	I0927 02:01:33.388244   75731 fix.go:229] Guest: 2024-09-27 02:01:33.364982103 +0000 UTC Remote: 2024-09-27 02:01:33.276933471 +0000 UTC m=+25.122475611 (delta=88.048632ms)
	I0927 02:01:33.388266   75731 fix.go:200] guest clock delta is within tolerance: 88.048632ms
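	The guest-clock check above runs `date +%s.%N` in the VM and compares it with the host clock, only correcting the time if the skew exceeds a tolerance (the 88ms delta here passes). A tiny sketch of that comparison; the one-second tolerance is an assumed value for illustration:

```go
package main

import (
	"fmt"
	"time"
)

// clockDeltaOK reports the absolute skew between the guest and host clocks
// and whether it falls within the allowed tolerance.
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(88 * time.Millisecond) // delta reported in the log
	delta, ok := clockDeltaOK(guest, host, time.Second)
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, ok)
}
```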
	I0927 02:01:33.388272   75731 start.go:83] releasing machines lock for "newest-cni-223910", held for 25.123041974s
	I0927 02:01:33.388296   75731 main.go:141] libmachine: (newest-cni-223910) Calling .DriverName
	I0927 02:01:33.388605   75731 main.go:141] libmachine: (newest-cni-223910) Calling .GetIP
	I0927 02:01:33.391687   75731 main.go:141] libmachine: (newest-cni-223910) DBG | domain newest-cni-223910 has defined MAC address 52:54:00:32:8e:20 in network mk-newest-cni-223910
	I0927 02:01:33.392055   75731 main.go:141] libmachine: (newest-cni-223910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:8e:20", ip: ""} in network mk-newest-cni-223910: {Iface:virbr1 ExpiryTime:2024-09-27 03:01:23 +0000 UTC Type:0 Mac:52:54:00:32:8e:20 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:newest-cni-223910 Clientid:01:52:54:00:32:8e:20}
	I0927 02:01:33.392081   75731 main.go:141] libmachine: (newest-cni-223910) DBG | domain newest-cni-223910 has defined IP address 192.168.72.172 and MAC address 52:54:00:32:8e:20 in network mk-newest-cni-223910
	I0927 02:01:33.392233   75731 main.go:141] libmachine: (newest-cni-223910) Calling .DriverName
	I0927 02:01:33.392769   75731 main.go:141] libmachine: (newest-cni-223910) Calling .DriverName
	I0927 02:01:33.392957   75731 main.go:141] libmachine: (newest-cni-223910) Calling .DriverName
	I0927 02:01:33.393059   75731 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 02:01:33.393098   75731 main.go:141] libmachine: (newest-cni-223910) Calling .GetSSHHostname
	I0927 02:01:33.393182   75731 ssh_runner.go:195] Run: cat /version.json
	I0927 02:01:33.393207   75731 main.go:141] libmachine: (newest-cni-223910) Calling .GetSSHHostname
	I0927 02:01:33.395683   75731 main.go:141] libmachine: (newest-cni-223910) DBG | domain newest-cni-223910 has defined MAC address 52:54:00:32:8e:20 in network mk-newest-cni-223910
	I0927 02:01:33.396045   75731 main.go:141] libmachine: (newest-cni-223910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:8e:20", ip: ""} in network mk-newest-cni-223910: {Iface:virbr1 ExpiryTime:2024-09-27 03:01:23 +0000 UTC Type:0 Mac:52:54:00:32:8e:20 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:newest-cni-223910 Clientid:01:52:54:00:32:8e:20}
	I0927 02:01:33.396076   75731 main.go:141] libmachine: (newest-cni-223910) DBG | domain newest-cni-223910 has defined IP address 192.168.72.172 and MAC address 52:54:00:32:8e:20 in network mk-newest-cni-223910
	I0927 02:01:33.396096   75731 main.go:141] libmachine: (newest-cni-223910) DBG | domain newest-cni-223910 has defined MAC address 52:54:00:32:8e:20 in network mk-newest-cni-223910
	I0927 02:01:33.396402   75731 main.go:141] libmachine: (newest-cni-223910) Calling .GetSSHPort
	I0927 02:01:33.396612   75731 main.go:141] libmachine: (newest-cni-223910) Calling .GetSSHKeyPath
	I0927 02:01:33.396703   75731 main.go:141] libmachine: (newest-cni-223910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:8e:20", ip: ""} in network mk-newest-cni-223910: {Iface:virbr1 ExpiryTime:2024-09-27 03:01:23 +0000 UTC Type:0 Mac:52:54:00:32:8e:20 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:newest-cni-223910 Clientid:01:52:54:00:32:8e:20}
	I0927 02:01:33.396726   75731 main.go:141] libmachine: (newest-cni-223910) DBG | domain newest-cni-223910 has defined IP address 192.168.72.172 and MAC address 52:54:00:32:8e:20 in network mk-newest-cni-223910
	I0927 02:01:33.396766   75731 main.go:141] libmachine: (newest-cni-223910) Calling .GetSSHUsername
	I0927 02:01:33.396954   75731 sshutil.go:53] new ssh client: &{IP:192.168.72.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/newest-cni-223910/id_rsa Username:docker}
	I0927 02:01:33.396967   75731 main.go:141] libmachine: (newest-cni-223910) Calling .GetSSHPort
	I0927 02:01:33.397144   75731 main.go:141] libmachine: (newest-cni-223910) Calling .GetSSHKeyPath
	I0927 02:01:33.397276   75731 main.go:141] libmachine: (newest-cni-223910) Calling .GetSSHUsername
	I0927 02:01:33.397425   75731 sshutil.go:53] new ssh client: &{IP:192.168.72.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/newest-cni-223910/id_rsa Username:docker}
	I0927 02:01:33.473407   75731 ssh_runner.go:195] Run: systemctl --version
	I0927 02:01:33.502440   75731 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 02:01:33.663499   75731 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 02:01:33.670180   75731 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 02:01:33.670240   75731 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 02:01:33.689308   75731 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0927 02:01:33.689334   75731 start.go:495] detecting cgroup driver to use...
	I0927 02:01:33.689398   75731 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 02:01:33.708721   75731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 02:01:33.723500   75731 docker.go:217] disabling cri-docker service (if available) ...
	I0927 02:01:33.723559   75731 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 02:01:33.737436   75731 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 02:01:33.751892   75731 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 02:01:33.877238   75731 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 02:01:34.042715   75731 docker.go:233] disabling docker service ...
	I0927 02:01:34.042783   75731 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 02:01:34.058564   75731 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 02:01:34.072573   75731 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 02:01:34.201520   75731 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 02:01:34.337377   75731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 02:01:34.352143   75731 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 02:01:34.371402   75731 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0927 02:01:34.371479   75731 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 02:01:34.385788   75731 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 02:01:34.385858   75731 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 02:01:34.396884   75731 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 02:01:34.408531   75731 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 02:01:34.419109   75731 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 02:01:34.430444   75731 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 02:01:34.440956   75731 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 02:01:34.458533   75731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
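	The sed invocations above rewrite a few keys in /etc/crio/crio.conf.d/02-crio.conf: the pause image, the cgroup manager, the conmon cgroup, and a default sysctl opening low ports to unprivileged pods. A Go sketch of the first three of those rewrites, with the file contents inlined so it runs standalone (values taken from the log; this is an illustration of the substitutions, not minikube's code):

```go
package main

import (
	"fmt"
	"regexp"
)

var (
	pauseRe  = regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	conmonRe = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`)
	cgroupRe = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
)

// rewriteCrioConf mirrors the sed sequence from the log: set the pause image,
// drop any existing conmon_cgroup line, then set cgroup_manager and re-add
// conmon_cgroup right after it.
func rewriteCrioConf(conf string) string {
	conf = pauseRe.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = conmonRe.ReplaceAllString(conf, "")
	conf = cgroupRe.ReplaceAllString(conf,
		"cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	return conf
}

func main() {
	conf := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n\n[crio.runtime]\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
	fmt.Print(rewriteCrioConf(conf))
}
```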
	I0927 02:01:34.468795   75731 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 02:01:34.479434   75731 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0927 02:01:34.479500   75731 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0927 02:01:34.493360   75731 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 02:01:34.502772   75731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 02:01:34.626064   75731 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0927 02:01:34.734405   75731 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 02:01:34.734477   75731 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 02:01:34.739114   75731 start.go:563] Will wait 60s for crictl version
	I0927 02:01:34.739164   75731 ssh_runner.go:195] Run: which crictl
	I0927 02:01:34.742976   75731 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 02:01:34.788316   75731 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0927 02:01:34.788398   75731 ssh_runner.go:195] Run: crio --version
	I0927 02:01:34.816245   75731 ssh_runner.go:195] Run: crio --version
	I0927 02:01:34.850734   75731 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0927 02:01:34.852280   75731 main.go:141] libmachine: (newest-cni-223910) Calling .GetIP
	I0927 02:01:34.854685   75731 main.go:141] libmachine: (newest-cni-223910) DBG | domain newest-cni-223910 has defined MAC address 52:54:00:32:8e:20 in network mk-newest-cni-223910
	I0927 02:01:34.854992   75731 main.go:141] libmachine: (newest-cni-223910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:8e:20", ip: ""} in network mk-newest-cni-223910: {Iface:virbr1 ExpiryTime:2024-09-27 03:01:23 +0000 UTC Type:0 Mac:52:54:00:32:8e:20 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:newest-cni-223910 Clientid:01:52:54:00:32:8e:20}
	I0927 02:01:34.855025   75731 main.go:141] libmachine: (newest-cni-223910) DBG | domain newest-cni-223910 has defined IP address 192.168.72.172 and MAC address 52:54:00:32:8e:20 in network mk-newest-cni-223910
	I0927 02:01:34.855220   75731 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0927 02:01:34.859350   75731 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 02:01:34.873943   75731 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0927 02:01:34.875435   75731 kubeadm.go:883] updating cluster {Name:newest-cni-223910 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.1 ClusterName:newest-cni-223910 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.172 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 02:01:34.875587   75731 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 02:01:34.875658   75731 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 02:01:34.907997   75731 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0927 02:01:34.908070   75731 ssh_runner.go:195] Run: which lz4
	I0927 02:01:34.912073   75731 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0927 02:01:34.916254   75731 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0927 02:01:34.916284   75731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0927 02:01:36.264852   75731 crio.go:462] duration metric: took 1.352813018s to copy over tarball
	I0927 02:01:36.264915   75731 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	
	
	==> CRI-O <==
	Sep 27 02:01:39 no-preload-521072 crio[714]: time="2024-09-27 02:01:39.860318765Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8892f86c-df81-4965-9a15-3a8c3f2c6ac6 name=/runtime.v1.RuntimeService/Version
	Sep 27 02:01:39 no-preload-521072 crio[714]: time="2024-09-27 02:01:39.861814424Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f80dd7db-3104-46a1-96e0-fc1ecbfaeebb name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 02:01:39 no-preload-521072 crio[714]: time="2024-09-27 02:01:39.862451112Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402499862410195,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f80dd7db-3104-46a1-96e0-fc1ecbfaeebb name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 02:01:39 no-preload-521072 crio[714]: time="2024-09-27 02:01:39.870764620Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4a4b3735-4d7e-48a8-97a4-7d6690a86861 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 02:01:39 no-preload-521072 crio[714]: time="2024-09-27 02:01:39.870847491Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4a4b3735-4d7e-48a8-97a4-7d6690a86861 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 02:01:39 no-preload-521072 crio[714]: time="2024-09-27 02:01:39.871044579Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f,PodSandboxId:9975596dc9c0baaab8fcb6ca04f9359781fcd0d626b9b9df1ddffcbca992d80e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727401369052837039,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4595dc3-c439-4615-95b7-2009476c779c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:832a7f68eca906b8f8b78a8578c2f0afaf2986a8f73d21dc599dd73aa4aa9ca5,PodSandboxId:a20e2c9b208a01e047683e06b35b30e92411977681127879310f7d0fddfe6ad0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727401348878219805,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8c6c402f-4b67-4a90-8eb7-324f03f53585,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0,PodSandboxId:dbcf3ee6d4d0bd2218bed9a78e24dda98759d150aeea1235cb15b0b15a314ee4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727401345652004463,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7q54t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f320e945-a1d6-4109-a0cc-5bd4e3c1bfba,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c,PodSandboxId:9975596dc9c0baaab8fcb6ca04f9359781fcd0d626b9b9df1ddffcbca992d80e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727401338338708415,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
4595dc3-c439-4615-95b7-2009476c779c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f,PodSandboxId:69c5b273b68533a15a49301449049daae16fb9ab05d748cb258809958d1e2e47,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727401338309613993,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wkcb8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea79339c-b2f0-4cb8-ab57-4f13f689f5
04,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0,PodSandboxId:38a38f07872e89bba912447745d43f69ce430d0632bfd249cf1943751c31934c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727401333596293674,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-521072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b655f3bead38c68715c574a3279ec998,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05,PodSandboxId:f271cec15bbe86fb55ef28be83f91845275647eb4fcd41656b5421639fd94dce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727401333519879158,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-521072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6529bcc6dfdf213f612ff6952ca523ec,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef,PodSandboxId:76615b305a8b479ed5a2c44717fda459c726b98b1ce2fabbbf782769cf68608f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727401333504162598,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-521072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 238f74dc8cff297b820edab9dffa14f9,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2
713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647,PodSandboxId:c4ee5cc7c625309a667faa64bfe1957ae23a5241770c7c646855e08a1f5cd070,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727401333447335611,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-521072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b68fc4ed89d33bb903e1ebb161b99bd4,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4a4b3735-4d7e-48a8-97a4-7d6690a86861 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 02:01:39 no-preload-521072 crio[714]: time="2024-09-27 02:01:39.917474219Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=12a0a601-7894-4052-83c0-429aa1bad037 name=/runtime.v1.RuntimeService/Version
	Sep 27 02:01:39 no-preload-521072 crio[714]: time="2024-09-27 02:01:39.917598759Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=12a0a601-7894-4052-83c0-429aa1bad037 name=/runtime.v1.RuntimeService/Version
	Sep 27 02:01:39 no-preload-521072 crio[714]: time="2024-09-27 02:01:39.918818478Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=82119da4-8586-47a8-a064-7633c465d0cc name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 02:01:39 no-preload-521072 crio[714]: time="2024-09-27 02:01:39.919731867Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402499919698977,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=82119da4-8586-47a8-a064-7633c465d0cc name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 02:01:39 no-preload-521072 crio[714]: time="2024-09-27 02:01:39.920633719Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=39a5fefc-e469-447f-b9b3-36729bfbd6db name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 02:01:39 no-preload-521072 crio[714]: time="2024-09-27 02:01:39.920785011Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=39a5fefc-e469-447f-b9b3-36729bfbd6db name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 02:01:39 no-preload-521072 crio[714]: time="2024-09-27 02:01:39.921048787Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f,PodSandboxId:9975596dc9c0baaab8fcb6ca04f9359781fcd0d626b9b9df1ddffcbca992d80e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727401369052837039,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4595dc3-c439-4615-95b7-2009476c779c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:832a7f68eca906b8f8b78a8578c2f0afaf2986a8f73d21dc599dd73aa4aa9ca5,PodSandboxId:a20e2c9b208a01e047683e06b35b30e92411977681127879310f7d0fddfe6ad0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727401348878219805,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8c6c402f-4b67-4a90-8eb7-324f03f53585,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0,PodSandboxId:dbcf3ee6d4d0bd2218bed9a78e24dda98759d150aeea1235cb15b0b15a314ee4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727401345652004463,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7q54t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f320e945-a1d6-4109-a0cc-5bd4e3c1bfba,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c,PodSandboxId:9975596dc9c0baaab8fcb6ca04f9359781fcd0d626b9b9df1ddffcbca992d80e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727401338338708415,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
4595dc3-c439-4615-95b7-2009476c779c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f,PodSandboxId:69c5b273b68533a15a49301449049daae16fb9ab05d748cb258809958d1e2e47,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727401338309613993,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wkcb8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea79339c-b2f0-4cb8-ab57-4f13f689f5
04,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0,PodSandboxId:38a38f07872e89bba912447745d43f69ce430d0632bfd249cf1943751c31934c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727401333596293674,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-521072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b655f3bead38c68715c574a3279ec998,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05,PodSandboxId:f271cec15bbe86fb55ef28be83f91845275647eb4fcd41656b5421639fd94dce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727401333519879158,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-521072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6529bcc6dfdf213f612ff6952ca523ec,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef,PodSandboxId:76615b305a8b479ed5a2c44717fda459c726b98b1ce2fabbbf782769cf68608f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727401333504162598,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-521072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 238f74dc8cff297b820edab9dffa14f9,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2
713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647,PodSandboxId:c4ee5cc7c625309a667faa64bfe1957ae23a5241770c7c646855e08a1f5cd070,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727401333447335611,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-521072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b68fc4ed89d33bb903e1ebb161b99bd4,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=39a5fefc-e469-447f-b9b3-36729bfbd6db name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 02:01:39 no-preload-521072 crio[714]: time="2024-09-27 02:01:39.967521273Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3ea4dd6f-14d1-434f-9e22-56b425301fae name=/runtime.v1.RuntimeService/Version
	Sep 27 02:01:39 no-preload-521072 crio[714]: time="2024-09-27 02:01:39.967726340Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3ea4dd6f-14d1-434f-9e22-56b425301fae name=/runtime.v1.RuntimeService/Version
	Sep 27 02:01:39 no-preload-521072 crio[714]: time="2024-09-27 02:01:39.970202241Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3de8c0e9-684a-4a46-9246-c09ea253f9e0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 02:01:39 no-preload-521072 crio[714]: time="2024-09-27 02:01:39.970733985Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402499970700568,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3de8c0e9-684a-4a46-9246-c09ea253f9e0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 02:01:39 no-preload-521072 crio[714]: time="2024-09-27 02:01:39.971816761Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c9123d46-a1f6-4a29-b502-a133a9a1d336 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 02:01:39 no-preload-521072 crio[714]: time="2024-09-27 02:01:39.971883558Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c9123d46-a1f6-4a29-b502-a133a9a1d336 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 02:01:39 no-preload-521072 crio[714]: time="2024-09-27 02:01:39.972096746Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f,PodSandboxId:9975596dc9c0baaab8fcb6ca04f9359781fcd0d626b9b9df1ddffcbca992d80e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727401369052837039,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4595dc3-c439-4615-95b7-2009476c779c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:832a7f68eca906b8f8b78a8578c2f0afaf2986a8f73d21dc599dd73aa4aa9ca5,PodSandboxId:a20e2c9b208a01e047683e06b35b30e92411977681127879310f7d0fddfe6ad0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727401348878219805,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8c6c402f-4b67-4a90-8eb7-324f03f53585,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0,PodSandboxId:dbcf3ee6d4d0bd2218bed9a78e24dda98759d150aeea1235cb15b0b15a314ee4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727401345652004463,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7q54t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f320e945-a1d6-4109-a0cc-5bd4e3c1bfba,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c,PodSandboxId:9975596dc9c0baaab8fcb6ca04f9359781fcd0d626b9b9df1ddffcbca992d80e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727401338338708415,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
4595dc3-c439-4615-95b7-2009476c779c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f,PodSandboxId:69c5b273b68533a15a49301449049daae16fb9ab05d748cb258809958d1e2e47,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727401338309613993,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wkcb8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea79339c-b2f0-4cb8-ab57-4f13f689f5
04,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0,PodSandboxId:38a38f07872e89bba912447745d43f69ce430d0632bfd249cf1943751c31934c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727401333596293674,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-521072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b655f3bead38c68715c574a3279ec998,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05,PodSandboxId:f271cec15bbe86fb55ef28be83f91845275647eb4fcd41656b5421639fd94dce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727401333519879158,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-521072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6529bcc6dfdf213f612ff6952ca523ec,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef,PodSandboxId:76615b305a8b479ed5a2c44717fda459c726b98b1ce2fabbbf782769cf68608f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727401333504162598,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-521072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 238f74dc8cff297b820edab9dffa14f9,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2
713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647,PodSandboxId:c4ee5cc7c625309a667faa64bfe1957ae23a5241770c7c646855e08a1f5cd070,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727401333447335611,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-521072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b68fc4ed89d33bb903e1ebb161b99bd4,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c9123d46-a1f6-4a29-b502-a133a9a1d336 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 02:01:40 no-preload-521072 crio[714]: time="2024-09-27 02:01:40.043012220Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=86997b71-a37d-47cf-af6b-36b1e1e20673 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 27 02:01:40 no-preload-521072 crio[714]: time="2024-09-27 02:01:40.043404119Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:a20e2c9b208a01e047683e06b35b30e92411977681127879310f7d0fddfe6ad0,Metadata:&PodSandboxMetadata{Name:busybox,Uid:8c6c402f-4b67-4a90-8eb7-324f03f53585,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727401345661361702,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8c6c402f-4b67-4a90-8eb7-324f03f53585,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-27T01:42:17.785003312Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dbcf3ee6d4d0bd2218bed9a78e24dda98759d150aeea1235cb15b0b15a314ee4,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-7q54t,Uid:f320e945-a1d6-4109-a0cc-5bd4e3c1bfba,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:17274013454609007
65,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-7q54t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f320e945-a1d6-4109-a0cc-5bd4e3c1bfba,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-27T01:42:17.785008700Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:612b7b7a09971da2cbf9b3dbd6ff8b0d1975d944babb5387175a8a981dbe57c3,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-cc9pp,Uid:a840ca52-d2b8-47a5-b379-30504658e0d0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727401343864036242,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-cc9pp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a840ca52-d2b8-47a5-b379-30504658e0d0,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-27T01:42:17.7
85017200Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9975596dc9c0baaab8fcb6ca04f9359781fcd0d626b9b9df1ddffcbca992d80e,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:b4595dc3-c439-4615-95b7-2009476c779c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727401338102723302,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4595dc3-c439-4615-95b7-2009476c779c,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-m
inikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-27T01:42:17.785011446Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:69c5b273b68533a15a49301449049daae16fb9ab05d748cb258809958d1e2e47,Metadata:&PodSandboxMetadata{Name:kube-proxy-wkcb8,Uid:ea79339c-b2f0-4cb8-ab57-4f13f689f504,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727401338101130080,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-wkcb8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea79339c-b2f0-4cb8-ab57-4f13f689f504,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io
/config.seen: 2024-09-27T01:42:17.785015055Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:38a38f07872e89bba912447745d43f69ce430d0632bfd249cf1943751c31934c,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-521072,Uid:b655f3bead38c68715c574a3279ec998,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727401333294838089,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-521072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b655f3bead38c68715c574a3279ec998,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.246:2379,kubernetes.io/config.hash: b655f3bead38c68715c574a3279ec998,kubernetes.io/config.seen: 2024-09-27T01:42:12.840790193Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f271cec15bbe86fb55ef28be83f91845275647eb4fcd41656b5421639fd94dce,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-521072,
Uid:6529bcc6dfdf213f612ff6952ca523ec,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727401333288545680,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-521072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6529bcc6dfdf213f612ff6952ca523ec,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 6529bcc6dfdf213f612ff6952ca523ec,kubernetes.io/config.seen: 2024-09-27T01:42:12.778627554Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:76615b305a8b479ed5a2c44717fda459c726b98b1ce2fabbbf782769cf68608f,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-521072,Uid:238f74dc8cff297b820edab9dffa14f9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727401333284178207,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-521072,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 238f74dc8cff297b820edab9dffa14f9,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.246:8443,kubernetes.io/config.hash: 238f74dc8cff297b820edab9dffa14f9,kubernetes.io/config.seen: 2024-09-27T01:42:12.778631708Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c4ee5cc7c625309a667faa64bfe1957ae23a5241770c7c646855e08a1f5cd070,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-521072,Uid:b68fc4ed89d33bb903e1ebb161b99bd4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727401333275820595,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-521072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b68fc4ed89d33bb903e1ebb161b99bd4,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b68fc4ed89d33bb903e1ebb161b99bd4,ku
bernetes.io/config.seen: 2024-09-27T01:42:12.778632906Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=86997b71-a37d-47cf-af6b-36b1e1e20673 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 27 02:01:40 no-preload-521072 crio[714]: time="2024-09-27 02:01:40.044223960Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3a1d770c-2286-45a3-b637-70de2c907889 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 02:01:40 no-preload-521072 crio[714]: time="2024-09-27 02:01:40.044308658Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3a1d770c-2286-45a3-b637-70de2c907889 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 02:01:40 no-preload-521072 crio[714]: time="2024-09-27 02:01:40.044574568Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f,PodSandboxId:9975596dc9c0baaab8fcb6ca04f9359781fcd0d626b9b9df1ddffcbca992d80e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727401369052837039,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4595dc3-c439-4615-95b7-2009476c779c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:832a7f68eca906b8f8b78a8578c2f0afaf2986a8f73d21dc599dd73aa4aa9ca5,PodSandboxId:a20e2c9b208a01e047683e06b35b30e92411977681127879310f7d0fddfe6ad0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727401348878219805,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8c6c402f-4b67-4a90-8eb7-324f03f53585,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0,PodSandboxId:dbcf3ee6d4d0bd2218bed9a78e24dda98759d150aeea1235cb15b0b15a314ee4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727401345652004463,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7q54t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f320e945-a1d6-4109-a0cc-5bd4e3c1bfba,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c,PodSandboxId:9975596dc9c0baaab8fcb6ca04f9359781fcd0d626b9b9df1ddffcbca992d80e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727401338338708415,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
4595dc3-c439-4615-95b7-2009476c779c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f,PodSandboxId:69c5b273b68533a15a49301449049daae16fb9ab05d748cb258809958d1e2e47,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727401338309613993,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wkcb8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea79339c-b2f0-4cb8-ab57-4f13f689f5
04,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0,PodSandboxId:38a38f07872e89bba912447745d43f69ce430d0632bfd249cf1943751c31934c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727401333596293674,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-521072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b655f3bead38c68715c574a3279ec998,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05,PodSandboxId:f271cec15bbe86fb55ef28be83f91845275647eb4fcd41656b5421639fd94dce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727401333519879158,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-521072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6529bcc6dfdf213f612ff6952ca523ec,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef,PodSandboxId:76615b305a8b479ed5a2c44717fda459c726b98b1ce2fabbbf782769cf68608f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727401333504162598,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-521072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 238f74dc8cff297b820edab9dffa14f9,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2
713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647,PodSandboxId:c4ee5cc7c625309a667faa64bfe1957ae23a5241770c7c646855e08a1f5cd070,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727401333447335611,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-521072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b68fc4ed89d33bb903e1ebb161b99bd4,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3a1d770c-2286-45a3-b637-70de2c907889 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8b91015e1bfce       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      18 minutes ago      Running             storage-provisioner       2                   9975596dc9c0b       storage-provisioner
	832a7f68eca90       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   19 minutes ago      Running             busybox                   1                   a20e2c9b208a0       busybox
	5a757b127a9ab       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      19 minutes ago      Running             coredns                   1                   dbcf3ee6d4d0b       coredns-7c65d6cfc9-7q54t
	074b4636352f0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Exited              storage-provisioner       1                   9975596dc9c0b       storage-provisioner
	d44b4389046f9       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      19 minutes ago      Running             kube-proxy                1                   69c5b273b6853       kube-proxy-wkcb8
	703936dc7e81f       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      19 minutes ago      Running             etcd                      1                   38a38f07872e8       etcd-no-preload-521072
	22e50606ae328       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      19 minutes ago      Running             kube-scheduler            1                   f271cec15bbe8       kube-scheduler-no-preload-521072
	d5488a6ee0ac8       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      19 minutes ago      Running             kube-apiserver            1                   76615b305a8b4       kube-apiserver-no-preload-521072
	56ed48053950b       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      19 minutes ago      Running             kube-controller-manager   1                   c4ee5cc7c6253       kube-controller-manager-no-preload-521072
	
	
	==> coredns [5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:46063 - 60754 "HINFO IN 4081009560286700448.717705552608654863. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.034835274s
	
	
	==> describe nodes <==
	Name:               no-preload-521072
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-521072
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=no-preload-521072
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_27T01_32_52_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 01:32:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-521072
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 02:01:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 01:58:05 +0000   Fri, 27 Sep 2024 01:32:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 01:58:05 +0000   Fri, 27 Sep 2024 01:32:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 01:58:05 +0000   Fri, 27 Sep 2024 01:32:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 01:58:05 +0000   Fri, 27 Sep 2024 01:42:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.246
	  Hostname:    no-preload-521072
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b4d3d92178f544bd8b9e5f9464d5796b
	  System UUID:                b4d3d921-78f5-44bd-8b9e-5f9464d5796b
	  Boot ID:                    125f112c-b20d-4947-b382-b5df32c753c4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-7c65d6cfc9-7q54t                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-no-preload-521072                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-no-preload-521072             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-no-preload-521072    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-wkcb8                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-no-preload-521072             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-6867b74b74-cc9pp              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 19m                kube-proxy       
	  Normal  NodeHasSufficientPID     28m                kubelet          Node no-preload-521072 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node no-preload-521072 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node no-preload-521072 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeReady                28m                kubelet          Node no-preload-521072 status is now: NodeReady
	  Normal  RegisteredNode           28m                node-controller  Node no-preload-521072 event: Registered Node no-preload-521072 in Controller
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node no-preload-521072 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node no-preload-521072 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node no-preload-521072 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                node-controller  Node no-preload-521072 event: Registered Node no-preload-521072 in Controller
	
	
	==> dmesg <==
	[Sep27 01:41] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051871] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041737] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.026914] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.547868] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.621717] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000010] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.271130] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.064478] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062998] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.174303] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +0.172334] systemd-fstab-generator[674]: Ignoring "noauto" option for root device
	[  +0.308872] systemd-fstab-generator[704]: Ignoring "noauto" option for root device
	[Sep27 01:42] systemd-fstab-generator[1242]: Ignoring "noauto" option for root device
	[  +0.058722] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.334460] systemd-fstab-generator[1364]: Ignoring "noauto" option for root device
	[  +3.304612] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.195729] systemd-fstab-generator[1998]: Ignoring "noauto" option for root device
	[  +0.118715] kauditd_printk_skb: 37 callbacks suppressed
	[  +6.726397] kauditd_printk_skb: 65 callbacks suppressed
	
	
	==> etcd [703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0] <==
	{"level":"info","ts":"2024-09-27T01:57:15.815680Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1081,"took":"3.743313ms","hash":3599331725,"current-db-size-bytes":2719744,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1605632,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-09-27T01:57:15.815732Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3599331725,"revision":1081,"compact-revision":838}
	{"level":"warn","ts":"2024-09-27T02:01:40.677715Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":10378706081040202358,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-09-27T02:01:41.037332Z","caller":"traceutil/trace.go:171","msg":"trace[147382551] linearizableReadLoop","detail":"{readStateIndex:1809; appliedIndex:1808; }","duration":"859.71809ms","start":"2024-09-27T02:01:40.177575Z","end":"2024-09-27T02:01:41.037293Z","steps":["trace[147382551] 'read index received'  (duration: 859.516421ms)","trace[147382551] 'applied index is now lower than readState.Index'  (duration: 201.108µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-27T02:01:41.037449Z","caller":"traceutil/trace.go:171","msg":"trace[127727036] transaction","detail":"{read_only:false; response_revision:1537; number_of_response:1; }","duration":"911.565632ms","start":"2024-09-27T02:01:40.125871Z","end":"2024-09-27T02:01:41.037437Z","steps":["trace[127727036] 'process raft request'  (duration: 911.296179ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T02:01:41.037833Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"708.321325ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-27T02:01:41.037954Z","caller":"traceutil/trace.go:171","msg":"trace[1178415286] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1537; }","duration":"708.492644ms","start":"2024-09-27T02:01:40.329453Z","end":"2024-09-27T02:01:41.037945Z","steps":["trace[1178415286] 'agreement among raft nodes before linearized reading'  (duration: 708.303277ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T02:01:41.038144Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"860.554653ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-27T02:01:41.038188Z","caller":"traceutil/trace.go:171","msg":"trace[1532500980] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1537; }","duration":"860.611227ms","start":"2024-09-27T02:01:40.177571Z","end":"2024-09-27T02:01:41.038182Z","steps":["trace[1532500980] 'agreement among raft nodes before linearized reading'  (duration: 860.542019ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T02:01:41.038234Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-27T02:01:40.177536Z","time spent":"860.688425ms","remote":"127.0.0.1:35888","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-09-27T02:01:41.038457Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"681.718504ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.50.246\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2024-09-27T02:01:41.039084Z","caller":"traceutil/trace.go:171","msg":"trace[214554157] range","detail":"{range_begin:/registry/masterleases/192.168.50.246; range_end:; response_count:1; response_revision:1537; }","duration":"682.34525ms","start":"2024-09-27T02:01:40.356730Z","end":"2024-09-27T02:01:41.039075Z","steps":["trace[214554157] 'agreement among raft nodes before linearized reading'  (duration: 681.690531ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T02:01:41.039148Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-27T02:01:40.356636Z","time spent":"682.500603ms","remote":"127.0.0.1:35922","response type":"/etcdserverpb.KV/Range","request count":0,"request size":39,"response count":1,"response size":157,"request content":"key:\"/registry/masterleases/192.168.50.246\" "}
	{"level":"warn","ts":"2024-09-27T02:01:41.039196Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-27T02:01:40.125850Z","time spent":"911.62213ms","remote":"127.0.0.1:36064","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1536 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-09-27T02:01:41.266087Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"134.718911ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10378706081040202363 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:1008923124967a7a>","response":"size:39"}
	{"level":"info","ts":"2024-09-27T02:01:41.266175Z","caller":"traceutil/trace.go:171","msg":"trace[2049818943] linearizableReadLoop","detail":"{readStateIndex:1810; appliedIndex:1809; }","duration":"225.686207ms","start":"2024-09-27T02:01:41.040477Z","end":"2024-09-27T02:01:41.266164Z","steps":["trace[2049818943] 'read index received'  (duration: 90.556892ms)","trace[2049818943] 'applied index is now lower than readState.Index'  (duration: 135.128292ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-27T02:01:41.266272Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"225.784512ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-27T02:01:41.266295Z","caller":"traceutil/trace.go:171","msg":"trace[556152912] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1537; }","duration":"225.813413ms","start":"2024-09-27T02:01:41.040475Z","end":"2024-09-27T02:01:41.266288Z","steps":["trace[556152912] 'agreement among raft nodes before linearized reading'  (duration: 225.761518ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T02:01:41.547031Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"139.290559ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10378706081040202366 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.50.246\" mod_revision:1530 > success:<request_put:<key:\"/registry/masterleases/192.168.50.246\" value_size:67 lease:1155334044185426554 >> failure:<request_range:<key:\"/registry/masterleases/192.168.50.246\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-09-27T02:01:41.547563Z","caller":"traceutil/trace.go:171","msg":"trace[76969396] linearizableReadLoop","detail":"{readStateIndex:1811; appliedIndex:1810; }","duration":"218.410937ms","start":"2024-09-27T02:01:41.329044Z","end":"2024-09-27T02:01:41.547455Z","steps":["trace[76969396] 'read index received'  (duration: 78.544333ms)","trace[76969396] 'applied index is now lower than readState.Index'  (duration: 139.864526ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-27T02:01:41.547596Z","caller":"traceutil/trace.go:171","msg":"trace[1963079218] transaction","detail":"{read_only:false; response_revision:1538; number_of_response:1; }","duration":"279.429845ms","start":"2024-09-27T02:01:41.268151Z","end":"2024-09-27T02:01:41.547581Z","steps":["trace[1963079218] 'process raft request'  (duration: 139.48051ms)","trace[1963079218] 'compare'  (duration: 138.937824ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-27T02:01:41.547720Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"218.667681ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-27T02:01:41.548185Z","caller":"traceutil/trace.go:171","msg":"trace[843855701] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1538; }","duration":"219.137485ms","start":"2024-09-27T02:01:41.329039Z","end":"2024-09-27T02:01:41.548176Z","steps":["trace[843855701] 'agreement among raft nodes before linearized reading'  (duration: 218.580664ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T02:01:41.821613Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"171.314102ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10378706081040202371 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/no-preload-521072\" mod_revision:1531 > success:<request_put:<key:\"/registry/leases/kube-node-lease/no-preload-521072\" value_size:499 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/no-preload-521072\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-09-27T02:01:41.821791Z","caller":"traceutil/trace.go:171","msg":"trace[1684617281] transaction","detail":"{read_only:false; response_revision:1539; number_of_response:1; }","duration":"184.973908ms","start":"2024-09-27T02:01:41.636800Z","end":"2024-09-27T02:01:41.821774Z","steps":["trace[1684617281] 'process raft request'  (duration: 13.433321ms)","trace[1684617281] 'compare'  (duration: 171.229056ms)"],"step_count":2}
	
	
	==> kernel <==
	 02:01:42 up 20 min,  0 users,  load average: 0.04, 0.14, 0.10
	Linux no-preload-521072 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef] <==
	W0927 01:57:18.131031       1 handler_proxy.go:99] no RequestInfo found in the context
	E0927 01:57:18.131284       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0927 01:57:18.132328       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0927 01:57:18.132389       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0927 01:58:18.133209       1 handler_proxy.go:99] no RequestInfo found in the context
	E0927 01:58:18.133373       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0927 01:58:18.133218       1 handler_proxy.go:99] no RequestInfo found in the context
	E0927 01:58:18.133518       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0927 01:58:18.134717       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0927 01:58:18.134787       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0927 02:00:18.135088       1 handler_proxy.go:99] no RequestInfo found in the context
	E0927 02:00:18.135191       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0927 02:00:18.135262       1 handler_proxy.go:99] no RequestInfo found in the context
	E0927 02:00:18.135329       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0927 02:00:18.136375       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0927 02:00:18.136443       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647] <==
	E0927 01:56:22.830270       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 01:56:23.327745       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0927 01:56:52.837518       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 01:56:53.335451       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0927 01:57:22.846051       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 01:57:23.343401       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0927 01:57:52.852576       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 01:57:53.351245       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0927 01:58:05.369903       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-521072"
	E0927 01:58:22.866434       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 01:58:23.359003       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0927 01:58:44.876087       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="282.764µs"
	E0927 01:58:52.872424       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 01:58:53.366638       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0927 01:58:56.878335       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="164.814µs"
	E0927 01:59:22.881517       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 01:59:23.374202       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0927 01:59:52.888763       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 01:59:53.381951       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0927 02:00:22.899595       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 02:00:23.390231       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0927 02:00:52.906315       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 02:00:53.397691       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0927 02:01:22.912557       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 02:01:23.406773       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0927 01:42:18.546176       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0927 01:42:18.554753       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.246"]
	E0927 01:42:18.554979       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0927 01:42:18.591620       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0927 01:42:18.591775       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0927 01:42:18.591818       1 server_linux.go:169] "Using iptables Proxier"
	I0927 01:42:18.594391       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0927 01:42:18.594803       1 server.go:483] "Version info" version="v1.31.1"
	I0927 01:42:18.594851       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 01:42:18.596622       1 config.go:199] "Starting service config controller"
	I0927 01:42:18.596755       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0927 01:42:18.596808       1 config.go:105] "Starting endpoint slice config controller"
	I0927 01:42:18.596826       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0927 01:42:18.597297       1 config.go:328] "Starting node config controller"
	I0927 01:42:18.597760       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0927 01:42:18.697461       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0927 01:42:18.697601       1 shared_informer.go:320] Caches are synced for service config
	I0927 01:42:18.699030       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05] <==
	I0927 01:42:14.776806       1 serving.go:386] Generated self-signed cert in-memory
	W0927 01:42:17.114966       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0927 01:42:17.115081       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0927 01:42:17.115094       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0927 01:42:17.115102       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0927 01:42:17.155844       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0927 01:42:17.155891       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 01:42:17.161078       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0927 01:42:17.161500       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0927 01:42:17.161896       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0927 01:42:17.162156       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0927 01:42:17.263186       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 27 02:00:26 no-preload-521072 kubelet[1371]: E0927 02:00:26.858061    1371 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-cc9pp" podUID="a840ca52-d2b8-47a5-b379-30504658e0d0"
	Sep 27 02:00:33 no-preload-521072 kubelet[1371]: E0927 02:00:33.138496    1371 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402433138248074,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 02:00:33 no-preload-521072 kubelet[1371]: E0927 02:00:33.138602    1371 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402433138248074,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 02:00:38 no-preload-521072 kubelet[1371]: E0927 02:00:38.857703    1371 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-cc9pp" podUID="a840ca52-d2b8-47a5-b379-30504658e0d0"
	Sep 27 02:00:43 no-preload-521072 kubelet[1371]: E0927 02:00:43.139913    1371 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402443139480905,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 02:00:43 no-preload-521072 kubelet[1371]: E0927 02:00:43.139967    1371 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402443139480905,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 02:00:50 no-preload-521072 kubelet[1371]: E0927 02:00:50.859010    1371 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-cc9pp" podUID="a840ca52-d2b8-47a5-b379-30504658e0d0"
	Sep 27 02:00:53 no-preload-521072 kubelet[1371]: E0927 02:00:53.141605    1371 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402453140918813,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 02:00:53 no-preload-521072 kubelet[1371]: E0927 02:00:53.141937    1371 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402453140918813,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 02:01:03 no-preload-521072 kubelet[1371]: E0927 02:01:03.144068    1371 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402463143634435,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 02:01:03 no-preload-521072 kubelet[1371]: E0927 02:01:03.144362    1371 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402463143634435,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 02:01:05 no-preload-521072 kubelet[1371]: E0927 02:01:05.857892    1371 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-cc9pp" podUID="a840ca52-d2b8-47a5-b379-30504658e0d0"
	Sep 27 02:01:12 no-preload-521072 kubelet[1371]: E0927 02:01:12.882397    1371 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 27 02:01:12 no-preload-521072 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 27 02:01:12 no-preload-521072 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 27 02:01:12 no-preload-521072 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 27 02:01:12 no-preload-521072 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 27 02:01:13 no-preload-521072 kubelet[1371]: E0927 02:01:13.146907    1371 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402473146483897,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 02:01:13 no-preload-521072 kubelet[1371]: E0927 02:01:13.146940    1371 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402473146483897,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 02:01:20 no-preload-521072 kubelet[1371]: E0927 02:01:20.857629    1371 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-cc9pp" podUID="a840ca52-d2b8-47a5-b379-30504658e0d0"
	Sep 27 02:01:23 no-preload-521072 kubelet[1371]: E0927 02:01:23.149178    1371 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402483148613475,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 02:01:23 no-preload-521072 kubelet[1371]: E0927 02:01:23.149272    1371 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402483148613475,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 02:01:32 no-preload-521072 kubelet[1371]: E0927 02:01:32.859453    1371 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-cc9pp" podUID="a840ca52-d2b8-47a5-b379-30504658e0d0"
	Sep 27 02:01:33 no-preload-521072 kubelet[1371]: E0927 02:01:33.151822    1371 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402493151073942,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 02:01:33 no-preload-521072 kubelet[1371]: E0927 02:01:33.151934    1371 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402493151073942,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c] <==
	I0927 01:42:18.478720       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0927 01:42:48.482506       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f] <==
	I0927 01:42:49.151762       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0927 01:42:49.162005       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0927 01:42:49.162080       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0927 01:43:06.562249       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0927 01:43:06.562385       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-521072_eba5b60f-c2e6-43e6-bc1c-a3ec146ac13a!
	I0927 01:43:06.565059       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f7c9c51c-2666-4847-92a6-a6408cdf07dd", APIVersion:"v1", ResourceVersion:"618", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-521072_eba5b60f-c2e6-43e6-bc1c-a3ec146ac13a became leader
	I0927 01:43:06.664485       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-521072_eba5b60f-c2e6-43e6-bc1c-a3ec146ac13a!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-521072 -n no-preload-521072
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-521072 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-cc9pp
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-521072 describe pod metrics-server-6867b74b74-cc9pp
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-521072 describe pod metrics-server-6867b74b74-cc9pp: exit status 1 (65.785798ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-cc9pp" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-521072 describe pod metrics-server-6867b74b74-cc9pp: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (354.05s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (464.88s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-368295 -n default-k8s-diff-port-368295
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-09-27 02:03:38.066979014 +0000 UTC m=+6534.252587257
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-368295 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-368295 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.628µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-368295 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-368295 -n default-k8s-diff-port-368295
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-368295 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-368295 logs -n 25: (1.307302348s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-782846 sudo journalctl                       | auto-782846           | jenkins | v1.34.0 | 27 Sep 24 02:03 UTC | 27 Sep 24 02:03 UTC |
	|         | -xeu kubelet --all --full                            |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-782846 sudo cat                              | auto-782846           | jenkins | v1.34.0 | 27 Sep 24 02:03 UTC | 27 Sep 24 02:03 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                       |         |         |                     |                     |
	| ssh     | -p auto-782846 sudo cat                              | auto-782846           | jenkins | v1.34.0 | 27 Sep 24 02:03 UTC | 27 Sep 24 02:03 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                       |         |         |                     |                     |
	| ssh     | -p auto-782846 sudo systemctl                        | auto-782846           | jenkins | v1.34.0 | 27 Sep 24 02:03 UTC |                     |
	|         | status docker --all --full                           |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-782846 sudo systemctl                        | auto-782846           | jenkins | v1.34.0 | 27 Sep 24 02:03 UTC | 27 Sep 24 02:03 UTC |
	|         | cat docker --no-pager                                |                       |         |         |                     |                     |
	| ssh     | -p auto-782846 sudo cat                              | auto-782846           | jenkins | v1.34.0 | 27 Sep 24 02:03 UTC | 27 Sep 24 02:03 UTC |
	|         | /etc/docker/daemon.json                              |                       |         |         |                     |                     |
	| ssh     | -p auto-782846 sudo docker                           | auto-782846           | jenkins | v1.34.0 | 27 Sep 24 02:03 UTC |                     |
	|         | system info                                          |                       |         |         |                     |                     |
	| ssh     | -p auto-782846 sudo systemctl                        | auto-782846           | jenkins | v1.34.0 | 27 Sep 24 02:03 UTC |                     |
	|         | status cri-docker --all --full                       |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-782846 sudo systemctl                        | auto-782846           | jenkins | v1.34.0 | 27 Sep 24 02:03 UTC | 27 Sep 24 02:03 UTC |
	|         | cat cri-docker --no-pager                            |                       |         |         |                     |                     |
	| ssh     | -p auto-782846 sudo cat                              | auto-782846           | jenkins | v1.34.0 | 27 Sep 24 02:03 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                       |         |         |                     |                     |
	| ssh     | -p auto-782846 sudo cat                              | auto-782846           | jenkins | v1.34.0 | 27 Sep 24 02:03 UTC | 27 Sep 24 02:03 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                       |         |         |                     |                     |
	| ssh     | -p auto-782846 sudo                                  | auto-782846           | jenkins | v1.34.0 | 27 Sep 24 02:03 UTC | 27 Sep 24 02:03 UTC |
	|         | cri-dockerd --version                                |                       |         |         |                     |                     |
	| ssh     | -p auto-782846 sudo systemctl                        | auto-782846           | jenkins | v1.34.0 | 27 Sep 24 02:03 UTC |                     |
	|         | status containerd --all --full                       |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-782846 sudo systemctl                        | auto-782846           | jenkins | v1.34.0 | 27 Sep 24 02:03 UTC | 27 Sep 24 02:03 UTC |
	|         | cat containerd --no-pager                            |                       |         |         |                     |                     |
	| delete  | -p embed-certs-245911                                | embed-certs-245911    | jenkins | v1.34.0 | 27 Sep 24 02:03 UTC | 27 Sep 24 02:03 UTC |
	| ssh     | -p auto-782846 sudo cat                              | auto-782846           | jenkins | v1.34.0 | 27 Sep 24 02:03 UTC | 27 Sep 24 02:03 UTC |
	|         | /lib/systemd/system/containerd.service               |                       |         |         |                     |                     |
	| ssh     | -p auto-782846 sudo cat                              | auto-782846           | jenkins | v1.34.0 | 27 Sep 24 02:03 UTC | 27 Sep 24 02:03 UTC |
	|         | /etc/containerd/config.toml                          |                       |         |         |                     |                     |
	| ssh     | -p auto-782846 sudo containerd                       | auto-782846           | jenkins | v1.34.0 | 27 Sep 24 02:03 UTC | 27 Sep 24 02:03 UTC |
	|         | config dump                                          |                       |         |         |                     |                     |
	| start   | -p calico-782846 --memory=3072                       | calico-782846         | jenkins | v1.34.0 | 27 Sep 24 02:03 UTC |                     |
	|         | --alsologtostderr --wait=true                        |                       |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                       |         |         |                     |                     |
	|         | --cni=calico --driver=kvm2                           |                       |         |         |                     |                     |
	|         | --container-runtime=crio                             |                       |         |         |                     |                     |
	| ssh     | -p auto-782846 sudo systemctl                        | auto-782846           | jenkins | v1.34.0 | 27 Sep 24 02:03 UTC | 27 Sep 24 02:03 UTC |
	|         | status crio --all --full                             |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-782846 sudo systemctl                        | auto-782846           | jenkins | v1.34.0 | 27 Sep 24 02:03 UTC | 27 Sep 24 02:03 UTC |
	|         | cat crio --no-pager                                  |                       |         |         |                     |                     |
	| ssh     | -p auto-782846 sudo find                             | auto-782846           | jenkins | v1.34.0 | 27 Sep 24 02:03 UTC | 27 Sep 24 02:03 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                       |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                       |         |         |                     |                     |
	| ssh     | -p auto-782846 sudo crio                             | auto-782846           | jenkins | v1.34.0 | 27 Sep 24 02:03 UTC | 27 Sep 24 02:03 UTC |
	|         | config                                               |                       |         |         |                     |                     |
	| delete  | -p auto-782846                                       | auto-782846           | jenkins | v1.34.0 | 27 Sep 24 02:03 UTC | 27 Sep 24 02:03 UTC |
	| start   | -p custom-flannel-782846                             | custom-flannel-782846 | jenkins | v1.34.0 | 27 Sep 24 02:03 UTC |                     |
	|         | --memory=3072 --alsologtostderr                      |                       |         |         |                     |                     |
	|         | --wait=true --wait-timeout=15m                       |                       |         |         |                     |                     |
	|         | --cni=testdata/kube-flannel.yaml                     |                       |         |         |                     |                     |
	|         | --driver=kvm2                                        |                       |         |         |                     |                     |
	|         | --container-runtime=crio                             |                       |         |         |                     |                     |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 02:03:21
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 02:03:21.903034   79364 out.go:345] Setting OutFile to fd 1 ...
	I0927 02:03:21.903171   79364 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 02:03:21.903182   79364 out.go:358] Setting ErrFile to fd 2...
	I0927 02:03:21.903196   79364 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 02:03:21.903554   79364 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-14935/.minikube/bin
	I0927 02:03:21.904411   79364 out.go:352] Setting JSON to false
	I0927 02:03:21.905944   79364 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9947,"bootTime":1727392655,"procs":314,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 02:03:21.906087   79364 start.go:139] virtualization: kvm guest
	I0927 02:03:21.908232   79364 out.go:177] * [custom-flannel-782846] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0927 02:03:21.909620   79364 notify.go:220] Checking for updates...
	I0927 02:03:21.909652   79364 out.go:177]   - MINIKUBE_LOCATION=19711
	I0927 02:03:21.910978   79364 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 02:03:21.912327   79364 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 02:03:21.913651   79364 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-14935/.minikube
	I0927 02:03:21.914861   79364 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0927 02:03:21.916116   79364 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 02:03:21.917765   79364 config.go:182] Loaded profile config "calico-782846": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 02:03:21.917882   79364 config.go:182] Loaded profile config "default-k8s-diff-port-368295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 02:03:21.917997   79364 config.go:182] Loaded profile config "kindnet-782846": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 02:03:21.918094   79364 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 02:03:21.959089   79364 out.go:177] * Using the kvm2 driver based on user configuration
	I0927 02:03:21.960214   79364 start.go:297] selected driver: kvm2
	I0927 02:03:21.960227   79364 start.go:901] validating driver "kvm2" against <nil>
	I0927 02:03:21.960239   79364 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 02:03:21.960996   79364 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 02:03:21.961065   79364 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19711-14935/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0927 02:03:21.978556   79364 install.go:137] /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I0927 02:03:21.978628   79364 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 02:03:21.978946   79364 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 02:03:21.978982   79364 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0927 02:03:21.978995   79364 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0927 02:03:21.979076   79364 start.go:340] cluster config:
	{Name:custom-flannel-782846 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-782846 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 02:03:21.979193   79364 iso.go:125] acquiring lock: {Name:mkc202a14fbe20838e31e7efc444c4f65351f9ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 02:03:21.980980   79364 out.go:177] * Starting "custom-flannel-782846" primary control-plane node in "custom-flannel-782846" cluster
	I0927 02:03:19.639715   77529 main.go:141] libmachine: (kindnet-782846) Reserved static IP address: 192.168.72.64
	I0927 02:03:19.639747   77529 main.go:141] libmachine: (kindnet-782846) Waiting for SSH to be available...
	I0927 02:03:19.639755   77529 main.go:141] libmachine: (kindnet-782846) DBG | Getting to WaitForSSH function...
	I0927 02:03:19.642739   77529 main.go:141] libmachine: (kindnet-782846) DBG | domain kindnet-782846 has defined MAC address 52:54:00:f4:d8:fc in network mk-kindnet-782846
	I0927 02:03:19.643372   77529 main.go:141] libmachine: (kindnet-782846) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:d8:fc", ip: ""} in network mk-kindnet-782846: {Iface:virbr1 ExpiryTime:2024-09-27 03:03:09 +0000 UTC Type:0 Mac:52:54:00:f4:d8:fc Iaid: IPaddr:192.168.72.64 Prefix:24 Hostname:kindnet-782846 Clientid:01:52:54:00:f4:d8:fc}
	I0927 02:03:19.643399   77529 main.go:141] libmachine: (kindnet-782846) DBG | domain kindnet-782846 has defined IP address 192.168.72.64 and MAC address 52:54:00:f4:d8:fc in network mk-kindnet-782846
	I0927 02:03:19.643613   77529 main.go:141] libmachine: (kindnet-782846) DBG | Using SSH client type: external
	I0927 02:03:19.643641   77529 main.go:141] libmachine: (kindnet-782846) DBG | Using SSH private key: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/kindnet-782846/id_rsa (-rw-------)
	I0927 02:03:19.643674   77529 main.go:141] libmachine: (kindnet-782846) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.64 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19711-14935/.minikube/machines/kindnet-782846/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 02:03:19.643691   77529 main.go:141] libmachine: (kindnet-782846) DBG | About to run SSH command:
	I0927 02:03:19.643704   77529 main.go:141] libmachine: (kindnet-782846) DBG | exit 0
	I0927 02:03:19.779635   77529 main.go:141] libmachine: (kindnet-782846) DBG | SSH cmd err, output: <nil>: 
	I0927 02:03:19.779924   77529 main.go:141] libmachine: (kindnet-782846) KVM machine creation complete!
	I0927 02:03:19.780258   77529 main.go:141] libmachine: (kindnet-782846) Calling .GetConfigRaw
	I0927 02:03:19.780875   77529 main.go:141] libmachine: (kindnet-782846) Calling .DriverName
	I0927 02:03:19.781043   77529 main.go:141] libmachine: (kindnet-782846) Calling .DriverName
	I0927 02:03:19.781212   77529 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0927 02:03:19.781267   77529 main.go:141] libmachine: (kindnet-782846) Calling .GetState
	I0927 02:03:19.782892   77529 main.go:141] libmachine: Detecting operating system of created instance...
	I0927 02:03:19.782904   77529 main.go:141] libmachine: Waiting for SSH to be available...
	I0927 02:03:19.782909   77529 main.go:141] libmachine: Getting to WaitForSSH function...
	I0927 02:03:19.782914   77529 main.go:141] libmachine: (kindnet-782846) Calling .GetSSHHostname
	I0927 02:03:19.785602   77529 main.go:141] libmachine: (kindnet-782846) DBG | domain kindnet-782846 has defined MAC address 52:54:00:f4:d8:fc in network mk-kindnet-782846
	I0927 02:03:19.785954   77529 main.go:141] libmachine: (kindnet-782846) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:d8:fc", ip: ""} in network mk-kindnet-782846: {Iface:virbr1 ExpiryTime:2024-09-27 03:03:09 +0000 UTC Type:0 Mac:52:54:00:f4:d8:fc Iaid: IPaddr:192.168.72.64 Prefix:24 Hostname:kindnet-782846 Clientid:01:52:54:00:f4:d8:fc}
	I0927 02:03:19.785981   77529 main.go:141] libmachine: (kindnet-782846) DBG | domain kindnet-782846 has defined IP address 192.168.72.64 and MAC address 52:54:00:f4:d8:fc in network mk-kindnet-782846
	I0927 02:03:19.786175   77529 main.go:141] libmachine: (kindnet-782846) Calling .GetSSHPort
	I0927 02:03:19.786400   77529 main.go:141] libmachine: (kindnet-782846) Calling .GetSSHKeyPath
	I0927 02:03:19.786581   77529 main.go:141] libmachine: (kindnet-782846) Calling .GetSSHKeyPath
	I0927 02:03:19.786824   77529 main.go:141] libmachine: (kindnet-782846) Calling .GetSSHUsername
	I0927 02:03:19.786996   77529 main.go:141] libmachine: Using SSH client type: native
	I0927 02:03:19.787206   77529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.64 22 <nil> <nil>}
	I0927 02:03:19.787219   77529 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0927 02:03:19.902898   77529 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 02:03:19.902976   77529 main.go:141] libmachine: Detecting the provisioner...
	I0927 02:03:19.902990   77529 main.go:141] libmachine: (kindnet-782846) Calling .GetSSHHostname
	I0927 02:03:19.905724   77529 main.go:141] libmachine: (kindnet-782846) DBG | domain kindnet-782846 has defined MAC address 52:54:00:f4:d8:fc in network mk-kindnet-782846
	I0927 02:03:19.906115   77529 main.go:141] libmachine: (kindnet-782846) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:d8:fc", ip: ""} in network mk-kindnet-782846: {Iface:virbr1 ExpiryTime:2024-09-27 03:03:09 +0000 UTC Type:0 Mac:52:54:00:f4:d8:fc Iaid: IPaddr:192.168.72.64 Prefix:24 Hostname:kindnet-782846 Clientid:01:52:54:00:f4:d8:fc}
	I0927 02:03:19.906163   77529 main.go:141] libmachine: (kindnet-782846) DBG | domain kindnet-782846 has defined IP address 192.168.72.64 and MAC address 52:54:00:f4:d8:fc in network mk-kindnet-782846
	I0927 02:03:19.906324   77529 main.go:141] libmachine: (kindnet-782846) Calling .GetSSHPort
	I0927 02:03:19.906524   77529 main.go:141] libmachine: (kindnet-782846) Calling .GetSSHKeyPath
	I0927 02:03:19.906691   77529 main.go:141] libmachine: (kindnet-782846) Calling .GetSSHKeyPath
	I0927 02:03:19.906845   77529 main.go:141] libmachine: (kindnet-782846) Calling .GetSSHUsername
	I0927 02:03:19.907067   77529 main.go:141] libmachine: Using SSH client type: native
	I0927 02:03:19.907247   77529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.64 22 <nil> <nil>}
	I0927 02:03:19.907258   77529 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0927 02:03:20.020475   77529 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0927 02:03:20.020565   77529 main.go:141] libmachine: found compatible host: buildroot
	I0927 02:03:20.020578   77529 main.go:141] libmachine: Provisioning with buildroot...
	I0927 02:03:20.020592   77529 main.go:141] libmachine: (kindnet-782846) Calling .GetMachineName
	I0927 02:03:20.020825   77529 buildroot.go:166] provisioning hostname "kindnet-782846"
	I0927 02:03:20.020854   77529 main.go:141] libmachine: (kindnet-782846) Calling .GetMachineName
	I0927 02:03:20.021063   77529 main.go:141] libmachine: (kindnet-782846) Calling .GetSSHHostname
	I0927 02:03:20.024278   77529 main.go:141] libmachine: (kindnet-782846) DBG | domain kindnet-782846 has defined MAC address 52:54:00:f4:d8:fc in network mk-kindnet-782846
	I0927 02:03:20.024702   77529 main.go:141] libmachine: (kindnet-782846) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:d8:fc", ip: ""} in network mk-kindnet-782846: {Iface:virbr1 ExpiryTime:2024-09-27 03:03:09 +0000 UTC Type:0 Mac:52:54:00:f4:d8:fc Iaid: IPaddr:192.168.72.64 Prefix:24 Hostname:kindnet-782846 Clientid:01:52:54:00:f4:d8:fc}
	I0927 02:03:20.024730   77529 main.go:141] libmachine: (kindnet-782846) DBG | domain kindnet-782846 has defined IP address 192.168.72.64 and MAC address 52:54:00:f4:d8:fc in network mk-kindnet-782846
	I0927 02:03:20.024898   77529 main.go:141] libmachine: (kindnet-782846) Calling .GetSSHPort
	I0927 02:03:20.025240   77529 main.go:141] libmachine: (kindnet-782846) Calling .GetSSHKeyPath
	I0927 02:03:20.025434   77529 main.go:141] libmachine: (kindnet-782846) Calling .GetSSHKeyPath
	I0927 02:03:20.025612   77529 main.go:141] libmachine: (kindnet-782846) Calling .GetSSHUsername
	I0927 02:03:20.025783   77529 main.go:141] libmachine: Using SSH client type: native
	I0927 02:03:20.025967   77529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.64 22 <nil> <nil>}
	I0927 02:03:20.025982   77529 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-782846 && echo "kindnet-782846" | sudo tee /etc/hostname
	I0927 02:03:20.158069   77529 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-782846
	
	I0927 02:03:20.158104   77529 main.go:141] libmachine: (kindnet-782846) Calling .GetSSHHostname
	I0927 02:03:20.161332   77529 main.go:141] libmachine: (kindnet-782846) DBG | domain kindnet-782846 has defined MAC address 52:54:00:f4:d8:fc in network mk-kindnet-782846
	I0927 02:03:20.161736   77529 main.go:141] libmachine: (kindnet-782846) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:d8:fc", ip: ""} in network mk-kindnet-782846: {Iface:virbr1 ExpiryTime:2024-09-27 03:03:09 +0000 UTC Type:0 Mac:52:54:00:f4:d8:fc Iaid: IPaddr:192.168.72.64 Prefix:24 Hostname:kindnet-782846 Clientid:01:52:54:00:f4:d8:fc}
	I0927 02:03:20.161763   77529 main.go:141] libmachine: (kindnet-782846) DBG | domain kindnet-782846 has defined IP address 192.168.72.64 and MAC address 52:54:00:f4:d8:fc in network mk-kindnet-782846
	I0927 02:03:20.161912   77529 main.go:141] libmachine: (kindnet-782846) Calling .GetSSHPort
	I0927 02:03:20.162073   77529 main.go:141] libmachine: (kindnet-782846) Calling .GetSSHKeyPath
	I0927 02:03:20.162197   77529 main.go:141] libmachine: (kindnet-782846) Calling .GetSSHKeyPath
	I0927 02:03:20.162313   77529 main.go:141] libmachine: (kindnet-782846) Calling .GetSSHUsername
	I0927 02:03:20.162522   77529 main.go:141] libmachine: Using SSH client type: native
	I0927 02:03:20.162729   77529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.64 22 <nil> <nil>}
	I0927 02:03:20.162746   77529 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-782846' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-782846/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-782846' | sudo tee -a /etc/hosts; 
				fi
			fi
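
The two SSH commands above are the hostname-provisioning step: set the transient and persistent hostname, then make sure /etc/hosts carries a matching 127.0.1.1 entry. A minimal Go sketch that rebuilds the same command strings for an arbitrary machine name (illustrative only, not minikube's internal API):

package main

import "fmt"

// provisionCommands mirrors the two shell commands visible in the log above.
func provisionCommands(name string) []string {
	return []string{
		fmt.Sprintf(`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname`, name),
		fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, name),
	}
}

func main() {
	for _, c := range provisionCommands("kindnet-782846") {
		fmt.Println(c)
	}
}
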
	I0927 02:03:20.286655   77529 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 02:03:20.286687   77529 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19711-14935/.minikube CaCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19711-14935/.minikube}
	I0927 02:03:20.286728   77529 buildroot.go:174] setting up certificates
	I0927 02:03:20.286742   77529 provision.go:84] configureAuth start
	I0927 02:03:20.286755   77529 main.go:141] libmachine: (kindnet-782846) Calling .GetMachineName
	I0927 02:03:20.287003   77529 main.go:141] libmachine: (kindnet-782846) Calling .GetIP
	I0927 02:03:20.292419   77529 main.go:141] libmachine: (kindnet-782846) DBG | domain kindnet-782846 has defined MAC address 52:54:00:f4:d8:fc in network mk-kindnet-782846
	I0927 02:03:20.292856   77529 main.go:141] libmachine: (kindnet-782846) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:d8:fc", ip: ""} in network mk-kindnet-782846: {Iface:virbr1 ExpiryTime:2024-09-27 03:03:09 +0000 UTC Type:0 Mac:52:54:00:f4:d8:fc Iaid: IPaddr:192.168.72.64 Prefix:24 Hostname:kindnet-782846 Clientid:01:52:54:00:f4:d8:fc}
	I0927 02:03:20.292885   77529 main.go:141] libmachine: (kindnet-782846) DBG | domain kindnet-782846 has defined IP address 192.168.72.64 and MAC address 52:54:00:f4:d8:fc in network mk-kindnet-782846
	I0927 02:03:20.293078   77529 main.go:141] libmachine: (kindnet-782846) Calling .GetSSHHostname
	I0927 02:03:20.295647   77529 main.go:141] libmachine: (kindnet-782846) DBG | domain kindnet-782846 has defined MAC address 52:54:00:f4:d8:fc in network mk-kindnet-782846
	I0927 02:03:20.296059   77529 main.go:141] libmachine: (kindnet-782846) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:d8:fc", ip: ""} in network mk-kindnet-782846: {Iface:virbr1 ExpiryTime:2024-09-27 03:03:09 +0000 UTC Type:0 Mac:52:54:00:f4:d8:fc Iaid: IPaddr:192.168.72.64 Prefix:24 Hostname:kindnet-782846 Clientid:01:52:54:00:f4:d8:fc}
	I0927 02:03:20.296088   77529 main.go:141] libmachine: (kindnet-782846) DBG | domain kindnet-782846 has defined IP address 192.168.72.64 and MAC address 52:54:00:f4:d8:fc in network mk-kindnet-782846
	I0927 02:03:20.296179   77529 provision.go:143] copyHostCerts
	I0927 02:03:20.296230   77529 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem, removing ...
	I0927 02:03:20.296241   77529 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem
	I0927 02:03:20.296300   77529 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem (1078 bytes)
	I0927 02:03:20.296379   77529 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem, removing ...
	I0927 02:03:20.296387   77529 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem
	I0927 02:03:20.296410   77529 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem (1123 bytes)
	I0927 02:03:20.296477   77529 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem, removing ...
	I0927 02:03:20.296484   77529 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem
	I0927 02:03:20.296504   77529 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem (1675 bytes)
	I0927 02:03:20.296552   77529 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem org=jenkins.kindnet-782846 san=[127.0.0.1 192.168.72.64 kindnet-782846 localhost minikube]
	I0927 02:03:20.687288   77529 provision.go:177] copyRemoteCerts
	I0927 02:03:20.687384   77529 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 02:03:20.687406   77529 main.go:141] libmachine: (kindnet-782846) Calling .GetSSHHostname
	I0927 02:03:20.690542   77529 main.go:141] libmachine: (kindnet-782846) DBG | domain kindnet-782846 has defined MAC address 52:54:00:f4:d8:fc in network mk-kindnet-782846
	I0927 02:03:20.690937   77529 main.go:141] libmachine: (kindnet-782846) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:d8:fc", ip: ""} in network mk-kindnet-782846: {Iface:virbr1 ExpiryTime:2024-09-27 03:03:09 +0000 UTC Type:0 Mac:52:54:00:f4:d8:fc Iaid: IPaddr:192.168.72.64 Prefix:24 Hostname:kindnet-782846 Clientid:01:52:54:00:f4:d8:fc}
	I0927 02:03:20.690952   77529 main.go:141] libmachine: (kindnet-782846) DBG | domain kindnet-782846 has defined IP address 192.168.72.64 and MAC address 52:54:00:f4:d8:fc in network mk-kindnet-782846
	I0927 02:03:20.691141   77529 main.go:141] libmachine: (kindnet-782846) Calling .GetSSHPort
	I0927 02:03:20.691378   77529 main.go:141] libmachine: (kindnet-782846) Calling .GetSSHKeyPath
	I0927 02:03:20.691534   77529 main.go:141] libmachine: (kindnet-782846) Calling .GetSSHUsername
	I0927 02:03:20.691655   77529 sshutil.go:53] new ssh client: &{IP:192.168.72.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/kindnet-782846/id_rsa Username:docker}
	I0927 02:03:20.780161   77529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0927 02:03:20.806469   77529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0927 02:03:20.834080   77529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0927 02:03:20.862575   77529 provision.go:87] duration metric: took 575.818714ms to configureAuth
	I0927 02:03:20.862599   77529 buildroot.go:189] setting minikube options for container-runtime
	I0927 02:03:20.862742   77529 config.go:182] Loaded profile config "kindnet-782846": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 02:03:20.862808   77529 main.go:141] libmachine: (kindnet-782846) Calling .GetSSHHostname
	I0927 02:03:20.865362   77529 main.go:141] libmachine: (kindnet-782846) DBG | domain kindnet-782846 has defined MAC address 52:54:00:f4:d8:fc in network mk-kindnet-782846
	I0927 02:03:20.865710   77529 main.go:141] libmachine: (kindnet-782846) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:d8:fc", ip: ""} in network mk-kindnet-782846: {Iface:virbr1 ExpiryTime:2024-09-27 03:03:09 +0000 UTC Type:0 Mac:52:54:00:f4:d8:fc Iaid: IPaddr:192.168.72.64 Prefix:24 Hostname:kindnet-782846 Clientid:01:52:54:00:f4:d8:fc}
	I0927 02:03:20.865737   77529 main.go:141] libmachine: (kindnet-782846) DBG | domain kindnet-782846 has defined IP address 192.168.72.64 and MAC address 52:54:00:f4:d8:fc in network mk-kindnet-782846
	I0927 02:03:20.865841   77529 main.go:141] libmachine: (kindnet-782846) Calling .GetSSHPort
	I0927 02:03:20.866056   77529 main.go:141] libmachine: (kindnet-782846) Calling .GetSSHKeyPath
	I0927 02:03:20.866212   77529 main.go:141] libmachine: (kindnet-782846) Calling .GetSSHKeyPath
	I0927 02:03:20.866373   77529 main.go:141] libmachine: (kindnet-782846) Calling .GetSSHUsername
	I0927 02:03:20.866522   77529 main.go:141] libmachine: Using SSH client type: native
	I0927 02:03:20.866718   77529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.64 22 <nil> <nil>}
	I0927 02:03:20.866733   77529 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 02:03:21.123791   77529 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 02:03:21.123816   77529 main.go:141] libmachine: Checking connection to Docker...
	I0927 02:03:21.123826   77529 main.go:141] libmachine: (kindnet-782846) Calling .GetURL
	I0927 02:03:21.125022   77529 main.go:141] libmachine: (kindnet-782846) DBG | Using libvirt version 6000000
	I0927 02:03:21.127093   77529 main.go:141] libmachine: (kindnet-782846) DBG | domain kindnet-782846 has defined MAC address 52:54:00:f4:d8:fc in network mk-kindnet-782846
	I0927 02:03:21.127760   77529 main.go:141] libmachine: (kindnet-782846) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:d8:fc", ip: ""} in network mk-kindnet-782846: {Iface:virbr1 ExpiryTime:2024-09-27 03:03:09 +0000 UTC Type:0 Mac:52:54:00:f4:d8:fc Iaid: IPaddr:192.168.72.64 Prefix:24 Hostname:kindnet-782846 Clientid:01:52:54:00:f4:d8:fc}
	I0927 02:03:21.127786   77529 main.go:141] libmachine: (kindnet-782846) DBG | domain kindnet-782846 has defined IP address 192.168.72.64 and MAC address 52:54:00:f4:d8:fc in network mk-kindnet-782846
	I0927 02:03:21.128022   77529 main.go:141] libmachine: Docker is up and running!
	I0927 02:03:21.128039   77529 main.go:141] libmachine: Reticulating splines...
	I0927 02:03:21.128050   77529 client.go:171] duration metric: took 26.763952648s to LocalClient.Create
	I0927 02:03:21.128089   77529 start.go:167] duration metric: took 26.764027137s to libmachine.API.Create "kindnet-782846"
	I0927 02:03:21.128104   77529 start.go:293] postStartSetup for "kindnet-782846" (driver="kvm2")
	I0927 02:03:21.128121   77529 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 02:03:21.128147   77529 main.go:141] libmachine: (kindnet-782846) Calling .DriverName
	I0927 02:03:21.128524   77529 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 02:03:21.128547   77529 main.go:141] libmachine: (kindnet-782846) Calling .GetSSHHostname
	I0927 02:03:21.131341   77529 main.go:141] libmachine: (kindnet-782846) DBG | domain kindnet-782846 has defined MAC address 52:54:00:f4:d8:fc in network mk-kindnet-782846
	I0927 02:03:21.131672   77529 main.go:141] libmachine: (kindnet-782846) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:d8:fc", ip: ""} in network mk-kindnet-782846: {Iface:virbr1 ExpiryTime:2024-09-27 03:03:09 +0000 UTC Type:0 Mac:52:54:00:f4:d8:fc Iaid: IPaddr:192.168.72.64 Prefix:24 Hostname:kindnet-782846 Clientid:01:52:54:00:f4:d8:fc}
	I0927 02:03:21.131701   77529 main.go:141] libmachine: (kindnet-782846) DBG | domain kindnet-782846 has defined IP address 192.168.72.64 and MAC address 52:54:00:f4:d8:fc in network mk-kindnet-782846
	I0927 02:03:21.131869   77529 main.go:141] libmachine: (kindnet-782846) Calling .GetSSHPort
	I0927 02:03:21.132058   77529 main.go:141] libmachine: (kindnet-782846) Calling .GetSSHKeyPath
	I0927 02:03:21.132219   77529 main.go:141] libmachine: (kindnet-782846) Calling .GetSSHUsername
	I0927 02:03:21.132384   77529 sshutil.go:53] new ssh client: &{IP:192.168.72.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/kindnet-782846/id_rsa Username:docker}
	I0927 02:03:21.218033   77529 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 02:03:21.222292   77529 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 02:03:21.222316   77529 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/addons for local assets ...
	I0927 02:03:21.222375   77529 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/files for local assets ...
	I0927 02:03:21.222472   77529 filesync.go:149] local asset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> 221382.pem in /etc/ssl/certs
	I0927 02:03:21.222571   77529 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 02:03:21.232378   77529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /etc/ssl/certs/221382.pem (1708 bytes)
	I0927 02:03:21.257499   77529 start.go:296] duration metric: took 129.376871ms for postStartSetup
	I0927 02:03:21.257553   77529 main.go:141] libmachine: (kindnet-782846) Calling .GetConfigRaw
	I0927 02:03:21.258110   77529 main.go:141] libmachine: (kindnet-782846) Calling .GetIP
	I0927 02:03:21.261068   77529 main.go:141] libmachine: (kindnet-782846) DBG | domain kindnet-782846 has defined MAC address 52:54:00:f4:d8:fc in network mk-kindnet-782846
	I0927 02:03:21.261434   77529 main.go:141] libmachine: (kindnet-782846) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:d8:fc", ip: ""} in network mk-kindnet-782846: {Iface:virbr1 ExpiryTime:2024-09-27 03:03:09 +0000 UTC Type:0 Mac:52:54:00:f4:d8:fc Iaid: IPaddr:192.168.72.64 Prefix:24 Hostname:kindnet-782846 Clientid:01:52:54:00:f4:d8:fc}
	I0927 02:03:21.261470   77529 main.go:141] libmachine: (kindnet-782846) DBG | domain kindnet-782846 has defined IP address 192.168.72.64 and MAC address 52:54:00:f4:d8:fc in network mk-kindnet-782846
	I0927 02:03:21.261723   77529 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kindnet-782846/config.json ...
	I0927 02:03:21.261967   77529 start.go:128] duration metric: took 26.916360737s to createHost
	I0927 02:03:21.261995   77529 main.go:141] libmachine: (kindnet-782846) Calling .GetSSHHostname
	I0927 02:03:21.264372   77529 main.go:141] libmachine: (kindnet-782846) DBG | domain kindnet-782846 has defined MAC address 52:54:00:f4:d8:fc in network mk-kindnet-782846
	I0927 02:03:21.264746   77529 main.go:141] libmachine: (kindnet-782846) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:d8:fc", ip: ""} in network mk-kindnet-782846: {Iface:virbr1 ExpiryTime:2024-09-27 03:03:09 +0000 UTC Type:0 Mac:52:54:00:f4:d8:fc Iaid: IPaddr:192.168.72.64 Prefix:24 Hostname:kindnet-782846 Clientid:01:52:54:00:f4:d8:fc}
	I0927 02:03:21.264774   77529 main.go:141] libmachine: (kindnet-782846) DBG | domain kindnet-782846 has defined IP address 192.168.72.64 and MAC address 52:54:00:f4:d8:fc in network mk-kindnet-782846
	I0927 02:03:21.265133   77529 main.go:141] libmachine: (kindnet-782846) Calling .GetSSHPort
	I0927 02:03:21.265339   77529 main.go:141] libmachine: (kindnet-782846) Calling .GetSSHKeyPath
	I0927 02:03:21.265524   77529 main.go:141] libmachine: (kindnet-782846) Calling .GetSSHKeyPath
	I0927 02:03:21.265699   77529 main.go:141] libmachine: (kindnet-782846) Calling .GetSSHUsername
	I0927 02:03:21.265896   77529 main.go:141] libmachine: Using SSH client type: native
	I0927 02:03:21.266101   77529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.64 22 <nil> <nil>}
	I0927 02:03:21.266116   77529 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 02:03:21.376199   77529 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727402601.354471674
	
	I0927 02:03:21.376220   77529 fix.go:216] guest clock: 1727402601.354471674
	I0927 02:03:21.376229   77529 fix.go:229] Guest: 2024-09-27 02:03:21.354471674 +0000 UTC Remote: 2024-09-27 02:03:21.261980103 +0000 UTC m=+27.021611434 (delta=92.491571ms)
	I0927 02:03:21.376275   77529 fix.go:200] guest clock delta is within tolerance: 92.491571ms
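
The guest clock check above compares date +%s.%N from the VM against the host-side timestamp and reports the delta (92.491571ms here) as within tolerance. A small Go sketch that recomputes that delta; the one-second tolerance is an assumption, since the real threshold is not shown in this excerpt:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Values taken from the log lines above.
	guest := time.Unix(1727402601, 354471674)                     // guest "date +%s.%N"
	host := time.Date(2024, 9, 27, 2, 3, 21, 261980103, time.UTC) // host-side timestamp
	delta := guest.Sub(host)
	const tolerance = time.Second // assumed tolerance for the example
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta < tolerance && delta > -tolerance)
}
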
	I0927 02:03:21.376286   77529 start.go:83] releasing machines lock for "kindnet-782846", held for 27.030753841s
	I0927 02:03:21.376312   77529 main.go:141] libmachine: (kindnet-782846) Calling .DriverName
	I0927 02:03:21.376598   77529 main.go:141] libmachine: (kindnet-782846) Calling .GetIP
	I0927 02:03:21.646725   77529 main.go:141] libmachine: (kindnet-782846) DBG | domain kindnet-782846 has defined MAC address 52:54:00:f4:d8:fc in network mk-kindnet-782846
	I0927 02:03:21.647081   77529 main.go:141] libmachine: (kindnet-782846) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:d8:fc", ip: ""} in network mk-kindnet-782846: {Iface:virbr1 ExpiryTime:2024-09-27 03:03:09 +0000 UTC Type:0 Mac:52:54:00:f4:d8:fc Iaid: IPaddr:192.168.72.64 Prefix:24 Hostname:kindnet-782846 Clientid:01:52:54:00:f4:d8:fc}
	I0927 02:03:21.647116   77529 main.go:141] libmachine: (kindnet-782846) DBG | domain kindnet-782846 has defined IP address 192.168.72.64 and MAC address 52:54:00:f4:d8:fc in network mk-kindnet-782846
	I0927 02:03:21.647292   77529 main.go:141] libmachine: (kindnet-782846) Calling .DriverName
	I0927 02:03:21.647873   77529 main.go:141] libmachine: (kindnet-782846) Calling .DriverName
	I0927 02:03:21.648036   77529 main.go:141] libmachine: (kindnet-782846) Calling .DriverName
	I0927 02:03:21.648117   77529 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 02:03:21.648168   77529 main.go:141] libmachine: (kindnet-782846) Calling .GetSSHHostname
	I0927 02:03:21.648236   77529 ssh_runner.go:195] Run: cat /version.json
	I0927 02:03:21.648261   77529 main.go:141] libmachine: (kindnet-782846) Calling .GetSSHHostname
	I0927 02:03:21.659383   77529 main.go:141] libmachine: (kindnet-782846) DBG | domain kindnet-782846 has defined MAC address 52:54:00:f4:d8:fc in network mk-kindnet-782846
	I0927 02:03:21.666150   77529 main.go:141] libmachine: (kindnet-782846) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:d8:fc", ip: ""} in network mk-kindnet-782846: {Iface:virbr1 ExpiryTime:2024-09-27 03:03:09 +0000 UTC Type:0 Mac:52:54:00:f4:d8:fc Iaid: IPaddr:192.168.72.64 Prefix:24 Hostname:kindnet-782846 Clientid:01:52:54:00:f4:d8:fc}
	I0927 02:03:21.666186   77529 main.go:141] libmachine: (kindnet-782846) DBG | domain kindnet-782846 has defined IP address 192.168.72.64 and MAC address 52:54:00:f4:d8:fc in network mk-kindnet-782846
	I0927 02:03:21.666212   77529 main.go:141] libmachine: (kindnet-782846) DBG | domain kindnet-782846 has defined MAC address 52:54:00:f4:d8:fc in network mk-kindnet-782846
	I0927 02:03:21.666539   77529 main.go:141] libmachine: (kindnet-782846) Calling .GetSSHPort
	I0927 02:03:21.666781   77529 main.go:141] libmachine: (kindnet-782846) Calling .GetSSHKeyPath
	I0927 02:03:21.666990   77529 main.go:141] libmachine: (kindnet-782846) Calling .GetSSHUsername
	I0927 02:03:21.667172   77529 sshutil.go:53] new ssh client: &{IP:192.168.72.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/kindnet-782846/id_rsa Username:docker}
	I0927 02:03:21.671587   77529 main.go:141] libmachine: (kindnet-782846) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:d8:fc", ip: ""} in network mk-kindnet-782846: {Iface:virbr1 ExpiryTime:2024-09-27 03:03:09 +0000 UTC Type:0 Mac:52:54:00:f4:d8:fc Iaid: IPaddr:192.168.72.64 Prefix:24 Hostname:kindnet-782846 Clientid:01:52:54:00:f4:d8:fc}
	I0927 02:03:21.671612   77529 main.go:141] libmachine: (kindnet-782846) DBG | domain kindnet-782846 has defined IP address 192.168.72.64 and MAC address 52:54:00:f4:d8:fc in network mk-kindnet-782846
	I0927 02:03:21.671812   77529 main.go:141] libmachine: (kindnet-782846) Calling .GetSSHPort
	I0927 02:03:21.672024   77529 main.go:141] libmachine: (kindnet-782846) Calling .GetSSHKeyPath
	I0927 02:03:21.672229   77529 main.go:141] libmachine: (kindnet-782846) Calling .GetSSHUsername
	I0927 02:03:21.672411   77529 sshutil.go:53] new ssh client: &{IP:192.168.72.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/kindnet-782846/id_rsa Username:docker}
	I0927 02:03:21.768986   77529 ssh_runner.go:195] Run: systemctl --version
	I0927 02:03:21.775466   77529 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 02:03:21.950851   77529 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 02:03:21.959903   77529 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 02:03:21.959964   77529 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 02:03:21.978938   77529 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0927 02:03:21.978958   77529 start.go:495] detecting cgroup driver to use...
	I0927 02:03:21.979011   77529 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 02:03:21.996211   77529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 02:03:22.010499   77529 docker.go:217] disabling cri-docker service (if available) ...
	I0927 02:03:22.010567   77529 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 02:03:22.024636   77529 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 02:03:22.038626   77529 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 02:03:22.164034   77529 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 02:03:22.325386   77529 docker.go:233] disabling docker service ...
	I0927 02:03:22.325459   77529 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 02:03:22.340812   77529 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 02:03:22.354294   77529 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 02:03:22.491621   77529 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 02:03:22.632462   77529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 02:03:22.650105   77529 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 02:03:22.672586   77529 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0927 02:03:22.672641   77529 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 02:03:22.685265   77529 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 02:03:22.685335   77529 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 02:03:22.696600   77529 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 02:03:22.707511   77529 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 02:03:22.718908   77529 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 02:03:22.730399   77529 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 02:03:22.742515   77529 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 02:03:22.762320   77529 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
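
The sed calls above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the registry.k8s.io/pause:3.10 pause image, the cgroupfs cgroup manager, a per-pod conmon cgroup, and an unprivileged-port sysctl. A Go sketch of the same kind of whole-line rewrite applied to an in-memory sample (the sample file content is an assumption for illustration):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Assumed starting content; only the rewrite pattern mirrors the log.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"
[crio.runtime]
cgroup_manager = "systemd"
`
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	fmt.Print(conf)
}
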
	I0927 02:03:22.776211   77529 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 02:03:22.789774   77529 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0927 02:03:22.789835   77529 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0927 02:03:22.805837   77529 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
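
As logged above, the sysctl probe for net.bridge.bridge-nf-call-iptables fails on a fresh guest, so br_netfilter is loaded with modprobe and IPv4 forwarding is switched on. A hedged Go sketch of that fallback, reusing the same commands the log shows:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Probe the bridge-netfilter sysctl; on failure, load the module instead.
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		fmt.Println("sysctl probe failed, loading br_netfilter:", err)
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			fmt.Fprintln(os.Stderr, "modprobe failed:", err)
			os.Exit(1)
		}
	}
	// Enable IPv4 forwarding, as the log does right after the fallback.
	if err := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
		fmt.Fprintln(os.Stderr, "enabling ip_forward failed:", err)
		os.Exit(1)
	}
	fmt.Println("bridge netfilter and ip_forward are in place")
}
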
	I0927 02:03:22.818429   77529 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 02:03:22.957550   77529 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0927 02:03:23.054620   77529 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 02:03:23.054701   77529 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 02:03:23.059531   77529 start.go:563] Will wait 60s for crictl version
	I0927 02:03:23.059585   77529 ssh_runner.go:195] Run: which crictl
	I0927 02:03:23.063689   77529 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 02:03:23.111041   77529 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0927 02:03:23.111127   77529 ssh_runner.go:195] Run: crio --version
	I0927 02:03:23.143626   77529 ssh_runner.go:195] Run: crio --version
	I0927 02:03:23.178564   77529 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0927 02:03:23.179837   77529 main.go:141] libmachine: (kindnet-782846) Calling .GetIP
	I0927 02:03:23.183110   77529 main.go:141] libmachine: (kindnet-782846) DBG | domain kindnet-782846 has defined MAC address 52:54:00:f4:d8:fc in network mk-kindnet-782846
	I0927 02:03:23.183590   77529 main.go:141] libmachine: (kindnet-782846) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:d8:fc", ip: ""} in network mk-kindnet-782846: {Iface:virbr1 ExpiryTime:2024-09-27 03:03:09 +0000 UTC Type:0 Mac:52:54:00:f4:d8:fc Iaid: IPaddr:192.168.72.64 Prefix:24 Hostname:kindnet-782846 Clientid:01:52:54:00:f4:d8:fc}
	I0927 02:03:23.183623   77529 main.go:141] libmachine: (kindnet-782846) DBG | domain kindnet-782846 has defined IP address 192.168.72.64 and MAC address 52:54:00:f4:d8:fc in network mk-kindnet-782846
	I0927 02:03:23.183876   77529 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0927 02:03:23.188745   77529 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 02:03:23.207101   77529 kubeadm.go:883] updating cluster {Name:kindnet-782846 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-782846 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.72.64 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 02:03:23.207253   77529 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 02:03:23.207358   77529 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 02:03:23.248444   77529 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0927 02:03:23.248503   77529 ssh_runner.go:195] Run: which lz4
	I0927 02:03:23.252954   77529 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0927 02:03:23.257411   77529 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0927 02:03:23.257454   77529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0927 02:03:21.379410   79102 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0927 02:03:21.379565   79102 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 02:03:21.379624   79102 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 02:03:21.396412   79102 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34131
	I0927 02:03:21.396859   79102 main.go:141] libmachine: () Calling .GetVersion
	I0927 02:03:21.397371   79102 main.go:141] libmachine: Using API Version  1
	I0927 02:03:21.397393   79102 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 02:03:21.397690   79102 main.go:141] libmachine: () Calling .GetMachineName
	I0927 02:03:21.397876   79102 main.go:141] libmachine: (calico-782846) Calling .GetMachineName
	I0927 02:03:21.398006   79102 main.go:141] libmachine: (calico-782846) Calling .DriverName
	I0927 02:03:21.398165   79102 start.go:159] libmachine.API.Create for "calico-782846" (driver="kvm2")
	I0927 02:03:21.398193   79102 client.go:168] LocalClient.Create starting
	I0927 02:03:21.398237   79102 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem
	I0927 02:03:21.398271   79102 main.go:141] libmachine: Decoding PEM data...
	I0927 02:03:21.398288   79102 main.go:141] libmachine: Parsing certificate...
	I0927 02:03:21.398336   79102 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem
	I0927 02:03:21.398363   79102 main.go:141] libmachine: Decoding PEM data...
	I0927 02:03:21.398376   79102 main.go:141] libmachine: Parsing certificate...
	I0927 02:03:21.398405   79102 main.go:141] libmachine: Running pre-create checks...
	I0927 02:03:21.398413   79102 main.go:141] libmachine: (calico-782846) Calling .PreCreateCheck
	I0927 02:03:21.398704   79102 main.go:141] libmachine: (calico-782846) Calling .GetConfigRaw
	I0927 02:03:21.399077   79102 main.go:141] libmachine: Creating machine...
	I0927 02:03:21.399091   79102 main.go:141] libmachine: (calico-782846) Calling .Create
	I0927 02:03:21.399209   79102 main.go:141] libmachine: (calico-782846) Creating KVM machine...
	I0927 02:03:21.645022   79102 main.go:141] libmachine: (calico-782846) DBG | found existing default KVM network
	I0927 02:03:21.646584   79102 main.go:141] libmachine: (calico-782846) DBG | I0927 02:03:21.646395   79311 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000202990}
	I0927 02:03:21.646612   79102 main.go:141] libmachine: (calico-782846) DBG | created network xml: 
	I0927 02:03:21.646629   79102 main.go:141] libmachine: (calico-782846) DBG | <network>
	I0927 02:03:21.646641   79102 main.go:141] libmachine: (calico-782846) DBG |   <name>mk-calico-782846</name>
	I0927 02:03:21.646649   79102 main.go:141] libmachine: (calico-782846) DBG |   <dns enable='no'/>
	I0927 02:03:21.646663   79102 main.go:141] libmachine: (calico-782846) DBG |   
	I0927 02:03:21.646697   79102 main.go:141] libmachine: (calico-782846) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0927 02:03:21.646722   79102 main.go:141] libmachine: (calico-782846) DBG |     <dhcp>
	I0927 02:03:21.646739   79102 main.go:141] libmachine: (calico-782846) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0927 02:03:21.646747   79102 main.go:141] libmachine: (calico-782846) DBG |     </dhcp>
	I0927 02:03:21.646756   79102 main.go:141] libmachine: (calico-782846) DBG |   </ip>
	I0927 02:03:21.646762   79102 main.go:141] libmachine: (calico-782846) DBG |   
	I0927 02:03:21.646771   79102 main.go:141] libmachine: (calico-782846) DBG | </network>
	I0927 02:03:21.646782   79102 main.go:141] libmachine: (calico-782846) DBG | 
	I0927 02:03:21.652278   79102 main.go:141] libmachine: (calico-782846) DBG | trying to create private KVM network mk-calico-782846 192.168.39.0/24...
	I0927 02:03:21.728808   79102 main.go:141] libmachine: (calico-782846) DBG | private KVM network mk-calico-782846 192.168.39.0/24 created
	I0927 02:03:21.728843   79102 main.go:141] libmachine: (calico-782846) DBG | I0927 02:03:21.728793   79311 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19711-14935/.minikube
	I0927 02:03:21.728855   79102 main.go:141] libmachine: (calico-782846) Setting up store path in /home/jenkins/minikube-integration/19711-14935/.minikube/machines/calico-782846 ...
	I0927 02:03:21.728884   79102 main.go:141] libmachine: (calico-782846) Building disk image from file:///home/jenkins/minikube-integration/19711-14935/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0927 02:03:21.728962   79102 main.go:141] libmachine: (calico-782846) Downloading /home/jenkins/minikube-integration/19711-14935/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19711-14935/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0927 02:03:22.016866   79102 main.go:141] libmachine: (calico-782846) DBG | I0927 02:03:22.016726   79311 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/calico-782846/id_rsa...
	I0927 02:03:22.113657   79102 main.go:141] libmachine: (calico-782846) DBG | I0927 02:03:22.113536   79311 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/calico-782846/calico-782846.rawdisk...
	I0927 02:03:22.113683   79102 main.go:141] libmachine: (calico-782846) DBG | Writing magic tar header
	I0927 02:03:22.113697   79102 main.go:141] libmachine: (calico-782846) DBG | Writing SSH key tar header
	I0927 02:03:22.113709   79102 main.go:141] libmachine: (calico-782846) DBG | I0927 02:03:22.113657   79311 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19711-14935/.minikube/machines/calico-782846 ...
	I0927 02:03:22.113779   79102 main.go:141] libmachine: (calico-782846) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/calico-782846
	I0927 02:03:22.113808   79102 main.go:141] libmachine: (calico-782846) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935/.minikube/machines/calico-782846 (perms=drwx------)
	I0927 02:03:22.113823   79102 main.go:141] libmachine: (calico-782846) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935/.minikube/machines
	I0927 02:03:22.113835   79102 main.go:141] libmachine: (calico-782846) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935/.minikube/machines (perms=drwxr-xr-x)
	I0927 02:03:22.113844   79102 main.go:141] libmachine: (calico-782846) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935/.minikube (perms=drwxr-xr-x)
	I0927 02:03:22.113852   79102 main.go:141] libmachine: (calico-782846) Setting executable bit set on /home/jenkins/minikube-integration/19711-14935 (perms=drwxrwxr-x)
	I0927 02:03:22.113862   79102 main.go:141] libmachine: (calico-782846) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0927 02:03:22.113881   79102 main.go:141] libmachine: (calico-782846) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0927 02:03:22.113896   79102 main.go:141] libmachine: (calico-782846) Creating domain...
	I0927 02:03:22.113914   79102 main.go:141] libmachine: (calico-782846) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935/.minikube
	I0927 02:03:22.113937   79102 main.go:141] libmachine: (calico-782846) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14935
	I0927 02:03:22.113956   79102 main.go:141] libmachine: (calico-782846) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0927 02:03:22.113968   79102 main.go:141] libmachine: (calico-782846) DBG | Checking permissions on dir: /home/jenkins
	I0927 02:03:22.113983   79102 main.go:141] libmachine: (calico-782846) DBG | Checking permissions on dir: /home
	I0927 02:03:22.113997   79102 main.go:141] libmachine: (calico-782846) DBG | Skipping /home - not owner
	I0927 02:03:22.114939   79102 main.go:141] libmachine: (calico-782846) define libvirt domain using xml: 
	I0927 02:03:22.114962   79102 main.go:141] libmachine: (calico-782846) <domain type='kvm'>
	I0927 02:03:22.114971   79102 main.go:141] libmachine: (calico-782846)   <name>calico-782846</name>
	I0927 02:03:22.114978   79102 main.go:141] libmachine: (calico-782846)   <memory unit='MiB'>3072</memory>
	I0927 02:03:22.114987   79102 main.go:141] libmachine: (calico-782846)   <vcpu>2</vcpu>
	I0927 02:03:22.114996   79102 main.go:141] libmachine: (calico-782846)   <features>
	I0927 02:03:22.115004   79102 main.go:141] libmachine: (calico-782846)     <acpi/>
	I0927 02:03:22.115009   79102 main.go:141] libmachine: (calico-782846)     <apic/>
	I0927 02:03:22.115035   79102 main.go:141] libmachine: (calico-782846)     <pae/>
	I0927 02:03:22.115046   79102 main.go:141] libmachine: (calico-782846)     
	I0927 02:03:22.115071   79102 main.go:141] libmachine: (calico-782846)   </features>
	I0927 02:03:22.115092   79102 main.go:141] libmachine: (calico-782846)   <cpu mode='host-passthrough'>
	I0927 02:03:22.115100   79102 main.go:141] libmachine: (calico-782846)   
	I0927 02:03:22.115105   79102 main.go:141] libmachine: (calico-782846)   </cpu>
	I0927 02:03:22.115109   79102 main.go:141] libmachine: (calico-782846)   <os>
	I0927 02:03:22.115114   79102 main.go:141] libmachine: (calico-782846)     <type>hvm</type>
	I0927 02:03:22.115120   79102 main.go:141] libmachine: (calico-782846)     <boot dev='cdrom'/>
	I0927 02:03:22.115124   79102 main.go:141] libmachine: (calico-782846)     <boot dev='hd'/>
	I0927 02:03:22.115130   79102 main.go:141] libmachine: (calico-782846)     <bootmenu enable='no'/>
	I0927 02:03:22.115134   79102 main.go:141] libmachine: (calico-782846)   </os>
	I0927 02:03:22.115139   79102 main.go:141] libmachine: (calico-782846)   <devices>
	I0927 02:03:22.115146   79102 main.go:141] libmachine: (calico-782846)     <disk type='file' device='cdrom'>
	I0927 02:03:22.115154   79102 main.go:141] libmachine: (calico-782846)       <source file='/home/jenkins/minikube-integration/19711-14935/.minikube/machines/calico-782846/boot2docker.iso'/>
	I0927 02:03:22.115158   79102 main.go:141] libmachine: (calico-782846)       <target dev='hdc' bus='scsi'/>
	I0927 02:03:22.115163   79102 main.go:141] libmachine: (calico-782846)       <readonly/>
	I0927 02:03:22.115169   79102 main.go:141] libmachine: (calico-782846)     </disk>
	I0927 02:03:22.115175   79102 main.go:141] libmachine: (calico-782846)     <disk type='file' device='disk'>
	I0927 02:03:22.115183   79102 main.go:141] libmachine: (calico-782846)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0927 02:03:22.115276   79102 main.go:141] libmachine: (calico-782846)       <source file='/home/jenkins/minikube-integration/19711-14935/.minikube/machines/calico-782846/calico-782846.rawdisk'/>
	I0927 02:03:22.115336   79102 main.go:141] libmachine: (calico-782846)       <target dev='hda' bus='virtio'/>
	I0927 02:03:22.115354   79102 main.go:141] libmachine: (calico-782846)     </disk>
	I0927 02:03:22.115364   79102 main.go:141] libmachine: (calico-782846)     <interface type='network'>
	I0927 02:03:22.115373   79102 main.go:141] libmachine: (calico-782846)       <source network='mk-calico-782846'/>
	I0927 02:03:22.115382   79102 main.go:141] libmachine: (calico-782846)       <model type='virtio'/>
	I0927 02:03:22.115389   79102 main.go:141] libmachine: (calico-782846)     </interface>
	I0927 02:03:22.115399   79102 main.go:141] libmachine: (calico-782846)     <interface type='network'>
	I0927 02:03:22.115409   79102 main.go:141] libmachine: (calico-782846)       <source network='default'/>
	I0927 02:03:22.115422   79102 main.go:141] libmachine: (calico-782846)       <model type='virtio'/>
	I0927 02:03:22.115444   79102 main.go:141] libmachine: (calico-782846)     </interface>
	I0927 02:03:22.115468   79102 main.go:141] libmachine: (calico-782846)     <serial type='pty'>
	I0927 02:03:22.115480   79102 main.go:141] libmachine: (calico-782846)       <target port='0'/>
	I0927 02:03:22.115489   79102 main.go:141] libmachine: (calico-782846)     </serial>
	I0927 02:03:22.115498   79102 main.go:141] libmachine: (calico-782846)     <console type='pty'>
	I0927 02:03:22.115507   79102 main.go:141] libmachine: (calico-782846)       <target type='serial' port='0'/>
	I0927 02:03:22.115515   79102 main.go:141] libmachine: (calico-782846)     </console>
	I0927 02:03:22.115524   79102 main.go:141] libmachine: (calico-782846)     <rng model='virtio'>
	I0927 02:03:22.115533   79102 main.go:141] libmachine: (calico-782846)       <backend model='random'>/dev/random</backend>
	I0927 02:03:22.115546   79102 main.go:141] libmachine: (calico-782846)     </rng>
	I0927 02:03:22.115557   79102 main.go:141] libmachine: (calico-782846)     
	I0927 02:03:22.115565   79102 main.go:141] libmachine: (calico-782846)     
	I0927 02:03:22.115573   79102 main.go:141] libmachine: (calico-782846)   </devices>
	I0927 02:03:22.115591   79102 main.go:141] libmachine: (calico-782846) </domain>
	I0927 02:03:22.115604   79102 main.go:141] libmachine: (calico-782846) 
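
The XML above is the libvirt domain definition handed to KVM: 3072 MiB of memory, 2 vCPUs, a boot2docker ISO attached as cdrom, the raw disk, and two virtio network interfaces. A minimal Go sketch that models a few of those fields with encoding/xml; the struct layout is an assumption for the example, not minikube's actual types:

package main

import (
	"encoding/xml"
	"fmt"
)

// Domain captures a small subset of the libvirt domain XML shown above.
type Domain struct {
	XMLName xml.Name `xml:"domain"`
	Type    string   `xml:"type,attr"`
	Name    string   `xml:"name"`
	Memory  struct {
		Unit  string `xml:"unit,attr"`
		Value string `xml:",chardata"`
	} `xml:"memory"`
	VCPU int `xml:"vcpu"`
}

func main() {
	d := Domain{Type: "kvm", Name: "calico-782846", VCPU: 2}
	d.Memory.Unit = "MiB"
	d.Memory.Value = "3072"
	out, err := xml.MarshalIndent(d, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
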
	I0927 02:03:22.119595   79102 main.go:141] libmachine: (calico-782846) DBG | domain calico-782846 has defined MAC address 52:54:00:ea:06:eb in network default
	I0927 02:03:22.120240   79102 main.go:141] libmachine: (calico-782846) DBG | domain calico-782846 has defined MAC address 52:54:00:a3:7d:22 in network mk-calico-782846
	I0927 02:03:22.120256   79102 main.go:141] libmachine: (calico-782846) Ensuring networks are active...
	I0927 02:03:22.121049   79102 main.go:141] libmachine: (calico-782846) Ensuring network default is active
	I0927 02:03:22.121457   79102 main.go:141] libmachine: (calico-782846) Ensuring network mk-calico-782846 is active
	I0927 02:03:22.122066   79102 main.go:141] libmachine: (calico-782846) Getting domain xml...
	I0927 02:03:22.122810   79102 main.go:141] libmachine: (calico-782846) Creating domain...
	I0927 02:03:23.454977   79102 main.go:141] libmachine: (calico-782846) Waiting to get IP...
	I0927 02:03:23.456319   79102 main.go:141] libmachine: (calico-782846) DBG | domain calico-782846 has defined MAC address 52:54:00:a3:7d:22 in network mk-calico-782846
	I0927 02:03:23.456791   79102 main.go:141] libmachine: (calico-782846) DBG | unable to find current IP address of domain calico-782846 in network mk-calico-782846
	I0927 02:03:23.456817   79102 main.go:141] libmachine: (calico-782846) DBG | I0927 02:03:23.456768   79311 retry.go:31] will retry after 214.467404ms: waiting for machine to come up
	I0927 02:03:23.673285   79102 main.go:141] libmachine: (calico-782846) DBG | domain calico-782846 has defined MAC address 52:54:00:a3:7d:22 in network mk-calico-782846
	I0927 02:03:23.673883   79102 main.go:141] libmachine: (calico-782846) DBG | unable to find current IP address of domain calico-782846 in network mk-calico-782846
	I0927 02:03:23.673904   79102 main.go:141] libmachine: (calico-782846) DBG | I0927 02:03:23.673860   79311 retry.go:31] will retry after 297.003392ms: waiting for machine to come up
	I0927 02:03:23.972398   79102 main.go:141] libmachine: (calico-782846) DBG | domain calico-782846 has defined MAC address 52:54:00:a3:7d:22 in network mk-calico-782846
	I0927 02:03:23.972998   79102 main.go:141] libmachine: (calico-782846) DBG | unable to find current IP address of domain calico-782846 in network mk-calico-782846
	I0927 02:03:23.973030   79102 main.go:141] libmachine: (calico-782846) DBG | I0927 02:03:23.972919   79311 retry.go:31] will retry after 390.677413ms: waiting for machine to come up
	I0927 02:03:24.365606   79102 main.go:141] libmachine: (calico-782846) DBG | domain calico-782846 has defined MAC address 52:54:00:a3:7d:22 in network mk-calico-782846
	I0927 02:03:24.366074   79102 main.go:141] libmachine: (calico-782846) DBG | unable to find current IP address of domain calico-782846 in network mk-calico-782846
	I0927 02:03:24.366097   79102 main.go:141] libmachine: (calico-782846) DBG | I0927 02:03:24.366050   79311 retry.go:31] will retry after 472.451956ms: waiting for machine to come up
	I0927 02:03:24.839604   79102 main.go:141] libmachine: (calico-782846) DBG | domain calico-782846 has defined MAC address 52:54:00:a3:7d:22 in network mk-calico-782846
	I0927 02:03:24.840069   79102 main.go:141] libmachine: (calico-782846) DBG | unable to find current IP address of domain calico-782846 in network mk-calico-782846
	I0927 02:03:24.840111   79102 main.go:141] libmachine: (calico-782846) DBG | I0927 02:03:24.840063   79311 retry.go:31] will retry after 601.782508ms: waiting for machine to come up
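
The "waiting for machine to come up" lines above poll for a DHCP lease with a growing wait between attempts (214ms, 297ms, 390ms, 472ms, 601ms). A Go sketch of that retry shape; lookupIP and the returned address are stand-ins, since the real lease lookup is not shown here:

package main

import (
	"errors"
	"fmt"
	"time"
)

// lookupIP is a hypothetical stand-in for querying the DHCP leases.
func lookupIP(attempt int) (string, error) {
	if attempt < 4 { // pretend the lease shows up on the fifth try
		return "", errors.New("unable to find current IP address")
	}
	return "192.168.39.2", nil // hypothetical address for the example
}

func main() {
	wait := 200 * time.Millisecond
	for attempt := 0; ; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("machine up at", ip)
			return
		}
		fmt.Printf("attempt %d: %v; will retry after %v\n", attempt, err, wait)
		time.Sleep(wait)
		wait = wait * 3 / 2 // roughly the growth visible in the log
	}
}
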
	I0927 02:03:21.982225   79364 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 02:03:21.982285   79364 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0927 02:03:21.982301   79364 cache.go:56] Caching tarball of preloaded images
	I0927 02:03:21.982447   79364 preload.go:172] Found /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0927 02:03:21.982465   79364 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0927 02:03:21.982607   79364 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/custom-flannel-782846/config.json ...
	I0927 02:03:21.982634   79364 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/custom-flannel-782846/config.json: {Name:mk74d868719eeafe73b46dc277da8f3318012210 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 02:03:21.982803   79364 start.go:360] acquireMachinesLock for custom-flannel-782846: {Name:mkef866a3f2eb57063758ef06140cca3a2e40b18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 02:03:24.772106   77529 crio.go:462] duration metric: took 1.519193727s to copy over tarball
	I0927 02:03:24.772175   77529 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0927 02:03:27.023451   77529 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.251243844s)
	I0927 02:03:27.023483   77529 crio.go:469] duration metric: took 2.251350275s to extract the tarball
	I0927 02:03:27.023509   77529 ssh_runner.go:146] rm: /preloaded.tar.lz4
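
The preload step above copies the lz4-compressed image tarball to the guest, extracts it into /var with tar, and removes it afterwards. A Go sketch of the extract-and-clean-up part run locally (the tarball path is an assumption; the real flow runs the same tar command over SSH):

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	src := "preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4" // assumed local path
	// Same tar invocation as the log: preserve xattrs, decompress with lz4, extract into /var.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", src)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("extracting preload: %v", err)
	}
	if err := os.Remove(src); err != nil {
		log.Printf("cleanup: %v", err)
	}
}
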
	I0927 02:03:27.063934   77529 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 02:03:27.112232   77529 crio.go:514] all images are preloaded for cri-o runtime.
	I0927 02:03:27.112254   77529 cache_images.go:84] Images are preloaded, skipping loading
	I0927 02:03:27.112263   77529 kubeadm.go:934] updating node { 192.168.72.64 8443 v1.31.1 crio true true} ...
	I0927 02:03:27.112372   77529 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kindnet-782846 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.64
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:kindnet-782846 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I0927 02:03:27.112459   77529 ssh_runner.go:195] Run: crio config
	I0927 02:03:27.158891   77529 cni.go:84] Creating CNI manager for "kindnet"
	I0927 02:03:27.158918   77529 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 02:03:27.158944   77529 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.64 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-782846 NodeName:kindnet-782846 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.64"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.64 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0927 02:03:27.159106   77529 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.64
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-782846"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.64
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.64"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
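The block ending here is the complete kubeadm manifest (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that kubeadm.go:187 renders from the cluster parameters and that is scp'd to /var/tmp/minikube/kubeadm.yaml.new a few lines below. As an illustration of how such a manifest can be rendered from a handful of parameters, here is a trimmed text/template sketch; the clusterParams type and the shortened template are hypothetical stand-ins, not minikube's bootstrapper code.

package main

import (
	"os"
	"text/template"
)

// clusterParams is a hypothetical, trimmed-down stand-in for the values the
// log records above (node name, IP, versions); it is not minikube's real type.
type clusterParams struct {
	NodeName          string
	AdvertiseAddress  string
	BindPort          int
	KubernetesVersion string
	PodSubnet         string
	ServiceSubnet     string
}

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	p := clusterParams{
		NodeName:          "kindnet-782846",
		AdvertiseAddress:  "192.168.72.64",
		BindPort:          8443,
		KubernetesVersion: "v1.31.1",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
	}
	// Render the manifest to stdout; the real flow writes it to
	// /var/tmp/minikube/kubeadm.yaml.new over SSH.
	tmpl := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	if err := tmpl.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}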
	
	I0927 02:03:27.159172   77529 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 02:03:27.170076   77529 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 02:03:27.170149   77529 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0927 02:03:27.181899   77529 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0927 02:03:27.200264   77529 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 02:03:27.217549   77529 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0927 02:03:27.236885   77529 ssh_runner.go:195] Run: grep 192.168.72.64	control-plane.minikube.internal$ /etc/hosts
	I0927 02:03:27.241484   77529 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.64	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
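The one-liner above makes the /etc/hosts update idempotent: any existing line ending in control-plane.minikube.internal is filtered out before the fresh 192.168.72.64 mapping is appended, so repeated starts do not accumulate duplicate entries. A small Go sketch of the same pattern, operating on a scratch file instead of the guest's /etc/hosts (illustrative only; minikube runs the bash version over ssh_runner as shown).

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry appends "ip<TAB>name" to the given hosts file unless a
// line for that name is already present, mirroring the grep/rewrite above.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(strings.TrimRight(line, " \t"), "\t"+name) {
			continue // drop any stale mapping for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// Example only: operate on a scratch copy rather than the real /etc/hosts.
	_ = os.WriteFile("hosts.test", []byte("127.0.0.1\tlocalhost\n"), 0o644)
	if err := ensureHostsEntry("hosts.test", "192.168.72.64", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}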
	I0927 02:03:27.256609   77529 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 02:03:27.397050   77529 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 02:03:27.416179   77529 certs.go:68] Setting up /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kindnet-782846 for IP: 192.168.72.64
	I0927 02:03:27.416207   77529 certs.go:194] generating shared ca certs ...
	I0927 02:03:27.416228   77529 certs.go:226] acquiring lock for ca certs: {Name:mkdfc5b7e93f77f5ae72cc653545624244421aa5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 02:03:27.416431   77529 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key
	I0927 02:03:27.416497   77529 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key
	I0927 02:03:27.416510   77529 certs.go:256] generating profile certs ...
	I0927 02:03:27.416586   77529 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kindnet-782846/client.key
	I0927 02:03:27.416611   77529 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kindnet-782846/client.crt with IP's: []
	I0927 02:03:27.570611   77529 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kindnet-782846/client.crt ...
	I0927 02:03:27.570640   77529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kindnet-782846/client.crt: {Name:mkd9ebcbe616ffcb9a2980765c7b5d85c16de6ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 02:03:27.570835   77529 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kindnet-782846/client.key ...
	I0927 02:03:27.570849   77529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kindnet-782846/client.key: {Name:mke2ec8575a486d8068f500f439fd28c499f4e06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 02:03:27.570950   77529 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kindnet-782846/apiserver.key.ec8dcf1e
	I0927 02:03:27.570966   77529 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kindnet-782846/apiserver.crt.ec8dcf1e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.64]
	I0927 02:03:27.643178   77529 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kindnet-782846/apiserver.crt.ec8dcf1e ...
	I0927 02:03:27.643210   77529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kindnet-782846/apiserver.crt.ec8dcf1e: {Name:mk0fb25d0508a2bf65771234694f57dfd6cb2565 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 02:03:27.643411   77529 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kindnet-782846/apiserver.key.ec8dcf1e ...
	I0927 02:03:27.643433   77529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kindnet-782846/apiserver.key.ec8dcf1e: {Name:mk862afc71861f63ea993348f6780d55844551cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 02:03:27.643534   77529 certs.go:381] copying /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kindnet-782846/apiserver.crt.ec8dcf1e -> /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kindnet-782846/apiserver.crt
	I0927 02:03:27.643629   77529 certs.go:385] copying /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kindnet-782846/apiserver.key.ec8dcf1e -> /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kindnet-782846/apiserver.key
	I0927 02:03:27.643686   77529 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kindnet-782846/proxy-client.key
	I0927 02:03:27.643701   77529 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kindnet-782846/proxy-client.crt with IP's: []
	I0927 02:03:27.892117   77529 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kindnet-782846/proxy-client.crt ...
	I0927 02:03:27.892152   77529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kindnet-782846/proxy-client.crt: {Name:mk7b198ab60b62b324c86e0bc355e38465a01a6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 02:03:27.892338   77529 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kindnet-782846/proxy-client.key ...
	I0927 02:03:27.892356   77529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kindnet-782846/proxy-client.key: {Name:mk7cd5ddb5521dcdab974779f049f9d0890ad983 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
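certs.go:363 above generates the per-profile certificates: a client cert for "minikube-user", an apiserver serving cert whose SANs are the service IP, loopback and node IP (10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.64), and an aggregator proxy-client cert, each signed by the shared minikubeCA. The sketch below shows the serving-cert half of that using only the standard library; it generates a throwaway CA instead of loading ~/.minikube/ca.{crt,key}, so it is an illustration of the idea rather than minikube's crypto.go.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA so the example is self-contained (errors elided for brevity).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// The apiserver serving cert, with the IP SANs the log shows.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.72.64"),
		},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}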
	I0927 02:03:27.892542   77529 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem (1338 bytes)
	W0927 02:03:27.892591   77529 certs.go:480] ignoring /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138_empty.pem, impossibly tiny 0 bytes
	I0927 02:03:27.892605   77529 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem (1671 bytes)
	I0927 02:03:27.892641   77529 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem (1078 bytes)
	I0927 02:03:27.892677   77529 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem (1123 bytes)
	I0927 02:03:27.892711   77529 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem (1675 bytes)
	I0927 02:03:27.892765   77529 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem (1708 bytes)
	I0927 02:03:27.893321   77529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 02:03:27.921196   77529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0927 02:03:27.946657   77529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 02:03:27.972493   77529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0927 02:03:27.998408   77529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kindnet-782846/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0927 02:03:28.024065   77529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kindnet-782846/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0927 02:03:28.097654   77529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kindnet-782846/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 02:03:28.128338   77529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kindnet-782846/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0927 02:03:28.152413   77529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 02:03:28.178200   77529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem --> /usr/share/ca-certificates/22138.pem (1338 bytes)
	I0927 02:03:28.203231   77529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /usr/share/ca-certificates/221382.pem (1708 bytes)
	I0927 02:03:28.228498   77529 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 02:03:28.246534   77529 ssh_runner.go:195] Run: openssl version
	I0927 02:03:28.252966   77529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22138.pem && ln -fs /usr/share/ca-certificates/22138.pem /etc/ssl/certs/22138.pem"
	I0927 02:03:28.266145   77529 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22138.pem
	I0927 02:03:28.271744   77529 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 00:32 /usr/share/ca-certificates/22138.pem
	I0927 02:03:28.271801   77529 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22138.pem
	I0927 02:03:28.278074   77529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22138.pem /etc/ssl/certs/51391683.0"
	I0927 02:03:28.290529   77529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221382.pem && ln -fs /usr/share/ca-certificates/221382.pem /etc/ssl/certs/221382.pem"
	I0927 02:03:28.304046   77529 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221382.pem
	I0927 02:03:28.308957   77529 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 00:32 /usr/share/ca-certificates/221382.pem
	I0927 02:03:28.309023   77529 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221382.pem
	I0927 02:03:28.315706   77529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221382.pem /etc/ssl/certs/3ec20f2e.0"
	I0927 02:03:28.327687   77529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 02:03:28.341380   77529 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 02:03:28.346096   77529 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:16 /usr/share/ca-certificates/minikubeCA.pem
	I0927 02:03:28.346149   77529 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 02:03:28.352374   77529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
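Each extra CA copied to /usr/share/ca-certificates is then hashed with `openssl x509 -hash -noout` and symlinked from /etc/ssl/certs/<hash>.0 (for example b5213941.0 for minikubeCA above) so OpenSSL-based clients on the node trust it. A sketch of that hash-and-symlink step, shelling out to openssl; the paths are examples and the real commands run through ssh_runner on the guest.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert installs certPath under certsDir as <openssl-subject-hash>.0,
// the naming scheme the log above produces.
func linkCACert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace any stale link, like `ln -fs`
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}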
	I0927 02:03:28.363869   77529 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 02:03:28.368018   77529 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0927 02:03:28.368072   77529 kubeadm.go:392] StartCluster: {Name:kindnet-782846 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1
ClusterName:kindnet-782846 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.72.64 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 02:03:28.368141   77529 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0927 02:03:28.368188   77529 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 02:03:28.405295   77529 cri.go:89] found id: ""
	I0927 02:03:28.405389   77529 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0927 02:03:28.415777   77529 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 02:03:28.426048   77529 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 02:03:28.436219   77529 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 02:03:28.436240   77529 kubeadm.go:157] found existing configuration files:
	
	I0927 02:03:28.436290   77529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 02:03:28.445877   77529 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 02:03:28.445932   77529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 02:03:28.455755   77529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 02:03:28.467028   77529 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 02:03:28.467088   77529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 02:03:28.477511   77529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 02:03:28.488037   77529 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 02:03:28.488104   77529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 02:03:28.498326   77529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 02:03:28.508820   77529 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 02:03:28.508886   77529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
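The sequence above is the stale-config cleanup: for each kubeconfig under /etc/kubernetes, grep for the expected https://control-plane.minikube.internal:8443 endpoint and remove the file when the endpoint is absent (here the files simply do not exist yet), leaving kubeadm init free to regenerate them. A compact Go sketch of that loop, illustrative only.

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleKubeconfigs removes any existing kubeconfig that does not point
// at the expected control-plane endpoint, mirroring the grep/rm steps above.
func cleanStaleKubeconfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil || !strings.Contains(string(data), endpoint) {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, p)
			_ = os.Remove(p) // ignore "not exist", just like `rm -f`
		}
	}
}

func main() {
	cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}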
	I0927 02:03:28.518660   77529 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0927 02:03:28.572683   77529 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0927 02:03:28.572917   77529 kubeadm.go:310] [preflight] Running pre-flight checks
	I0927 02:03:28.686335   77529 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0927 02:03:28.686473   77529 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0927 02:03:28.686585   77529 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0927 02:03:28.697841   77529 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0927 02:03:28.730596   77529 out.go:235]   - Generating certificates and keys ...
	I0927 02:03:28.730714   77529 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0927 02:03:28.730826   77529 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0927 02:03:28.922716   77529 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0927 02:03:29.159050   77529 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0927 02:03:25.445492   79102 main.go:141] libmachine: (calico-782846) DBG | domain calico-782846 has defined MAC address 52:54:00:a3:7d:22 in network mk-calico-782846
	I0927 02:03:25.446082   79102 main.go:141] libmachine: (calico-782846) DBG | unable to find current IP address of domain calico-782846 in network mk-calico-782846
	I0927 02:03:25.446114   79102 main.go:141] libmachine: (calico-782846) DBG | I0927 02:03:25.446029   79311 retry.go:31] will retry after 650.812749ms: waiting for machine to come up
	I0927 02:03:26.098147   79102 main.go:141] libmachine: (calico-782846) DBG | domain calico-782846 has defined MAC address 52:54:00:a3:7d:22 in network mk-calico-782846
	I0927 02:03:26.098659   79102 main.go:141] libmachine: (calico-782846) DBG | unable to find current IP address of domain calico-782846 in network mk-calico-782846
	I0927 02:03:26.098689   79102 main.go:141] libmachine: (calico-782846) DBG | I0927 02:03:26.098629   79311 retry.go:31] will retry after 884.906293ms: waiting for machine to come up
	I0927 02:03:26.985684   79102 main.go:141] libmachine: (calico-782846) DBG | domain calico-782846 has defined MAC address 52:54:00:a3:7d:22 in network mk-calico-782846
	I0927 02:03:26.986166   79102 main.go:141] libmachine: (calico-782846) DBG | unable to find current IP address of domain calico-782846 in network mk-calico-782846
	I0927 02:03:26.986195   79102 main.go:141] libmachine: (calico-782846) DBG | I0927 02:03:26.986112   79311 retry.go:31] will retry after 1.324430701s: waiting for machine to come up
	I0927 02:03:28.312238   79102 main.go:141] libmachine: (calico-782846) DBG | domain calico-782846 has defined MAC address 52:54:00:a3:7d:22 in network mk-calico-782846
	I0927 02:03:28.312576   79102 main.go:141] libmachine: (calico-782846) DBG | unable to find current IP address of domain calico-782846 in network mk-calico-782846
	I0927 02:03:28.312602   79102 main.go:141] libmachine: (calico-782846) DBG | I0927 02:03:28.312530   79311 retry.go:31] will retry after 1.488313942s: waiting for machine to come up
	I0927 02:03:29.801991   79102 main.go:141] libmachine: (calico-782846) DBG | domain calico-782846 has defined MAC address 52:54:00:a3:7d:22 in network mk-calico-782846
	I0927 02:03:29.802435   79102 main.go:141] libmachine: (calico-782846) DBG | unable to find current IP address of domain calico-782846 in network mk-calico-782846
	I0927 02:03:29.802467   79102 main.go:141] libmachine: (calico-782846) DBG | I0927 02:03:29.802390   79311 retry.go:31] will retry after 1.540248187s: waiting for machine to come up
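The interleaved calico-782846 lines come from a second profile whose VM is still booting: libmachine polls libvirt for the domain's IP and, via retry.go:31, sleeps for a growing, jittered interval (roughly 650 ms up to a few seconds) between attempts. A minimal wait-for-IP loop in that style is sketched below; lookupIP is a stand-in for the real libvirt lease query, not minikube's KVM driver code.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var attempts int

// lookupIP stands in for asking libvirt for the domain's current lease;
// here it fails a few times before "finding" an address.
func lookupIP(domain string) (string, error) {
	attempts++
	if attempts < 4 {
		return "", errors.New("unable to find current IP address of domain " + domain)
	}
	return "192.168.72.64", nil
}

// waitForIP retries lookupIP with jittered, roughly doubling delays,
// mirroring the "will retry after ..." messages in the log above.
func waitForIP(domain string, maxWait time.Duration) (string, error) {
	delay := 500 * time.Millisecond
	deadline := time.Now().Add(maxWait)
	for {
		ip, err := lookupIP(domain)
		if err == nil {
			return ip, nil
		}
		if time.Now().After(deadline) {
			return "", fmt.Errorf("timed out waiting for machine to come up: %w", err)
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay *= 2
	}
}

func main() {
	ip, err := waitForIP("calico-782846", 30*time.Second)
	fmt.Println(ip, err)
}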
	I0927 02:03:29.305223   77529 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0927 02:03:29.374051   77529 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0927 02:03:29.640898   77529 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0927 02:03:29.641088   77529 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kindnet-782846 localhost] and IPs [192.168.72.64 127.0.0.1 ::1]
	I0927 02:03:29.732282   77529 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0927 02:03:29.732463   77529 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kindnet-782846 localhost] and IPs [192.168.72.64 127.0.0.1 ::1]
	I0927 02:03:30.397542   77529 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0927 02:03:30.666382   77529 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0927 02:03:30.882840   77529 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0927 02:03:30.882934   77529 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0927 02:03:31.089298   77529 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0927 02:03:31.233678   77529 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0927 02:03:31.392469   77529 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0927 02:03:31.493764   77529 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0927 02:03:31.703684   77529 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0927 02:03:31.704203   77529 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0927 02:03:31.706780   77529 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0927 02:03:31.708769   77529 out.go:235]   - Booting up control plane ...
	I0927 02:03:31.708889   77529 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0927 02:03:31.709006   77529 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0927 02:03:31.709109   77529 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0927 02:03:31.731978   77529 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0927 02:03:31.741663   77529 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0927 02:03:31.741731   77529 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0927 02:03:31.889234   77529 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0927 02:03:31.889369   77529 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0927 02:03:32.891501   77529 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002804341s
	I0927 02:03:32.891611   77529 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0927 02:03:31.344372   79102 main.go:141] libmachine: (calico-782846) DBG | domain calico-782846 has defined MAC address 52:54:00:a3:7d:22 in network mk-calico-782846
	I0927 02:03:31.344796   79102 main.go:141] libmachine: (calico-782846) DBG | unable to find current IP address of domain calico-782846 in network mk-calico-782846
	I0927 02:03:31.344840   79102 main.go:141] libmachine: (calico-782846) DBG | I0927 02:03:31.344777   79311 retry.go:31] will retry after 2.261127301s: waiting for machine to come up
	I0927 02:03:33.608556   79102 main.go:141] libmachine: (calico-782846) DBG | domain calico-782846 has defined MAC address 52:54:00:a3:7d:22 in network mk-calico-782846
	I0927 02:03:33.608955   79102 main.go:141] libmachine: (calico-782846) DBG | unable to find current IP address of domain calico-782846 in network mk-calico-782846
	I0927 02:03:33.609027   79102 main.go:141] libmachine: (calico-782846) DBG | I0927 02:03:33.608934   79311 retry.go:31] will retry after 2.958330848s: waiting for machine to come up
	I0927 02:03:37.892033   77529 kubeadm.go:310] [api-check] The API server is healthy after 5.00305915s
	I0927 02:03:37.906393   77529 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0927 02:03:37.926784   77529 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0927 02:03:37.969334   77529 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0927 02:03:37.969612   77529 kubeadm.go:310] [mark-control-plane] Marking the node kindnet-782846 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0927 02:03:37.984872   77529 kubeadm.go:310] [bootstrap-token] Using token: la2xe2.6p9hoo2lyimfj7ja
	
	
	==> CRI-O <==
	Sep 27 02:03:38 default-k8s-diff-port-368295 crio[711]: time="2024-09-27 02:03:38.684336723Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402618684311522,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=77cb92d7-746c-40e2-9294-5bba0f54d22c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 02:03:38 default-k8s-diff-port-368295 crio[711]: time="2024-09-27 02:03:38.684906469Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fdceb687-285b-4919-8b9d-3642b4e977e6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 02:03:38 default-k8s-diff-port-368295 crio[711]: time="2024-09-27 02:03:38.684962119Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fdceb687-285b-4919-8b9d-3642b4e977e6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 02:03:38 default-k8s-diff-port-368295 crio[711]: time="2024-09-27 02:03:38.685171128Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9b79b5c0a010e0cd81da04372248299d08c081b7ffa7928eb543c5c791c03aa6,PodSandboxId:34fa08e76381d5327f3585326f92be6d8fc179c1f42c20ebf6b3d91fe34b05d3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727401601483569196,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aaa7a054-2eee-45ee-a9bc-c305e53e1273,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:493a3f26ca3a150405205c99d2e70dd6bddf476d596254593a52a51bbf295de9,PodSandboxId:ba37dc9e76c9b1b073ad88bd2f0327d0107ea2988867fcfd436453e58c15c2a0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727401600836728632,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qkbzv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2725448-3f80-45d8-8bd8-49dcf8878f7e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c95c262cabaf32c716f92f43c2647b266df1bbc4abd0aaaba87ab628ca61b7d8,PodSandboxId:ea14b34bae4582ee7cd6eaedaaf8b1e7cd6ed9998d9ceb5a983aeb64c39d944c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727401600735091485,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4d7pk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: c84ab26c-2e13-437c-b059-43c8ca1d90c8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a82c79f60ab5f4067e751cb349b7dbfe1de7bf9e16412eaa9586c5e8c5d591aa,PodSandboxId:793fc6b52aba3570288274985bdc54c5c1715cdda1b12f9f71c794d1bb5cb74a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1727401599779649068,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kqjdq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91b96945-0ffe-404f-a0d5-f8729d4248ce,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ed8ae1ddd98912c9a6489cfeaeeacd29170e4315d9183670dfd43657f3748a2,PodSandboxId:153f4fb3af3a95a081cf27afc322ac73af61a0ccead9d40a87a20ee3759a47dc,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727401588807163672,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-368295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14efa3785d77c2217257464e631112ed,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a46b48d9fc2ea9840922d7fa66637af8903c6811a8730ceda3091d4a0504e14,PodSandboxId:87de1a0c59c4698bfcafa26276982dff9b5c8e057763658c6ce7bfd43124b2cc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727401588810564289,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-368295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da34e1017bd5e89c00d6e00079b023aa,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:317a14a66de31f447e0c853921f869a3b565701e7d2523240ead500e6043ab77,PodSandboxId:b81443ee03f2b8afb9050aeff14b824cc85ed282a92ff35b958bf4d879d6c364,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727401588816734968,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-368295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37ba2b30cad6c9303ffa93090a5dcf79,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2b78be2052d8d1d68373bad53e846c75956ac519504a629da3d1498b8646743,PodSandboxId:e56247841d77007485939577db8603556d040e3696252d1c8d3a9bdb8955dda3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727401588688037358,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-368295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25b3e5798605efaeb253e94a59600958,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:affe15a528d50338e85e2c06b120a63cc862e5ccd6b9647eb338b8ed9bec8703,PodSandboxId:852e5f549b2abe254a2f88a1f75ccfcf2afa0b21bd48a28979e7bc70d0599e75,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727401301990748617,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-368295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37ba2b30cad6c9303ffa93090a5dcf79,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fdceb687-285b-4919-8b9d-3642b4e977e6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 02:03:38 default-k8s-diff-port-368295 crio[711]: time="2024-09-27 02:03:38.728643696Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=868a5fd2-5e50-4aa1-b79f-11ffcdb854d7 name=/runtime.v1.RuntimeService/Version
	Sep 27 02:03:38 default-k8s-diff-port-368295 crio[711]: time="2024-09-27 02:03:38.728740120Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=868a5fd2-5e50-4aa1-b79f-11ffcdb854d7 name=/runtime.v1.RuntimeService/Version
	Sep 27 02:03:38 default-k8s-diff-port-368295 crio[711]: time="2024-09-27 02:03:38.730189806Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=90b00366-ae1c-41c3-9ad1-9da1c9b00cd5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 02:03:38 default-k8s-diff-port-368295 crio[711]: time="2024-09-27 02:03:38.730853641Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402618730822533,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=90b00366-ae1c-41c3-9ad1-9da1c9b00cd5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 02:03:38 default-k8s-diff-port-368295 crio[711]: time="2024-09-27 02:03:38.731602699Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=05610163-ae08-448e-9ba7-5857a7a1ab73 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 02:03:38 default-k8s-diff-port-368295 crio[711]: time="2024-09-27 02:03:38.731699268Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=05610163-ae08-448e-9ba7-5857a7a1ab73 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 02:03:38 default-k8s-diff-port-368295 crio[711]: time="2024-09-27 02:03:38.732959807Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9b79b5c0a010e0cd81da04372248299d08c081b7ffa7928eb543c5c791c03aa6,PodSandboxId:34fa08e76381d5327f3585326f92be6d8fc179c1f42c20ebf6b3d91fe34b05d3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727401601483569196,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aaa7a054-2eee-45ee-a9bc-c305e53e1273,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:493a3f26ca3a150405205c99d2e70dd6bddf476d596254593a52a51bbf295de9,PodSandboxId:ba37dc9e76c9b1b073ad88bd2f0327d0107ea2988867fcfd436453e58c15c2a0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727401600836728632,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qkbzv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2725448-3f80-45d8-8bd8-49dcf8878f7e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c95c262cabaf32c716f92f43c2647b266df1bbc4abd0aaaba87ab628ca61b7d8,PodSandboxId:ea14b34bae4582ee7cd6eaedaaf8b1e7cd6ed9998d9ceb5a983aeb64c39d944c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727401600735091485,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4d7pk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: c84ab26c-2e13-437c-b059-43c8ca1d90c8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a82c79f60ab5f4067e751cb349b7dbfe1de7bf9e16412eaa9586c5e8c5d591aa,PodSandboxId:793fc6b52aba3570288274985bdc54c5c1715cdda1b12f9f71c794d1bb5cb74a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1727401599779649068,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kqjdq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91b96945-0ffe-404f-a0d5-f8729d4248ce,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ed8ae1ddd98912c9a6489cfeaeeacd29170e4315d9183670dfd43657f3748a2,PodSandboxId:153f4fb3af3a95a081cf27afc322ac73af61a0ccead9d40a87a20ee3759a47dc,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727401588807163672,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-368295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14efa3785d77c2217257464e631112ed,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a46b48d9fc2ea9840922d7fa66637af8903c6811a8730ceda3091d4a0504e14,PodSandboxId:87de1a0c59c4698bfcafa26276982dff9b5c8e057763658c6ce7bfd43124b2cc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727401588810564289,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-368295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da34e1017bd5e89c00d6e00079b023aa,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:317a14a66de31f447e0c853921f869a3b565701e7d2523240ead500e6043ab77,PodSandboxId:b81443ee03f2b8afb9050aeff14b824cc85ed282a92ff35b958bf4d879d6c364,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727401588816734968,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-368295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37ba2b30cad6c9303ffa93090a5dcf79,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2b78be2052d8d1d68373bad53e846c75956ac519504a629da3d1498b8646743,PodSandboxId:e56247841d77007485939577db8603556d040e3696252d1c8d3a9bdb8955dda3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727401588688037358,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-368295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25b3e5798605efaeb253e94a59600958,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:affe15a528d50338e85e2c06b120a63cc862e5ccd6b9647eb338b8ed9bec8703,PodSandboxId:852e5f549b2abe254a2f88a1f75ccfcf2afa0b21bd48a28979e7bc70d0599e75,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727401301990748617,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-368295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37ba2b30cad6c9303ffa93090a5dcf79,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=05610163-ae08-448e-9ba7-5857a7a1ab73 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 02:03:38 default-k8s-diff-port-368295 crio[711]: time="2024-09-27 02:03:38.785418247Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3a2acf91-daae-4f73-a3b3-07e40026f29a name=/runtime.v1.RuntimeService/Version
	Sep 27 02:03:38 default-k8s-diff-port-368295 crio[711]: time="2024-09-27 02:03:38.785583613Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3a2acf91-daae-4f73-a3b3-07e40026f29a name=/runtime.v1.RuntimeService/Version
	Sep 27 02:03:38 default-k8s-diff-port-368295 crio[711]: time="2024-09-27 02:03:38.787130725Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=25ceb645-0f78-4046-8bc8-629bdd6c22e8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 02:03:38 default-k8s-diff-port-368295 crio[711]: time="2024-09-27 02:03:38.787962783Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402618787932781,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=25ceb645-0f78-4046-8bc8-629bdd6c22e8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 02:03:38 default-k8s-diff-port-368295 crio[711]: time="2024-09-27 02:03:38.788637303Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8bb4b0d0-f3bc-4d99-b5d5-9aec7d254c58 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 02:03:38 default-k8s-diff-port-368295 crio[711]: time="2024-09-27 02:03:38.788692086Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8bb4b0d0-f3bc-4d99-b5d5-9aec7d254c58 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 02:03:38 default-k8s-diff-port-368295 crio[711]: time="2024-09-27 02:03:38.788901309Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9b79b5c0a010e0cd81da04372248299d08c081b7ffa7928eb543c5c791c03aa6,PodSandboxId:34fa08e76381d5327f3585326f92be6d8fc179c1f42c20ebf6b3d91fe34b05d3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727401601483569196,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aaa7a054-2eee-45ee-a9bc-c305e53e1273,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:493a3f26ca3a150405205c99d2e70dd6bddf476d596254593a52a51bbf295de9,PodSandboxId:ba37dc9e76c9b1b073ad88bd2f0327d0107ea2988867fcfd436453e58c15c2a0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727401600836728632,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qkbzv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2725448-3f80-45d8-8bd8-49dcf8878f7e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c95c262cabaf32c716f92f43c2647b266df1bbc4abd0aaaba87ab628ca61b7d8,PodSandboxId:ea14b34bae4582ee7cd6eaedaaf8b1e7cd6ed9998d9ceb5a983aeb64c39d944c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727401600735091485,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4d7pk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: c84ab26c-2e13-437c-b059-43c8ca1d90c8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a82c79f60ab5f4067e751cb349b7dbfe1de7bf9e16412eaa9586c5e8c5d591aa,PodSandboxId:793fc6b52aba3570288274985bdc54c5c1715cdda1b12f9f71c794d1bb5cb74a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1727401599779649068,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kqjdq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91b96945-0ffe-404f-a0d5-f8729d4248ce,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ed8ae1ddd98912c9a6489cfeaeeacd29170e4315d9183670dfd43657f3748a2,PodSandboxId:153f4fb3af3a95a081cf27afc322ac73af61a0ccead9d40a87a20ee3759a47dc,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727401588807163672,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-368295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14efa3785d77c2217257464e631112ed,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a46b48d9fc2ea9840922d7fa66637af8903c6811a8730ceda3091d4a0504e14,PodSandboxId:87de1a0c59c4698bfcafa26276982dff9b5c8e057763658c6ce7bfd43124b2cc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727401588810564289,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-368295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da34e1017bd5e89c00d6e00079b023aa,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:317a14a66de31f447e0c853921f869a3b565701e7d2523240ead500e6043ab77,PodSandboxId:b81443ee03f2b8afb9050aeff14b824cc85ed282a92ff35b958bf4d879d6c364,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727401588816734968,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-368295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37ba2b30cad6c9303ffa93090a5dcf79,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2b78be2052d8d1d68373bad53e846c75956ac519504a629da3d1498b8646743,PodSandboxId:e56247841d77007485939577db8603556d040e3696252d1c8d3a9bdb8955dda3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727401588688037358,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-368295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25b3e5798605efaeb253e94a59600958,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:affe15a528d50338e85e2c06b120a63cc862e5ccd6b9647eb338b8ed9bec8703,PodSandboxId:852e5f549b2abe254a2f88a1f75ccfcf2afa0b21bd48a28979e7bc70d0599e75,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727401301990748617,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-368295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37ba2b30cad6c9303ffa93090a5dcf79,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8bb4b0d0-f3bc-4d99-b5d5-9aec7d254c58 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 02:03:38 default-k8s-diff-port-368295 crio[711]: time="2024-09-27 02:03:38.830098862Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9584229c-fef2-4237-b1be-1eab9ada3320 name=/runtime.v1.RuntimeService/Version
	Sep 27 02:03:38 default-k8s-diff-port-368295 crio[711]: time="2024-09-27 02:03:38.830189493Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9584229c-fef2-4237-b1be-1eab9ada3320 name=/runtime.v1.RuntimeService/Version
	Sep 27 02:03:38 default-k8s-diff-port-368295 crio[711]: time="2024-09-27 02:03:38.831381128Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c1150c5d-424f-4230-9ff9-ffc3956a8901 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 02:03:38 default-k8s-diff-port-368295 crio[711]: time="2024-09-27 02:03:38.832175427Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402618832115834,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c1150c5d-424f-4230-9ff9-ffc3956a8901 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 02:03:38 default-k8s-diff-port-368295 crio[711]: time="2024-09-27 02:03:38.833441840Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=80242ffa-3b3c-4b75-a6e1-282a9d0035f7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 02:03:38 default-k8s-diff-port-368295 crio[711]: time="2024-09-27 02:03:38.833576626Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=80242ffa-3b3c-4b75-a6e1-282a9d0035f7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 02:03:38 default-k8s-diff-port-368295 crio[711]: time="2024-09-27 02:03:38.833842642Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9b79b5c0a010e0cd81da04372248299d08c081b7ffa7928eb543c5c791c03aa6,PodSandboxId:34fa08e76381d5327f3585326f92be6d8fc179c1f42c20ebf6b3d91fe34b05d3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727401601483569196,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aaa7a054-2eee-45ee-a9bc-c305e53e1273,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:493a3f26ca3a150405205c99d2e70dd6bddf476d596254593a52a51bbf295de9,PodSandboxId:ba37dc9e76c9b1b073ad88bd2f0327d0107ea2988867fcfd436453e58c15c2a0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727401600836728632,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qkbzv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2725448-3f80-45d8-8bd8-49dcf8878f7e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c95c262cabaf32c716f92f43c2647b266df1bbc4abd0aaaba87ab628ca61b7d8,PodSandboxId:ea14b34bae4582ee7cd6eaedaaf8b1e7cd6ed9998d9ceb5a983aeb64c39d944c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727401600735091485,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4d7pk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: c84ab26c-2e13-437c-b059-43c8ca1d90c8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a82c79f60ab5f4067e751cb349b7dbfe1de7bf9e16412eaa9586c5e8c5d591aa,PodSandboxId:793fc6b52aba3570288274985bdc54c5c1715cdda1b12f9f71c794d1bb5cb74a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1727401599779649068,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kqjdq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91b96945-0ffe-404f-a0d5-f8729d4248ce,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ed8ae1ddd98912c9a6489cfeaeeacd29170e4315d9183670dfd43657f3748a2,PodSandboxId:153f4fb3af3a95a081cf27afc322ac73af61a0ccead9d40a87a20ee3759a47dc,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727401588807163672,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-368295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14efa3785d77c2217257464e631112ed,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a46b48d9fc2ea9840922d7fa66637af8903c6811a8730ceda3091d4a0504e14,PodSandboxId:87de1a0c59c4698bfcafa26276982dff9b5c8e057763658c6ce7bfd43124b2cc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727401588810564289,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-368295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da34e1017bd5e89c00d6e00079b023aa,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:317a14a66de31f447e0c853921f869a3b565701e7d2523240ead500e6043ab77,PodSandboxId:b81443ee03f2b8afb9050aeff14b824cc85ed282a92ff35b958bf4d879d6c364,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727401588816734968,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-368295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37ba2b30cad6c9303ffa93090a5dcf79,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2b78be2052d8d1d68373bad53e846c75956ac519504a629da3d1498b8646743,PodSandboxId:e56247841d77007485939577db8603556d040e3696252d1c8d3a9bdb8955dda3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727401588688037358,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-368295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25b3e5798605efaeb253e94a59600958,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:affe15a528d50338e85e2c06b120a63cc862e5ccd6b9647eb338b8ed9bec8703,PodSandboxId:852e5f549b2abe254a2f88a1f75ccfcf2afa0b21bd48a28979e7bc70d0599e75,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727401301990748617,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-368295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37ba2b30cad6c9303ffa93090a5dcf79,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=80242ffa-3b3c-4b75-a6e1-282a9d0035f7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9b79b5c0a010e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       0                   34fa08e76381d       storage-provisioner
	493a3f26ca3a1       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   16 minutes ago      Running             coredns                   0                   ba37dc9e76c9b       coredns-7c65d6cfc9-qkbzv
	c95c262cabaf3       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   16 minutes ago      Running             coredns                   0                   ea14b34bae458       coredns-7c65d6cfc9-4d7pk
	a82c79f60ab5f       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   16 minutes ago      Running             kube-proxy                0                   793fc6b52aba3       kube-proxy-kqjdq
	317a14a66de31       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   17 minutes ago      Running             kube-apiserver            2                   b81443ee03f2b       kube-apiserver-default-k8s-diff-port-368295
	6a46b48d9fc2e       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   17 minutes ago      Running             kube-scheduler            2                   87de1a0c59c46       kube-scheduler-default-k8s-diff-port-368295
	3ed8ae1ddd989       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   17 minutes ago      Running             etcd                      2                   153f4fb3af3a9       etcd-default-k8s-diff-port-368295
	e2b78be2052d8       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   17 minutes ago      Running             kube-controller-manager   2                   e56247841d770       kube-controller-manager-default-k8s-diff-port-368295
	affe15a528d50       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   21 minutes ago      Exited              kube-apiserver            1                   852e5f549b2ab       kube-apiserver-default-k8s-diff-port-368295
	
	
	==> coredns [493a3f26ca3a150405205c99d2e70dd6bddf476d596254593a52a51bbf295de9] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [c95c262cabaf32c716f92f43c2647b266df1bbc4abd0aaaba87ab628ca61b7d8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-368295
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-368295
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=default-k8s-diff-port-368295
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_27T01_46_35_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 01:46:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-368295
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 02:03:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 02:02:01 +0000   Fri, 27 Sep 2024 01:46:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 02:02:01 +0000   Fri, 27 Sep 2024 01:46:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 02:02:01 +0000   Fri, 27 Sep 2024 01:46:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 02:02:01 +0000   Fri, 27 Sep 2024 01:46:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.83
	  Hostname:    default-k8s-diff-port-368295
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6bbfae71b1224951a97a9b446656b7e1
	  System UUID:                6bbfae71-b122-4951-a97a-9b446656b7e1
	  Boot ID:                    272a7df4-1ae5-4214-850e-73a937c641bd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-4d7pk                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 coredns-7c65d6cfc9-qkbzv                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 etcd-default-k8s-diff-port-368295                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kube-apiserver-default-k8s-diff-port-368295             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-368295    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-kqjdq                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-default-k8s-diff-port-368295             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 metrics-server-6867b74b74-d85zg                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         16m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16m                kube-proxy       
	  Normal  Starting                 17m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  17m (x8 over 17m)  kubelet          Node default-k8s-diff-port-368295 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m (x8 over 17m)  kubelet          Node default-k8s-diff-port-368295 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m (x7 over 17m)  kubelet          Node default-k8s-diff-port-368295 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 17m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  17m                kubelet          Node default-k8s-diff-port-368295 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m                kubelet          Node default-k8s-diff-port-368295 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m                kubelet          Node default-k8s-diff-port-368295 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           17m                node-controller  Node default-k8s-diff-port-368295 event: Registered Node default-k8s-diff-port-368295 in Controller
	
	
	==> dmesg <==
	[  +0.039704] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.109217] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.628859] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.603243] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.930049] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.060167] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060302] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.191836] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +0.161112] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[  +0.308731] systemd-fstab-generator[700]: Ignoring "noauto" option for root device
	[  +4.229591] systemd-fstab-generator[794]: Ignoring "noauto" option for root device
	[  +0.060343] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.858773] systemd-fstab-generator[916]: Ignoring "noauto" option for root device
	[  +5.514392] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.107459] kauditd_printk_skb: 85 callbacks suppressed
	[Sep27 01:42] kauditd_printk_skb: 2 callbacks suppressed
	[Sep27 01:46] systemd-fstab-generator[2567]: Ignoring "noauto" option for root device
	[  +0.069429] kauditd_printk_skb: 8 callbacks suppressed
	[  +6.515408] systemd-fstab-generator[2887]: Ignoring "noauto" option for root device
	[  +0.081312] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.310497] systemd-fstab-generator[3004]: Ignoring "noauto" option for root device
	[  +0.065663] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.156847] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [3ed8ae1ddd98912c9a6489cfeaeeacd29170e4315d9183670dfd43657f3748a2] <==
	{"level":"info","ts":"2024-09-27T01:56:30.035147Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":723,"took":"8.515681ms","hash":3816834789,"current-db-size-bytes":2265088,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":2265088,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-09-27T01:56:30.035241Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3816834789,"revision":723,"compact-revision":-1}
	{"level":"info","ts":"2024-09-27T02:01:30.032717Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":966}
	{"level":"info","ts":"2024-09-27T02:01:30.037158Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":966,"took":"4.091171ms","hash":4047094690,"current-db-size-bytes":2265088,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":1572864,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-09-27T02:01:30.037237Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4047094690,"revision":966,"compact-revision":723}
	{"level":"info","ts":"2024-09-27T02:01:40.756752Z","caller":"traceutil/trace.go:171","msg":"trace[700525348] transaction","detail":"{read_only:false; response_revision:1219; number_of_response:1; }","duration":"413.530153ms","start":"2024-09-27T02:01:40.343191Z","end":"2024-09-27T02:01:40.756721Z","steps":["trace[700525348] 'process raft request'  (duration: 413.398316ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T02:01:40.757413Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-27T02:01:40.343169Z","time spent":"413.679776ms","remote":"127.0.0.1:40290","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1217 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-09-27T02:02:17.389306Z","caller":"traceutil/trace.go:171","msg":"trace[2068986097] linearizableReadLoop","detail":"{readStateIndex:1456; appliedIndex:1455; }","duration":"284.700111ms","start":"2024-09-27T02:02:17.104578Z","end":"2024-09-27T02:02:17.389279Z","steps":["trace[2068986097] 'read index received'  (duration: 284.62405ms)","trace[2068986097] 'applied index is now lower than readState.Index'  (duration: 75.466µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-27T02:02:17.389435Z","caller":"traceutil/trace.go:171","msg":"trace[135469153] transaction","detail":"{read_only:false; response_revision:1250; number_of_response:1; }","duration":"427.999151ms","start":"2024-09-27T02:02:16.961424Z","end":"2024-09-27T02:02:17.389423Z","steps":["trace[135469153] 'process raft request'  (duration: 427.718275ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T02:02:17.389657Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-27T02:02:16.961408Z","time spent":"428.117848ms","remote":"127.0.0.1:40290","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1248 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-09-27T02:02:17.389806Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"264.146029ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-27T02:02:17.389946Z","caller":"traceutil/trace.go:171","msg":"trace[571753890] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1250; }","duration":"264.301308ms","start":"2024-09-27T02:02:17.125633Z","end":"2024-09-27T02:02:17.389935Z","steps":["trace[571753890] 'agreement among raft nodes before linearized reading'  (duration: 264.120384ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T02:02:17.390104Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"285.524286ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-27T02:02:17.390148Z","caller":"traceutil/trace.go:171","msg":"trace[1022997514] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1250; }","duration":"285.570278ms","start":"2024-09-27T02:02:17.104571Z","end":"2024-09-27T02:02:17.390142Z","steps":["trace[1022997514] 'agreement among raft nodes before linearized reading'  (duration: 285.513404ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T02:02:19.146047Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.587842ms","expected-duration":"100ms","prefix":"","request":"header:<ID:18072543094514519366 > lease_revoke:<id:7ace9231287a0ceb>","response":"size:28"}
	{"level":"info","ts":"2024-09-27T02:02:19.146252Z","caller":"traceutil/trace.go:171","msg":"trace[188441510] linearizableReadLoop","detail":"{readStateIndex:1457; appliedIndex:1456; }","duration":"162.67995ms","start":"2024-09-27T02:02:18.983557Z","end":"2024-09-27T02:02:19.146237Z","steps":["trace[188441510] 'read index received'  (duration: 30.608927ms)","trace[188441510] 'applied index is now lower than readState.Index'  (duration: 132.069542ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-27T02:02:19.146786Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"163.213621ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/networkpolicies/\" range_end:\"/registry/networkpolicies0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-27T02:02:19.146856Z","caller":"traceutil/trace.go:171","msg":"trace[997329461] range","detail":"{range_begin:/registry/networkpolicies/; range_end:/registry/networkpolicies0; response_count:0; response_revision:1250; }","duration":"163.294072ms","start":"2024-09-27T02:02:18.983551Z","end":"2024-09-27T02:02:19.146845Z","steps":["trace[997329461] 'agreement among raft nodes before linearized reading'  (duration: 163.183849ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T02:02:40.384956Z","caller":"traceutil/trace.go:171","msg":"trace[1981322120] transaction","detail":"{read_only:false; response_revision:1269; number_of_response:1; }","duration":"109.834697ms","start":"2024-09-27T02:02:40.275098Z","end":"2024-09-27T02:02:40.384933Z","steps":["trace[1981322120] 'process raft request'  (duration: 109.718354ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T02:02:41.657970Z","caller":"traceutil/trace.go:171","msg":"trace[1299844176] transaction","detail":"{read_only:false; response_revision:1270; number_of_response:1; }","duration":"115.21012ms","start":"2024-09-27T02:02:41.542745Z","end":"2024-09-27T02:02:41.657955Z","steps":["trace[1299844176] 'process raft request'  (duration: 115.089511ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T02:03:28.049233Z","caller":"traceutil/trace.go:171","msg":"trace[547785292] transaction","detail":"{read_only:false; response_revision:1308; number_of_response:1; }","duration":"133.155548ms","start":"2024-09-27T02:03:27.916058Z","end":"2024-09-27T02:03:28.049214Z","steps":["trace[547785292] 'process raft request'  (duration: 132.977267ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T02:03:28.255794Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"151.000811ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-27T02:03:28.255881Z","caller":"traceutil/trace.go:171","msg":"trace[1873915155] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1308; }","duration":"151.102365ms","start":"2024-09-27T02:03:28.104761Z","end":"2024-09-27T02:03:28.255863Z","steps":["trace[1873915155] 'range keys from in-memory index tree'  (duration: 150.977535ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T02:03:28.256443Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"134.563179ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-27T02:03:28.256794Z","caller":"traceutil/trace.go:171","msg":"trace[1333692799] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1308; }","duration":"134.929891ms","start":"2024-09-27T02:03:28.121841Z","end":"2024-09-27T02:03:28.256771Z","steps":["trace[1333692799] 'range keys from in-memory index tree'  (duration: 134.409843ms)"],"step_count":1}
	
	
	==> kernel <==
	 02:03:39 up 22 min,  0 users,  load average: 0.44, 0.24, 0.14
	Linux default-k8s-diff-port-368295 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [317a14a66de31f447e0c853921f869a3b565701e7d2523240ead500e6043ab77] <==
	I0927 01:59:32.646464       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0927 01:59:32.646618       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0927 02:01:31.645956       1 handler_proxy.go:99] no RequestInfo found in the context
	E0927 02:01:31.646098       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0927 02:01:32.648532       1 handler_proxy.go:99] no RequestInfo found in the context
	E0927 02:01:32.648628       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0927 02:01:32.648696       1 handler_proxy.go:99] no RequestInfo found in the context
	E0927 02:01:32.648757       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0927 02:01:32.649779       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0927 02:01:32.649803       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0927 02:02:32.650435       1 handler_proxy.go:99] no RequestInfo found in the context
	W0927 02:02:32.650634       1 handler_proxy.go:99] no RequestInfo found in the context
	E0927 02:02:32.650971       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0927 02:02:32.651095       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0927 02:02:32.652324       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0927 02:02:32.652408       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [affe15a528d50338e85e2c06b120a63cc862e5ccd6b9647eb338b8ed9bec8703] <==
	W0927 01:46:22.052149       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:46:22.078885       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:46:22.084348       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:46:22.115591       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:46:22.146592       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:46:22.147882       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:46:22.160349       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:46:22.211615       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:46:22.216152       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:46:22.280957       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:46:22.313200       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:46:22.354748       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:46:22.399889       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:46:22.403363       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:46:22.419364       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:46:22.422921       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:46:22.425415       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:46:22.456920       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:46:22.557599       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:46:22.639781       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:46:22.852447       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:46:22.857169       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:46:23.527722       1 logging.go:55] [core] [Channel #205 SubChannel #206]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:46:24.528614       1 logging.go:55] [core] [Channel #205 SubChannel #206]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 01:46:25.915595       1 logging.go:55] [core] [Channel #205 SubChannel #206]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [e2b78be2052d8d1d68373bad53e846c75956ac519504a629da3d1498b8646743] <==
	E0927 01:58:38.741277       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 01:58:39.227612       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0927 01:59:08.749043       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 01:59:09.236235       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0927 01:59:38.756446       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 01:59:39.244387       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0927 02:00:08.763166       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 02:00:09.252326       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0927 02:00:38.770198       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 02:00:39.260778       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0927 02:01:08.777780       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 02:01:09.269019       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0927 02:01:38.785320       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 02:01:39.277838       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0927 02:02:01.200074       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-368295"
	E0927 02:02:08.791860       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 02:02:09.287628       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0927 02:02:32.533998       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="271.824µs"
	E0927 02:02:38.798417       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 02:02:39.297813       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0927 02:02:46.527657       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="213.856µs"
	E0927 02:03:08.804836       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 02:03:09.309121       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0927 02:03:38.812436       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 02:03:39.316608       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [a82c79f60ab5f4067e751cb349b7dbfe1de7bf9e16412eaa9586c5e8c5d591aa] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0927 01:46:40.091844       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0927 01:46:40.107626       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.83"]
	E0927 01:46:40.107732       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0927 01:46:40.184320       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0927 01:46:40.184353       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0927 01:46:40.184377       1 server_linux.go:169] "Using iptables Proxier"
	I0927 01:46:40.186873       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0927 01:46:40.187162       1 server.go:483] "Version info" version="v1.31.1"
	I0927 01:46:40.187174       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 01:46:40.206564       1 config.go:199] "Starting service config controller"
	I0927 01:46:40.206601       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0927 01:46:40.206660       1 config.go:105] "Starting endpoint slice config controller"
	I0927 01:46:40.206665       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0927 01:46:40.207220       1 config.go:328] "Starting node config controller"
	I0927 01:46:40.207227       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0927 01:46:40.307362       1 shared_informer.go:320] Caches are synced for node config
	I0927 01:46:40.307419       1 shared_informer.go:320] Caches are synced for service config
	I0927 01:46:40.307522       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [6a46b48d9fc2ea9840922d7fa66637af8903c6811a8730ceda3091d4a0504e14] <==
	W0927 01:46:31.705014       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0927 01:46:31.705052       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 01:46:31.705448       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0927 01:46:31.705535       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 01:46:32.649278       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0927 01:46:32.649328       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 01:46:32.672416       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0927 01:46:32.672539       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0927 01:46:32.674971       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0927 01:46:32.675017       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 01:46:32.683626       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0927 01:46:32.683675       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 01:46:32.857182       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0927 01:46:32.857287       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 01:46:32.891119       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0927 01:46:32.891235       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 01:46:33.007204       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0927 01:46:33.007660       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0927 01:46:33.024206       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0927 01:46:33.024295       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0927 01:46:33.049815       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0927 01:46:33.051590       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0927 01:46:33.068223       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0927 01:46:33.068273       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0927 01:46:35.096130       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 27 02:02:34 default-k8s-diff-port-368295 kubelet[2894]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 27 02:02:34 default-k8s-diff-port-368295 kubelet[2894]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 27 02:02:34 default-k8s-diff-port-368295 kubelet[2894]: E0927 02:02:34.815246    2894 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402554814831972,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 02:02:34 default-k8s-diff-port-368295 kubelet[2894]: E0927 02:02:34.815273    2894 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402554814831972,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 02:02:44 default-k8s-diff-port-368295 kubelet[2894]: E0927 02:02:44.817126    2894 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402564816738490,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 02:02:44 default-k8s-diff-port-368295 kubelet[2894]: E0927 02:02:44.817620    2894 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402564816738490,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 02:02:46 default-k8s-diff-port-368295 kubelet[2894]: E0927 02:02:46.511785    2894 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-d85zg" podUID="579ae063-049c-423c-8f91-91fb4b32e4c3"
	Sep 27 02:02:54 default-k8s-diff-port-368295 kubelet[2894]: E0927 02:02:54.819653    2894 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402574819192400,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 02:02:54 default-k8s-diff-port-368295 kubelet[2894]: E0927 02:02:54.819698    2894 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402574819192400,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 02:03:00 default-k8s-diff-port-368295 kubelet[2894]: E0927 02:03:00.511705    2894 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-d85zg" podUID="579ae063-049c-423c-8f91-91fb4b32e4c3"
	Sep 27 02:03:04 default-k8s-diff-port-368295 kubelet[2894]: E0927 02:03:04.821819    2894 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402584821158632,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 02:03:04 default-k8s-diff-port-368295 kubelet[2894]: E0927 02:03:04.822120    2894 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402584821158632,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 02:03:14 default-k8s-diff-port-368295 kubelet[2894]: E0927 02:03:14.513855    2894 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-d85zg" podUID="579ae063-049c-423c-8f91-91fb4b32e4c3"
	Sep 27 02:03:14 default-k8s-diff-port-368295 kubelet[2894]: E0927 02:03:14.823527    2894 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402594823047459,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 02:03:14 default-k8s-diff-port-368295 kubelet[2894]: E0927 02:03:14.823611    2894 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402594823047459,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 02:03:24 default-k8s-diff-port-368295 kubelet[2894]: E0927 02:03:24.827176    2894 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402604826398439,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 02:03:24 default-k8s-diff-port-368295 kubelet[2894]: E0927 02:03:24.827221    2894 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402604826398439,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 02:03:29 default-k8s-diff-port-368295 kubelet[2894]: E0927 02:03:29.512351    2894 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-d85zg" podUID="579ae063-049c-423c-8f91-91fb4b32e4c3"
	Sep 27 02:03:34 default-k8s-diff-port-368295 kubelet[2894]: E0927 02:03:34.536726    2894 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 27 02:03:34 default-k8s-diff-port-368295 kubelet[2894]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 27 02:03:34 default-k8s-diff-port-368295 kubelet[2894]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 27 02:03:34 default-k8s-diff-port-368295 kubelet[2894]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 27 02:03:34 default-k8s-diff-port-368295 kubelet[2894]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 27 02:03:34 default-k8s-diff-port-368295 kubelet[2894]: E0927 02:03:34.829408    2894 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402614829011092,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 02:03:34 default-k8s-diff-port-368295 kubelet[2894]: E0927 02:03:34.829454    2894 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402614829011092,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [9b79b5c0a010e0cd81da04372248299d08c081b7ffa7928eb543c5c791c03aa6] <==
	I0927 01:46:41.611409       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0927 01:46:41.638071       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0927 01:46:41.638128       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0927 01:46:41.677793       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0927 01:46:41.678206       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-368295_3498d87f-28a5-46e1-a14c-367c4949a525!
	I0927 01:46:41.680665       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c1d1ddb1-990a-48fe-b592-04ca2cb062c6", APIVersion:"v1", ResourceVersion:"434", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-368295_3498d87f-28a5-46e1-a14c-367c4949a525 became leader
	I0927 01:46:41.778698       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-368295_3498d87f-28a5-46e1-a14c-367c4949a525!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-368295 -n default-k8s-diff-port-368295
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-368295 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-d85zg
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-368295 describe pod metrics-server-6867b74b74-d85zg
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-368295 describe pod metrics-server-6867b74b74-d85zg: exit status 1 (65.308265ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-d85zg" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-368295 describe pod metrics-server-6867b74b74-d85zg: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (464.88s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (160.35s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.129:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.129:8443: connect: connection refused
[... the WARNING line above repeated verbatim 21 more times; every poll of the apiserver at 192.168.72.129:8443 was refused ...]
E0927 02:00:10.486856   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/functional-774677/client.crt: no such file or directory" logger="UnhandledError"
[... the same WARNING repeated verbatim 53 more times until the 9m0s wait below expired ...]
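The WARNING run above is the test helper polling the apiserver for dashboard pods by label; with 192.168.72.129:8443 refusing connections, every poll fails until the deadline reported below. Purely as an illustration (not the test's own helper code), a minimal client-go sketch of the same label-selector query might look like this; the kubeconfig path is the one this job exports, and assuming its current context points at the old-k8s-version-612261 profile:

	// Hypothetical sketch: list dashboard pods by label, the same query as the
	// WARNING lines above (GET .../pods?labelSelector=k8s-app=kubernetes-dashboard).
	package main
	
	import (
		"context"
		"fmt"
		"log"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Assumption: the kubeconfig written for this job; context selection is whatever
		// current-context holds there.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19711-14935/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err != nil {
			// With the apiserver stopped, this is where "connection refused" surfaces.
			log.Fatalf("pod list failed: %v", err)
		}
		fmt.Printf("found %d dashboard pods\n", len(pods.Items))
	}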
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-612261 -n old-k8s-version-612261
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-612261 -n old-k8s-version-612261: exit status 2 (220.557506ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-612261" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-612261 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-612261 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.18µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-612261 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
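The assertion at start_stop_delete_test.go:297 expects the dashboard-metrics-scraper deployment to reference the custom image registry.k8s.io/echoserver:1.4, which was passed earlier via --images=MetricsScraper=. As a rough sketch only (again client-go, not the test's actual helper, with the same placeholder kubeconfig assumption), that kind of image check could look like:

	// Hypothetical sketch: fetch the deployment and look for the expected image
	// substring, mirroring the "addon did not load correct image" assertion above.
	package main
	
	import (
		"context"
		"fmt"
		"log"
		"strings"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19711-14935/kubeconfig") // assumption
		if err != nil {
			log.Fatal(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		dep, err := client.AppsV1().Deployments("kubernetes-dashboard").Get(context.TODO(),
			"dashboard-metrics-scraper", metav1.GetOptions{})
		if err != nil {
			// Expected to fail here while the apiserver is down, as in the run above.
			log.Fatalf("get deployment failed: %v", err)
		}
		for _, c := range dep.Spec.Template.Spec.Containers {
			if strings.Contains(c.Image, "registry.k8s.io/echoserver:1.4") {
				fmt.Println("custom scraper image found:", c.Image)
				return
			}
		}
		log.Fatal("deployment does not reference the expected echoserver image")
	}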
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-612261 -n old-k8s-version-612261
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-612261 -n old-k8s-version-612261: exit status 2 (223.433362ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-612261 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-612261 logs -n 25: (1.655895195s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p NoKubernetes-719096 sudo                            | NoKubernetes-719096          | jenkins | v1.34.0 | 27 Sep 24 01:32 UTC |                     |
	|         | systemctl is-active --quiet                            |                              |         |         |                     |                     |
	|         | service kubelet                                        |                              |         |         |                     |                     |
	| stop    | -p NoKubernetes-719096                                 | NoKubernetes-719096          | jenkins | v1.34.0 | 27 Sep 24 01:32 UTC | 27 Sep 24 01:32 UTC |
	| start   | -p NoKubernetes-719096                                 | NoKubernetes-719096          | jenkins | v1.34.0 | 27 Sep 24 01:32 UTC | 27 Sep 24 01:33 UTC |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| ssh     | -p NoKubernetes-719096 sudo                            | NoKubernetes-719096          | jenkins | v1.34.0 | 27 Sep 24 01:33 UTC |                     |
	|         | systemctl is-active --quiet                            |                              |         |         |                     |                     |
	|         | service kubelet                                        |                              |         |         |                     |                     |
	| delete  | -p NoKubernetes-719096                                 | NoKubernetes-719096          | jenkins | v1.34.0 | 27 Sep 24 01:33 UTC | 27 Sep 24 01:33 UTC |
	| start   | -p embed-certs-245911                                  | embed-certs-245911           | jenkins | v1.34.0 | 27 Sep 24 01:33 UTC | 27 Sep 24 01:34 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-521072             | no-preload-521072            | jenkins | v1.34.0 | 27 Sep 24 01:33 UTC | 27 Sep 24 01:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-521072                                   | no-preload-521072            | jenkins | v1.34.0 | 27 Sep 24 01:33 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-595331                              | cert-expiration-595331       | jenkins | v1.34.0 | 27 Sep 24 01:33 UTC | 27 Sep 24 01:33 UTC |
	| delete  | -p                                                     | disable-driver-mounts-630210 | jenkins | v1.34.0 | 27 Sep 24 01:33 UTC | 27 Sep 24 01:33 UTC |
	|         | disable-driver-mounts-630210                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-368295 | jenkins | v1.34.0 | 27 Sep 24 01:33 UTC | 27 Sep 24 01:35 UTC |
	|         | default-k8s-diff-port-368295                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-245911            | embed-certs-245911           | jenkins | v1.34.0 | 27 Sep 24 01:34 UTC | 27 Sep 24 01:34 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-245911                                  | embed-certs-245911           | jenkins | v1.34.0 | 27 Sep 24 01:34 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-368295  | default-k8s-diff-port-368295 | jenkins | v1.34.0 | 27 Sep 24 01:35 UTC | 27 Sep 24 01:35 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-368295 | jenkins | v1.34.0 | 27 Sep 24 01:35 UTC |                     |
	|         | default-k8s-diff-port-368295                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-521072                  | no-preload-521072            | jenkins | v1.34.0 | 27 Sep 24 01:35 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-612261        | old-k8s-version-612261       | jenkins | v1.34.0 | 27 Sep 24 01:35 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p no-preload-521072                                   | no-preload-521072            | jenkins | v1.34.0 | 27 Sep 24 01:35 UTC | 27 Sep 24 01:46 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-245911                 | embed-certs-245911           | jenkins | v1.34.0 | 27 Sep 24 01:37 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-612261                              | old-k8s-version-612261       | jenkins | v1.34.0 | 27 Sep 24 01:37 UTC | 27 Sep 24 01:37 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-245911                                  | embed-certs-245911           | jenkins | v1.34.0 | 27 Sep 24 01:37 UTC | 27 Sep 24 01:46 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-612261             | old-k8s-version-612261       | jenkins | v1.34.0 | 27 Sep 24 01:37 UTC | 27 Sep 24 01:37 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-612261                              | old-k8s-version-612261       | jenkins | v1.34.0 | 27 Sep 24 01:37 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-368295       | default-k8s-diff-port-368295 | jenkins | v1.34.0 | 27 Sep 24 01:37 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-368295 | jenkins | v1.34.0 | 27 Sep 24 01:37 UTC | 27 Sep 24 01:46 UTC |
	|         | default-k8s-diff-port-368295                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 01:37:48
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 01:37:48.335921   69534 out.go:345] Setting OutFile to fd 1 ...
	I0927 01:37:48.336188   69534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 01:37:48.336196   69534 out.go:358] Setting ErrFile to fd 2...
	I0927 01:37:48.336201   69534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 01:37:48.336368   69534 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-14935/.minikube/bin
	I0927 01:37:48.336901   69534 out.go:352] Setting JSON to false
	I0927 01:37:48.337754   69534 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8413,"bootTime":1727392655,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 01:37:48.337841   69534 start.go:139] virtualization: kvm guest
	I0927 01:37:48.340035   69534 out.go:177] * [default-k8s-diff-port-368295] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0927 01:37:48.341151   69534 out.go:177]   - MINIKUBE_LOCATION=19711
	I0927 01:37:48.341211   69534 notify.go:220] Checking for updates...
	I0927 01:37:48.343607   69534 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 01:37:48.344933   69534 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 01:37:48.346113   69534 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-14935/.minikube
	I0927 01:37:48.347142   69534 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0927 01:37:48.348261   69534 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 01:37:48.349842   69534 config.go:182] Loaded profile config "default-k8s-diff-port-368295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 01:37:48.350212   69534 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:37:48.350278   69534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:37:48.365272   69534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44347
	I0927 01:37:48.365662   69534 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:37:48.366137   69534 main.go:141] libmachine: Using API Version  1
	I0927 01:37:48.366162   69534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:37:48.366548   69534 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:37:48.366713   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:37:48.366938   69534 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 01:37:48.367236   69534 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:37:48.367265   69534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:37:48.381678   69534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39857
	I0927 01:37:48.382169   69534 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:37:48.382627   69534 main.go:141] libmachine: Using API Version  1
	I0927 01:37:48.382650   69534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:37:48.382911   69534 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:37:48.383023   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:37:48.415092   69534 out.go:177] * Using the kvm2 driver based on existing profile
	I0927 01:37:48.416340   69534 start.go:297] selected driver: kvm2
	I0927 01:37:48.416354   69534 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-368295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-368295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.83 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks
:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 01:37:48.416459   69534 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 01:37:48.417093   69534 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 01:37:48.417164   69534 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19711-14935/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0927 01:37:48.432138   69534 install.go:137] /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I0927 01:37:48.432534   69534 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 01:37:48.432563   69534 cni.go:84] Creating CNI manager for ""
	I0927 01:37:48.432604   69534 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:37:48.432635   69534 start.go:340] cluster config:
	{Name:default-k8s-diff-port-368295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-368295 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.83 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-h
ost Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 01:37:48.432737   69534 iso.go:125] acquiring lock: {Name:mkc202a14fbe20838e31e7efc444c4f65351f9ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 01:37:48.435057   69534 out.go:177] * Starting "default-k8s-diff-port-368295" primary control-plane node in "default-k8s-diff-port-368295" cluster
	I0927 01:37:48.436502   69534 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 01:37:48.436543   69534 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0927 01:37:48.436557   69534 cache.go:56] Caching tarball of preloaded images
	I0927 01:37:48.436624   69534 preload.go:172] Found /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0927 01:37:48.436634   69534 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0927 01:37:48.436718   69534 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/config.json ...
	I0927 01:37:48.436885   69534 start.go:360] acquireMachinesLock for default-k8s-diff-port-368295: {Name:mkef866a3f2eb57063758ef06140cca3a2e40b18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 01:37:50.823565   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:37:53.895575   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:37:59.975554   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:03.047567   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:09.127558   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:12.199592   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:18.279516   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:21.351643   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:27.435515   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:30.503604   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:36.583590   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:39.655593   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:45.735581   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:48.807587   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:54.887542   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:38:57.959601   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:04.039570   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:07.111555   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:13.191559   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:16.263625   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:22.343607   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:25.415561   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:31.495531   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:34.567598   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:40.647577   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:43.719602   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:49.799620   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:52.871596   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:39:58.951600   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:40:02.023635   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:40:08.103596   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:40:11.175614   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:40:17.255583   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:40:20.327522   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:40:26.407598   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
	I0927 01:40:29.479580   68676 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.246:22: connect: no route to host
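	(The long run of "Error dialing TCP ... no route to host" messages above is the libmachine driver repeatedly probing the guest's SSH port while the VM is unreachable. A minimal Go sketch of that dial-until-reachable pattern is shown below; it is illustrative only and is not minikube's actual driver code, and the 3-second intervals and function name are assumptions.)

	package driverutil

	import (
		"fmt"
		"net"
		"time"
	)

	// waitForSSH polls the guest's SSH port until a TCP connection succeeds or
	// the deadline expires, mirroring the repeated dial attempts in the log.
	func waitForSSH(addr string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			fmt.Printf("Error dialing TCP: %v, retrying\n", err)
			time.Sleep(3 * time.Second)
		}
		return fmt.Errorf("ssh on %s not reachable after %s", addr, timeout)
	}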
	I0927 01:40:32.484148   69234 start.go:364] duration metric: took 3m6.827897292s to acquireMachinesLock for "embed-certs-245911"
	I0927 01:40:32.484202   69234 start.go:96] Skipping create...Using existing machine configuration
	I0927 01:40:32.484210   69234 fix.go:54] fixHost starting: 
	I0927 01:40:32.484708   69234 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:40:32.484758   69234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:40:32.500356   69234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41925
	I0927 01:40:32.500869   69234 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:40:32.501356   69234 main.go:141] libmachine: Using API Version  1
	I0927 01:40:32.501376   69234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:40:32.501678   69234 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:40:32.501872   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	I0927 01:40:32.502014   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetState
	I0927 01:40:32.503863   69234 fix.go:112] recreateIfNeeded on embed-certs-245911: state=Stopped err=<nil>
	I0927 01:40:32.503884   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	W0927 01:40:32.504047   69234 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 01:40:32.506829   69234 out.go:177] * Restarting existing kvm2 VM for "embed-certs-245911" ...
	I0927 01:40:32.481407   68676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 01:40:32.481445   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetMachineName
	I0927 01:40:32.481786   68676 buildroot.go:166] provisioning hostname "no-preload-521072"
	I0927 01:40:32.481815   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetMachineName
	I0927 01:40:32.482031   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:40:32.483999   68676 machine.go:96] duration metric: took 4m37.428764548s to provisionDockerMachine
	I0927 01:40:32.484048   68676 fix.go:56] duration metric: took 4m37.449461246s for fixHost
	I0927 01:40:32.484055   68676 start.go:83] releasing machines lock for "no-preload-521072", held for 4m37.449534693s
	W0927 01:40:32.484075   68676 start.go:714] error starting host: provision: host is not running
	W0927 01:40:32.484176   68676 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0927 01:40:32.484183   68676 start.go:729] Will try again in 5 seconds ...
	I0927 01:40:32.508417   69234 main.go:141] libmachine: (embed-certs-245911) Calling .Start
	I0927 01:40:32.508598   69234 main.go:141] libmachine: (embed-certs-245911) Ensuring networks are active...
	I0927 01:40:32.509477   69234 main.go:141] libmachine: (embed-certs-245911) Ensuring network default is active
	I0927 01:40:32.509830   69234 main.go:141] libmachine: (embed-certs-245911) Ensuring network mk-embed-certs-245911 is active
	I0927 01:40:32.510208   69234 main.go:141] libmachine: (embed-certs-245911) Getting domain xml...
	I0927 01:40:32.510838   69234 main.go:141] libmachine: (embed-certs-245911) Creating domain...
	I0927 01:40:33.718381   69234 main.go:141] libmachine: (embed-certs-245911) Waiting to get IP...
	I0927 01:40:33.719223   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:33.719554   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:33.719611   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:33.719550   70125 retry.go:31] will retry after 265.21442ms: waiting for machine to come up
	I0927 01:40:33.986199   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:33.986700   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:33.986728   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:33.986658   70125 retry.go:31] will retry after 308.926274ms: waiting for machine to come up
	I0927 01:40:34.297317   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:34.297734   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:34.297755   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:34.297697   70125 retry.go:31] will retry after 466.52815ms: waiting for machine to come up
	I0927 01:40:34.765171   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:34.765616   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:34.765643   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:34.765570   70125 retry.go:31] will retry after 510.417499ms: waiting for machine to come up
	I0927 01:40:35.277175   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:35.277547   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:35.277576   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:35.277488   70125 retry.go:31] will retry after 522.865286ms: waiting for machine to come up
	I0927 01:40:37.485696   68676 start.go:360] acquireMachinesLock for no-preload-521072: {Name:mkef866a3f2eb57063758ef06140cca3a2e40b18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 01:40:35.802177   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:35.802620   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:35.802646   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:35.802584   70125 retry.go:31] will retry after 611.490499ms: waiting for machine to come up
	I0927 01:40:36.415249   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:36.415733   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:36.415793   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:36.415709   70125 retry.go:31] will retry after 744.420766ms: waiting for machine to come up
	I0927 01:40:37.161647   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:37.162076   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:37.162112   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:37.162022   70125 retry.go:31] will retry after 1.464523837s: waiting for machine to come up
	I0927 01:40:38.627935   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:38.628275   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:38.628302   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:38.628237   70125 retry.go:31] will retry after 1.840524237s: waiting for machine to come up
	I0927 01:40:40.471433   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:40.471823   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:40.471851   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:40.471781   70125 retry.go:31] will retry after 1.9424331s: waiting for machine to come up
	I0927 01:40:42.416527   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:42.416978   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:42.417007   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:42.416935   70125 retry.go:31] will retry after 2.553410529s: waiting for machine to come up
	I0927 01:40:44.973083   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:44.973446   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:44.973465   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:44.973402   70125 retry.go:31] will retry after 3.286267983s: waiting for machine to come up
	I0927 01:40:48.260792   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:48.261216   69234 main.go:141] libmachine: (embed-certs-245911) DBG | unable to find current IP address of domain embed-certs-245911 in network mk-embed-certs-245911
	I0927 01:40:48.261241   69234 main.go:141] libmachine: (embed-certs-245911) DBG | I0927 01:40:48.261179   70125 retry.go:31] will retry after 3.302667041s: waiting for machine to come up
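	(The retry.go lines above wait for the restarted VM to obtain a DHCP lease, with delays that roughly double on each attempt plus jitter: 265ms, 308ms, 466ms, ... 3.3s. The sketch below shows that backoff-with-jitter pattern in Go; the function name, attempt count, and doubling factor are assumptions for illustration, not minikube's retry.go.)

	package driverutil

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff calls check until it succeeds or attempts run out,
	// sleeping roughly twice as long after each failure, with added jitter.
	func retryWithBackoff(check func() error, attempts int, base time.Duration) error {
		wait := base
		for i := 0; i < attempts; i++ {
			if err := check(); err == nil {
				return nil
			}
			jitter := time.Duration(rand.Int63n(int64(wait) / 2))
			sleep := wait + jitter
			fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			wait *= 2
		}
		return errors.New("machine did not come up in time")
	}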
	I0927 01:40:52.800240   69333 start.go:364] duration metric: took 3m25.347970249s to acquireMachinesLock for "old-k8s-version-612261"
	I0927 01:40:52.800310   69333 start.go:96] Skipping create...Using existing machine configuration
	I0927 01:40:52.800317   69333 fix.go:54] fixHost starting: 
	I0927 01:40:52.800742   69333 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:40:52.800800   69333 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:40:52.818217   69333 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45095
	I0927 01:40:52.818644   69333 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:40:52.819065   69333 main.go:141] libmachine: Using API Version  1
	I0927 01:40:52.819086   69333 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:40:52.819408   69333 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:40:52.819544   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	I0927 01:40:52.819646   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetState
	I0927 01:40:52.820921   69333 fix.go:112] recreateIfNeeded on old-k8s-version-612261: state=Stopped err=<nil>
	I0927 01:40:52.820956   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	W0927 01:40:52.821110   69333 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 01:40:52.823209   69333 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-612261" ...
	I0927 01:40:51.567691   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.568205   69234 main.go:141] libmachine: (embed-certs-245911) Found IP for machine: 192.168.39.158
	I0927 01:40:51.568241   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has current primary IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.568250   69234 main.go:141] libmachine: (embed-certs-245911) Reserving static IP address...
	I0927 01:40:51.568731   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "embed-certs-245911", mac: "52:54:00:bd:42:a3", ip: "192.168.39.158"} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:51.568764   69234 main.go:141] libmachine: (embed-certs-245911) DBG | skip adding static IP to network mk-embed-certs-245911 - found existing host DHCP lease matching {name: "embed-certs-245911", mac: "52:54:00:bd:42:a3", ip: "192.168.39.158"}
	I0927 01:40:51.568781   69234 main.go:141] libmachine: (embed-certs-245911) Reserved static IP address: 192.168.39.158
	I0927 01:40:51.568798   69234 main.go:141] libmachine: (embed-certs-245911) Waiting for SSH to be available...
	I0927 01:40:51.568806   69234 main.go:141] libmachine: (embed-certs-245911) DBG | Getting to WaitForSSH function...
	I0927 01:40:51.570819   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.571139   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:51.571167   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.571321   69234 main.go:141] libmachine: (embed-certs-245911) DBG | Using SSH client type: external
	I0927 01:40:51.571370   69234 main.go:141] libmachine: (embed-certs-245911) DBG | Using SSH private key: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/embed-certs-245911/id_rsa (-rw-------)
	I0927 01:40:51.571401   69234 main.go:141] libmachine: (embed-certs-245911) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.158 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19711-14935/.minikube/machines/embed-certs-245911/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 01:40:51.571414   69234 main.go:141] libmachine: (embed-certs-245911) DBG | About to run SSH command:
	I0927 01:40:51.571422   69234 main.go:141] libmachine: (embed-certs-245911) DBG | exit 0
	I0927 01:40:51.691525   69234 main.go:141] libmachine: (embed-certs-245911) DBG | SSH cmd err, output: <nil>: 
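	(The "Using SSH client type: external" lines above show the driver shelling out to /usr/bin/ssh with a long option list and running "exit 0" to confirm the guest accepts logins. A small Go sketch of that external probe follows; the flag set mirrors the log but should be read as illustrative rather than the driver's canonical invocation.)

	package driverutil

	import "os/exec"

	// probeSSH runs an external ssh command against the guest and reports
	// whether "exit 0" succeeds, i.e. whether SSH is available.
	func probeSSH(user, host, keyPath string) error {
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"-p", "22",
			user + "@" + host,
			"exit 0",
		}
		return exec.Command("/usr/bin/ssh", args...).Run()
	}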
	I0927 01:40:51.691953   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetConfigRaw
	I0927 01:40:51.692573   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetIP
	I0927 01:40:51.695121   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.695541   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:51.695572   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.695871   69234 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/embed-certs-245911/config.json ...
	I0927 01:40:51.696087   69234 machine.go:93] provisionDockerMachine start ...
	I0927 01:40:51.696109   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	I0927 01:40:51.696312   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:40:51.698740   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.699086   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:51.699112   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.699229   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:40:51.699415   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:51.699552   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:51.699679   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:40:51.699810   69234 main.go:141] libmachine: Using SSH client type: native
	I0927 01:40:51.699998   69234 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0927 01:40:51.700011   69234 main.go:141] libmachine: About to run SSH command:
	hostname
	I0927 01:40:51.799534   69234 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0927 01:40:51.799559   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetMachineName
	I0927 01:40:51.799764   69234 buildroot.go:166] provisioning hostname "embed-certs-245911"
	I0927 01:40:51.799792   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetMachineName
	I0927 01:40:51.799987   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:40:51.802464   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.802819   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:51.802844   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.802960   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:40:51.803131   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:51.803290   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:51.803502   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:40:51.803672   69234 main.go:141] libmachine: Using SSH client type: native
	I0927 01:40:51.803868   69234 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0927 01:40:51.803888   69234 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-245911 && echo "embed-certs-245911" | sudo tee /etc/hostname
	I0927 01:40:51.917988   69234 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-245911
	
	I0927 01:40:51.918019   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:40:51.920484   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.920800   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:51.920831   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:51.921041   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:40:51.921224   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:51.921383   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:51.921511   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:40:51.921693   69234 main.go:141] libmachine: Using SSH client type: native
	I0927 01:40:51.921883   69234 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0927 01:40:51.921901   69234 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-245911' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-245911/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-245911' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 01:40:52.028582   69234 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 01:40:52.028609   69234 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19711-14935/.minikube CaCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19711-14935/.minikube}
	I0927 01:40:52.028672   69234 buildroot.go:174] setting up certificates
	I0927 01:40:52.028686   69234 provision.go:84] configureAuth start
	I0927 01:40:52.028704   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetMachineName
	I0927 01:40:52.029001   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetIP
	I0927 01:40:52.031742   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.032088   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:52.032117   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.032273   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:40:52.034392   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.034733   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:52.034754   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.034905   69234 provision.go:143] copyHostCerts
	I0927 01:40:52.034956   69234 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem, removing ...
	I0927 01:40:52.034969   69234 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem
	I0927 01:40:52.035042   69234 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem (1078 bytes)
	I0927 01:40:52.035172   69234 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem, removing ...
	I0927 01:40:52.035185   69234 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem
	I0927 01:40:52.035224   69234 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem (1123 bytes)
	I0927 01:40:52.035319   69234 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem, removing ...
	I0927 01:40:52.035329   69234 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem
	I0927 01:40:52.035363   69234 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem (1675 bytes)
	I0927 01:40:52.035433   69234 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem org=jenkins.embed-certs-245911 san=[127.0.0.1 192.168.39.158 embed-certs-245911 localhost minikube]
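	(provision.go:117 above issues a server certificate signed by the local CA with SANs covering 127.0.0.1, the node IP, the machine name, localhost, and minikube. The Go sketch below shows the same x509 mechanics with the standard library; newServerCert is a hypothetical helper, not minikube's provision code, and the three-year validity is an assumption.)

	package provisionutil

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"time"
	)

	// newServerCert creates an RSA key pair and a server certificate signed by
	// the given CA, adding each host as an IP or DNS SAN as appropriate.
	func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, org string, hosts []string) (certPEM, keyPEM []byte, err error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{org}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		for _, h := range hosts {
			if ip := net.ParseIP(h); ip != nil {
				tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
			} else {
				tmpl.DNSNames = append(tmpl.DNSNames, h)
			}
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		if err != nil {
			return nil, nil, err
		}
		certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
		keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
		return certPEM, keyPEM, nil
	}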
	I0927 01:40:52.206591   69234 provision.go:177] copyRemoteCerts
	I0927 01:40:52.206657   69234 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 01:40:52.206724   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:40:52.209445   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.209770   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:52.209792   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.209995   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:40:52.210234   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:52.210416   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:40:52.210578   69234 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/embed-certs-245911/id_rsa Username:docker}
	I0927 01:40:52.290176   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0927 01:40:52.313645   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0927 01:40:52.336446   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0927 01:40:52.359182   69234 provision.go:87] duration metric: took 330.481958ms to configureAuth
	I0927 01:40:52.359214   69234 buildroot.go:189] setting minikube options for container-runtime
	I0927 01:40:52.359464   69234 config.go:182] Loaded profile config "embed-certs-245911": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 01:40:52.359551   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:40:52.362163   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.362488   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:52.362513   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.362670   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:40:52.362826   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:52.362976   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:52.363133   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:40:52.363334   69234 main.go:141] libmachine: Using SSH client type: native
	I0927 01:40:52.363532   69234 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0927 01:40:52.363553   69234 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 01:40:52.574326   69234 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 01:40:52.574354   69234 machine.go:96] duration metric: took 878.253718ms to provisionDockerMachine
	I0927 01:40:52.574368   69234 start.go:293] postStartSetup for "embed-certs-245911" (driver="kvm2")
	I0927 01:40:52.574381   69234 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 01:40:52.574398   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	I0927 01:40:52.574688   69234 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 01:40:52.574714   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:40:52.577727   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.578035   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:52.578060   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.578227   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:40:52.578411   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:52.578555   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:40:52.578735   69234 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/embed-certs-245911/id_rsa Username:docker}
	I0927 01:40:52.658636   69234 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 01:40:52.663048   69234 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 01:40:52.663077   69234 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/addons for local assets ...
	I0927 01:40:52.663147   69234 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/files for local assets ...
	I0927 01:40:52.663223   69234 filesync.go:149] local asset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> 221382.pem in /etc/ssl/certs
	I0927 01:40:52.663322   69234 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 01:40:52.673347   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /etc/ssl/certs/221382.pem (1708 bytes)
	I0927 01:40:52.697092   69234 start.go:296] duration metric: took 122.71069ms for postStartSetup
	I0927 01:40:52.697126   69234 fix.go:56] duration metric: took 20.212915975s for fixHost
	I0927 01:40:52.697145   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:40:52.699817   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.700173   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:52.700202   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.700364   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:40:52.700558   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:52.700735   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:52.700921   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:40:52.701097   69234 main.go:141] libmachine: Using SSH client type: native
	I0927 01:40:52.701269   69234 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0927 01:40:52.701285   69234 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 01:40:52.800080   69234 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727401252.775762391
	
	I0927 01:40:52.800102   69234 fix.go:216] guest clock: 1727401252.775762391
	I0927 01:40:52.800111   69234 fix.go:229] Guest: 2024-09-27 01:40:52.775762391 +0000 UTC Remote: 2024-09-27 01:40:52.697129165 +0000 UTC m=+207.179045808 (delta=78.633226ms)
	I0927 01:40:52.800145   69234 fix.go:200] guest clock delta is within tolerance: 78.633226ms
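	(fix.go above reads the guest clock by running `date +%s.%N` over SSH and compares it with the host clock; here the 78ms delta is within tolerance. A minimal parsing sketch follows; the function names and the tolerance argument are assumptions for illustration.)

	package clockcheck

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// guestClockDelta parses `date +%s.%N` output captured from the guest and
	// returns how far the guest clock is ahead of (or behind) the host clock.
	func guestClockDelta(guestOutput string, host time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(strings.TrimSpace(guestOutput), 64)
		if err != nil {
			return 0, fmt.Errorf("parse guest time: %w", err)
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return guest.Sub(host), nil
	}

	// withinTolerance reports whether the absolute clock delta is acceptable.
	func withinTolerance(delta, tolerance time.Duration) bool {
		if delta < 0 {
			delta = -delta
		}
		return delta <= tolerance
	}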
	I0927 01:40:52.800152   69234 start.go:83] releasing machines lock for "embed-certs-245911", held for 20.315972034s
	I0927 01:40:52.800183   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	I0927 01:40:52.800495   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetIP
	I0927 01:40:52.803196   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.803657   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:52.803700   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.803874   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	I0927 01:40:52.804419   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	I0927 01:40:52.804610   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	I0927 01:40:52.804731   69234 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 01:40:52.804771   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:40:52.804813   69234 ssh_runner.go:195] Run: cat /version.json
	I0927 01:40:52.804837   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:40:52.807320   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.807346   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.807680   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:52.807731   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:52.807759   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.807807   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:52.807916   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:40:52.808070   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:40:52.808150   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:52.808262   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:40:52.808331   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:40:52.808384   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:40:52.808468   69234 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/embed-certs-245911/id_rsa Username:docker}
	I0927 01:40:52.808522   69234 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/embed-certs-245911/id_rsa Username:docker}
	I0927 01:40:52.908963   69234 ssh_runner.go:195] Run: systemctl --version
	I0927 01:40:52.915158   69234 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 01:40:53.067605   69234 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 01:40:53.074171   69234 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 01:40:53.074241   69234 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 01:40:53.091718   69234 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0927 01:40:53.091742   69234 start.go:495] detecting cgroup driver to use...
	I0927 01:40:53.091813   69234 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 01:40:53.108730   69234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 01:40:53.122920   69234 docker.go:217] disabling cri-docker service (if available) ...
	I0927 01:40:53.122984   69234 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 01:40:53.137487   69234 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 01:40:53.152420   69234 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 01:40:53.269491   69234 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 01:40:53.417893   69234 docker.go:233] disabling docker service ...
	I0927 01:40:53.417951   69234 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 01:40:53.442201   69234 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 01:40:53.459920   69234 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 01:40:53.589768   69234 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 01:40:53.719203   69234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 01:40:53.733145   69234 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 01:40:53.751853   69234 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0927 01:40:53.751919   69234 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:40:53.763230   69234 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 01:40:53.763294   69234 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:40:53.774864   69234 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:40:53.786149   69234 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:40:53.797167   69234 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 01:40:53.808495   69234 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:40:53.819285   69234 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:40:53.838497   69234 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:40:53.850490   69234 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 01:40:53.860309   69234 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0927 01:40:53.860377   69234 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0927 01:40:53.875533   69234 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 01:40:53.885752   69234 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:40:54.014352   69234 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0927 01:40:54.107866   69234 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 01:40:54.107926   69234 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 01:40:54.113206   69234 start.go:563] Will wait 60s for crictl version
	I0927 01:40:54.113256   69234 ssh_runner.go:195] Run: which crictl
	I0927 01:40:54.117229   69234 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 01:40:54.156365   69234 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
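	(After `sudo systemctl restart crio`, the log waits up to 60s for the CRI socket to reappear before querying crictl. A small polling sketch in Go is shown below; the poll interval and function name are illustrative assumptions.)

	package criwait

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls for the CRI socket path until it exists or the
	// timeout elapses, matching the "Will wait 60s for socket path" step above.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("socket %s did not appear within %s", path, timeout)
	}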
	I0927 01:40:54.156459   69234 ssh_runner.go:195] Run: crio --version
	I0927 01:40:54.183974   69234 ssh_runner.go:195] Run: crio --version
	I0927 01:40:54.214440   69234 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0927 01:40:54.215714   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetIP
	I0927 01:40:54.218624   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:54.218975   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:40:54.219013   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:40:54.219180   69234 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0927 01:40:54.223450   69234 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 01:40:54.236761   69234 kubeadm.go:883] updating cluster {Name:embed-certs-245911 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-245911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.158 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 01:40:54.236923   69234 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 01:40:54.236989   69234 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 01:40:54.276635   69234 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0927 01:40:54.276708   69234 ssh_runner.go:195] Run: which lz4
	I0927 01:40:54.281055   69234 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0927 01:40:54.285439   69234 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0927 01:40:54.285472   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
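	(The sequence above stats /preloaded.tar.lz4 on the guest, and only when that check fails does it copy the ~388MB preloaded image tarball over. The sketch below shows that check-then-copy flow using plain ssh/scp; it is a stand-in for minikube's ssh_runner transfer, not its API.)

	package preload

	import (
		"fmt"
		"os/exec"
	)

	// ensurePreload copies the preloaded tarball to the guest only if the
	// remote stat fails, i.e. the file is not already present.
	func ensurePreload(sshTarget, keyPath, localTar string) error {
		check := exec.Command("/usr/bin/ssh", "-i", keyPath, sshTarget,
			"stat -c '%s %y' /preloaded.tar.lz4")
		if err := check.Run(); err == nil {
			return nil // already present, nothing to copy
		}
		cp := exec.Command("/usr/bin/scp", "-i", keyPath, localTar,
			fmt.Sprintf("%s:/preloaded.tar.lz4", sshTarget))
		return cp.Run()
	}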
	I0927 01:40:52.824650   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .Start
	I0927 01:40:52.824802   69333 main.go:141] libmachine: (old-k8s-version-612261) Ensuring networks are active...
	I0927 01:40:52.825590   69333 main.go:141] libmachine: (old-k8s-version-612261) Ensuring network default is active
	I0927 01:40:52.825908   69333 main.go:141] libmachine: (old-k8s-version-612261) Ensuring network mk-old-k8s-version-612261 is active
	I0927 01:40:52.826326   69333 main.go:141] libmachine: (old-k8s-version-612261) Getting domain xml...
	I0927 01:40:52.827108   69333 main.go:141] libmachine: (old-k8s-version-612261) Creating domain...
	I0927 01:40:54.071322   69333 main.go:141] libmachine: (old-k8s-version-612261) Waiting to get IP...
	I0927 01:40:54.072357   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:40:54.072756   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:40:54.072821   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:40:54.072738   70279 retry.go:31] will retry after 264.648837ms: waiting for machine to come up
	I0927 01:40:54.339366   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:40:54.339799   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:40:54.339827   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:40:54.339731   70279 retry.go:31] will retry after 343.432635ms: waiting for machine to come up
	I0927 01:40:54.685260   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:40:54.685746   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:40:54.685780   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:40:54.685714   70279 retry.go:31] will retry after 455.276623ms: waiting for machine to come up
	I0927 01:40:55.142206   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:40:55.142679   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:40:55.142708   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:40:55.142637   70279 retry.go:31] will retry after 419.074502ms: waiting for machine to come up
	I0927 01:40:55.563324   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:40:55.565342   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:40:55.565368   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:40:55.565287   70279 retry.go:31] will retry after 587.161471ms: waiting for machine to come up
	I0927 01:40:56.154584   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:40:56.155182   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:40:56.155220   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:40:56.155109   70279 retry.go:31] will retry after 782.426926ms: waiting for machine to come up
	I0927 01:40:56.938784   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:40:56.939201   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:40:56.939228   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:40:56.939132   70279 retry.go:31] will retry after 781.231902ms: waiting for machine to come up
	I0927 01:40:55.723619   69234 crio.go:462] duration metric: took 1.442589436s to copy over tarball
	I0927 01:40:55.723705   69234 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0927 01:40:57.775673   69234 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.051936146s)
	I0927 01:40:57.775697   69234 crio.go:469] duration metric: took 2.052045538s to extract the tarball
	I0927 01:40:57.775704   69234 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0927 01:40:57.812769   69234 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 01:40:57.853219   69234 crio.go:514] all images are preloaded for cri-o runtime.
	I0927 01:40:57.853240   69234 cache_images.go:84] Images are preloaded, skipping loading
	I0927 01:40:57.853248   69234 kubeadm.go:934] updating node { 192.168.39.158 8443 v1.31.1 crio true true} ...
	I0927 01:40:57.853354   69234 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-245911 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.158
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-245911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 01:40:57.853495   69234 ssh_runner.go:195] Run: crio config
	I0927 01:40:57.908273   69234 cni.go:84] Creating CNI manager for ""
	I0927 01:40:57.908301   69234 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:40:57.908322   69234 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 01:40:57.908356   69234 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.158 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-245911 NodeName:embed-certs-245911 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.158"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.158 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0927 01:40:57.908542   69234 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.158
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-245911"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.158
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.158"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0927 01:40:57.908613   69234 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 01:40:57.918923   69234 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 01:40:57.919021   69234 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0927 01:40:57.928576   69234 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0927 01:40:57.945515   69234 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 01:40:57.962239   69234 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0927 01:40:57.979722   69234 ssh_runner.go:195] Run: grep 192.168.39.158	control-plane.minikube.internal$ /etc/hosts
	I0927 01:40:57.983709   69234 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.158	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 01:40:57.996181   69234 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:40:58.119502   69234 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 01:40:58.137022   69234 certs.go:68] Setting up /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/embed-certs-245911 for IP: 192.168.39.158
	I0927 01:40:58.137048   69234 certs.go:194] generating shared ca certs ...
	I0927 01:40:58.137068   69234 certs.go:226] acquiring lock for ca certs: {Name:mkdfc5b7e93f77f5ae72cc653545624244421aa5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:40:58.137250   69234 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key
	I0927 01:40:58.137312   69234 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key
	I0927 01:40:58.137324   69234 certs.go:256] generating profile certs ...
	I0927 01:40:58.137444   69234 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/embed-certs-245911/client.key
	I0927 01:40:58.137522   69234 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/embed-certs-245911/apiserver.key.e289c840
	I0927 01:40:58.137574   69234 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/embed-certs-245911/proxy-client.key
	I0927 01:40:58.137731   69234 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem (1338 bytes)
	W0927 01:40:58.137774   69234 certs.go:480] ignoring /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138_empty.pem, impossibly tiny 0 bytes
	I0927 01:40:58.137787   69234 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem (1671 bytes)
	I0927 01:40:58.137819   69234 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem (1078 bytes)
	I0927 01:40:58.137850   69234 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem (1123 bytes)
	I0927 01:40:58.137883   69234 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem (1675 bytes)
	I0927 01:40:58.137928   69234 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem (1708 bytes)
	I0927 01:40:58.138551   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 01:40:58.179399   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0927 01:40:58.211297   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 01:40:58.245549   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0927 01:40:58.276837   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/embed-certs-245911/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0927 01:40:58.313750   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/embed-certs-245911/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0927 01:40:58.338145   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/embed-certs-245911/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 01:40:58.361373   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/embed-certs-245911/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0927 01:40:58.384790   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 01:40:58.407617   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem --> /usr/share/ca-certificates/22138.pem (1338 bytes)
	I0927 01:40:58.430621   69234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /usr/share/ca-certificates/221382.pem (1708 bytes)
	I0927 01:40:58.453382   69234 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 01:40:58.470177   69234 ssh_runner.go:195] Run: openssl version
	I0927 01:40:58.476280   69234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221382.pem && ln -fs /usr/share/ca-certificates/221382.pem /etc/ssl/certs/221382.pem"
	I0927 01:40:58.489039   69234 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221382.pem
	I0927 01:40:58.493726   69234 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 00:32 /usr/share/ca-certificates/221382.pem
	I0927 01:40:58.493780   69234 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221382.pem
	I0927 01:40:58.499856   69234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221382.pem /etc/ssl/certs/3ec20f2e.0"
	I0927 01:40:58.511032   69234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 01:40:58.521694   69234 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:40:58.525991   69234 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:16 /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:40:58.526031   69234 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:40:58.531619   69234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 01:40:58.542017   69234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22138.pem && ln -fs /usr/share/ca-certificates/22138.pem /etc/ssl/certs/22138.pem"
	I0927 01:40:58.552591   69234 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22138.pem
	I0927 01:40:58.557047   69234 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 00:32 /usr/share/ca-certificates/22138.pem
	I0927 01:40:58.557086   69234 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22138.pem
	I0927 01:40:58.562874   69234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22138.pem /etc/ssl/certs/51391683.0"
	I0927 01:40:58.574052   69234 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 01:40:58.578537   69234 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0927 01:40:58.584323   69234 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0927 01:40:58.590033   69234 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0927 01:40:58.596013   69234 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0927 01:40:58.601572   69234 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0927 01:40:58.606980   69234 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0927 01:40:58.612554   69234 kubeadm.go:392] StartCluster: {Name:embed-certs-245911 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-245911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.158 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 01:40:58.612648   69234 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0927 01:40:58.612704   69234 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 01:40:58.649228   69234 cri.go:89] found id: ""
	I0927 01:40:58.649306   69234 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0927 01:40:58.661599   69234 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0927 01:40:58.661628   69234 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0927 01:40:58.661688   69234 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0927 01:40:58.671907   69234 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0927 01:40:58.672851   69234 kubeconfig.go:125] found "embed-certs-245911" server: "https://192.168.39.158:8443"
	I0927 01:40:58.674753   69234 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0927 01:40:58.684614   69234 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.158
	I0927 01:40:58.684643   69234 kubeadm.go:1160] stopping kube-system containers ...
	I0927 01:40:58.684652   69234 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0927 01:40:58.684715   69234 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 01:40:58.726714   69234 cri.go:89] found id: ""
	I0927 01:40:58.726816   69234 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0927 01:40:58.743675   69234 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 01:40:58.753456   69234 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 01:40:58.753485   69234 kubeadm.go:157] found existing configuration files:
	
	I0927 01:40:58.753535   69234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 01:40:58.762724   69234 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 01:40:58.762821   69234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 01:40:58.772558   69234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 01:40:58.781732   69234 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 01:40:58.781790   69234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 01:40:58.791109   69234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 01:40:58.800066   69234 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 01:40:58.800127   69234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 01:40:58.809338   69234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 01:40:58.818214   69234 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 01:40:58.818260   69234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 01:40:58.828049   69234 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 01:40:58.837606   69234 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:40:58.942395   69234 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:40:59.758951   69234 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:40:59.966377   69234 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:00.036702   69234 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:00.126663   69234 api_server.go:52] waiting for apiserver process to appear ...
	I0927 01:41:00.126743   69234 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:40:57.722147   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:40:57.722637   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:40:57.722657   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:40:57.722593   70279 retry.go:31] will retry after 1.223133601s: waiting for machine to come up
	I0927 01:40:58.947836   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:40:58.948362   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:40:58.948388   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:40:58.948326   70279 retry.go:31] will retry after 1.155368003s: waiting for machine to come up
	I0927 01:41:00.105812   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:00.106288   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:41:00.106356   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:41:00.106280   70279 retry.go:31] will retry after 2.324904017s: waiting for machine to come up
	I0927 01:41:00.627542   69234 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:01.126971   69234 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:01.626940   69234 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:02.127478   69234 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:02.176746   69234 api_server.go:72] duration metric: took 2.050081672s to wait for apiserver process to appear ...
	I0927 01:41:02.176775   69234 api_server.go:88] waiting for apiserver healthz status ...
	I0927 01:41:02.176798   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:41:02.177442   69234 api_server.go:269] stopped: https://192.168.39.158:8443/healthz: Get "https://192.168.39.158:8443/healthz": dial tcp 192.168.39.158:8443: connect: connection refused
	I0927 01:41:02.677488   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:41:04.824718   69234 api_server.go:279] https://192.168.39.158:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0927 01:41:04.824748   69234 api_server.go:103] status: https://192.168.39.158:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0927 01:41:04.824763   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:41:04.850790   69234 api_server.go:279] https://192.168.39.158:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0927 01:41:04.850820   69234 api_server.go:103] status: https://192.168.39.158:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0927 01:41:05.177167   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:41:05.201660   69234 api_server.go:279] https://192.168.39.158:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:41:05.201696   69234 api_server.go:103] status: https://192.168.39.158:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:41:02.432597   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:02.433066   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:41:02.433096   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:41:02.433026   70279 retry.go:31] will retry after 2.598889471s: waiting for machine to come up
	I0927 01:41:05.034614   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:05.035001   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:41:05.035023   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:41:05.034973   70279 retry.go:31] will retry after 3.064943329s: waiting for machine to come up
	I0927 01:41:05.677514   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:41:05.683506   69234 api_server.go:279] https://192.168.39.158:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:41:05.683543   69234 api_server.go:103] status: https://192.168.39.158:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:41:06.177064   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:41:06.181304   69234 api_server.go:279] https://192.168.39.158:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:41:06.181339   69234 api_server.go:103] status: https://192.168.39.158:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:41:06.676872   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:41:06.681269   69234 api_server.go:279] https://192.168.39.158:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:41:06.681297   69234 api_server.go:103] status: https://192.168.39.158:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:41:07.176902   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:41:07.181397   69234 api_server.go:279] https://192.168.39.158:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:41:07.181425   69234 api_server.go:103] status: https://192.168.39.158:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:41:07.677457   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:41:07.682057   69234 api_server.go:279] https://192.168.39.158:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:41:07.682087   69234 api_server.go:103] status: https://192.168.39.158:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:41:08.177696   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:41:08.181752   69234 api_server.go:279] https://192.168.39.158:8443/healthz returned 200:
	ok
	I0927 01:41:08.188257   69234 api_server.go:141] control plane version: v1.31.1
	I0927 01:41:08.188278   69234 api_server.go:131] duration metric: took 6.011495616s to wait for apiserver health ...
	I0927 01:41:08.188285   69234 cni.go:84] Creating CNI manager for ""
	I0927 01:41:08.188291   69234 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:41:08.190206   69234 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0927 01:41:08.191584   69234 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0927 01:41:08.202370   69234 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0927 01:41:08.224843   69234 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 01:41:08.234247   69234 system_pods.go:59] 8 kube-system pods found
	I0927 01:41:08.234275   69234 system_pods.go:61] "coredns-7c65d6cfc9-f2vxv" [3eed941e-e943-490b-a0a8-d543cec18a89] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0927 01:41:08.234284   69234 system_pods.go:61] "etcd-embed-certs-245911" [f88581ff-3747-4fe5-a4a2-6259c3b4554e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0927 01:41:08.234291   69234 system_pods.go:61] "kube-apiserver-embed-certs-245911" [3f1efb25-6e30-4d5f-baba-3e98b6fe531e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0927 01:41:08.234298   69234 system_pods.go:61] "kube-controller-manager-embed-certs-245911" [a624fc8d-fbe3-4b63-8a88-5f8069b21095] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0927 01:41:08.234302   69234 system_pods.go:61] "kube-proxy-pjf8v" [a1b76e67-803a-43fe-bff6-a4b0ddc246a1] Running
	I0927 01:41:08.234309   69234 system_pods.go:61] "kube-scheduler-embed-certs-245911" [0f7c146b-e2b7-4110-b010-f4599d0da410] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0927 01:41:08.234313   69234 system_pods.go:61] "metrics-server-6867b74b74-k8mdf" [6d1e68fb-5187-4bc6-abdb-44f598e351c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 01:41:08.234317   69234 system_pods.go:61] "storage-provisioner" [dc0a7806-bee8-4127-8218-b2e48fa8500b] Running
	I0927 01:41:08.234323   69234 system_pods.go:74] duration metric: took 9.462578ms to wait for pod list to return data ...
	I0927 01:41:08.234333   69234 node_conditions.go:102] verifying NodePressure condition ...
	I0927 01:41:08.238433   69234 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 01:41:08.238455   69234 node_conditions.go:123] node cpu capacity is 2
	I0927 01:41:08.238468   69234 node_conditions.go:105] duration metric: took 4.128775ms to run NodePressure ...
	I0927 01:41:08.238483   69234 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:08.502161   69234 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0927 01:41:08.506267   69234 kubeadm.go:739] kubelet initialised
	I0927 01:41:08.506290   69234 kubeadm.go:740] duration metric: took 4.099692ms waiting for restarted kubelet to initialise ...
	I0927 01:41:08.506299   69234 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:41:08.510964   69234 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-f2vxv" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:08.515262   69234 pod_ready.go:98] node "embed-certs-245911" hosting pod "coredns-7c65d6cfc9-f2vxv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-245911" has status "Ready":"False"
	I0927 01:41:08.515279   69234 pod_ready.go:82] duration metric: took 4.294632ms for pod "coredns-7c65d6cfc9-f2vxv" in "kube-system" namespace to be "Ready" ...
	E0927 01:41:08.515286   69234 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-245911" hosting pod "coredns-7c65d6cfc9-f2vxv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-245911" has status "Ready":"False"
	I0927 01:41:08.515298   69234 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:08.519627   69234 pod_ready.go:98] node "embed-certs-245911" hosting pod "etcd-embed-certs-245911" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-245911" has status "Ready":"False"
	I0927 01:41:08.519641   69234 pod_ready.go:82] duration metric: took 4.313975ms for pod "etcd-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	E0927 01:41:08.519648   69234 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-245911" hosting pod "etcd-embed-certs-245911" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-245911" has status "Ready":"False"
	I0927 01:41:08.519653   69234 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:08.523152   69234 pod_ready.go:98] node "embed-certs-245911" hosting pod "kube-apiserver-embed-certs-245911" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-245911" has status "Ready":"False"
	I0927 01:41:08.523165   69234 pod_ready.go:82] duration metric: took 3.50412ms for pod "kube-apiserver-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	E0927 01:41:08.523177   69234 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-245911" hosting pod "kube-apiserver-embed-certs-245911" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-245911" has status "Ready":"False"
	I0927 01:41:08.523186   69234 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:08.628811   69234 pod_ready.go:98] node "embed-certs-245911" hosting pod "kube-controller-manager-embed-certs-245911" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-245911" has status "Ready":"False"
	I0927 01:41:08.628847   69234 pod_ready.go:82] duration metric: took 105.648464ms for pod "kube-controller-manager-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	E0927 01:41:08.628859   69234 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-245911" hosting pod "kube-controller-manager-embed-certs-245911" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-245911" has status "Ready":"False"
	I0927 01:41:08.628868   69234 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-pjf8v" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:09.027358   69234 pod_ready.go:93] pod "kube-proxy-pjf8v" in "kube-system" namespace has status "Ready":"True"
	I0927 01:41:09.027383   69234 pod_ready.go:82] duration metric: took 398.507928ms for pod "kube-proxy-pjf8v" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:09.027393   69234 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:08.101834   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:08.102324   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | unable to find current IP address of domain old-k8s-version-612261 in network mk-old-k8s-version-612261
	I0927 01:41:08.102358   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | I0927 01:41:08.102283   70279 retry.go:31] will retry after 4.242138543s: waiting for machine to come up
	I0927 01:41:13.708458   69534 start.go:364] duration metric: took 3m25.271525685s to acquireMachinesLock for "default-k8s-diff-port-368295"
	I0927 01:41:13.708525   69534 start.go:96] Skipping create...Using existing machine configuration
	I0927 01:41:13.708533   69534 fix.go:54] fixHost starting: 
	I0927 01:41:13.708923   69534 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:41:13.708979   69534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:41:13.726306   69534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46399
	I0927 01:41:13.726732   69534 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:41:13.727228   69534 main.go:141] libmachine: Using API Version  1
	I0927 01:41:13.727252   69534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:41:13.727579   69534 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:41:13.727781   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:41:13.727975   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetState
	I0927 01:41:13.729621   69534 fix.go:112] recreateIfNeeded on default-k8s-diff-port-368295: state=Stopped err=<nil>
	I0927 01:41:13.729657   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	W0927 01:41:13.729826   69534 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 01:41:13.731730   69534 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-368295" ...
	I0927 01:41:12.347378   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.347831   69333 main.go:141] libmachine: (old-k8s-version-612261) Found IP for machine: 192.168.72.129
	I0927 01:41:12.347855   69333 main.go:141] libmachine: (old-k8s-version-612261) Reserving static IP address...
	I0927 01:41:12.347872   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has current primary IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.348468   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "old-k8s-version-612261", mac: "52:54:00:f1:a6:2e", ip: "192.168.72.129"} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:12.348494   69333 main.go:141] libmachine: (old-k8s-version-612261) Reserved static IP address: 192.168.72.129
	I0927 01:41:12.348507   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | skip adding static IP to network mk-old-k8s-version-612261 - found existing host DHCP lease matching {name: "old-k8s-version-612261", mac: "52:54:00:f1:a6:2e", ip: "192.168.72.129"}
	I0927 01:41:12.348518   69333 main.go:141] libmachine: (old-k8s-version-612261) Waiting for SSH to be available...
	I0927 01:41:12.348537   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | Getting to WaitForSSH function...
	I0927 01:41:12.350917   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.351287   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:12.351335   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.351464   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | Using SSH client type: external
	I0927 01:41:12.351485   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | Using SSH private key: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/old-k8s-version-612261/id_rsa (-rw-------)
	I0927 01:41:12.351516   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.129 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19711-14935/.minikube/machines/old-k8s-version-612261/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 01:41:12.351525   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | About to run SSH command:
	I0927 01:41:12.351533   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | exit 0
	I0927 01:41:12.471347   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | SSH cmd err, output: <nil>: 
	I0927 01:41:12.471724   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetConfigRaw
	I0927 01:41:12.472352   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetIP
	I0927 01:41:12.474886   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.475299   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:12.475340   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.475628   69333 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/config.json ...
	I0927 01:41:12.475857   69333 machine.go:93] provisionDockerMachine start ...
	I0927 01:41:12.475879   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	I0927 01:41:12.476115   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:12.478594   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.478918   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:12.478945   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.479126   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:41:12.479340   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:12.479536   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:12.479695   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:41:12.479859   69333 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:12.480093   69333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I0927 01:41:12.480116   69333 main.go:141] libmachine: About to run SSH command:
	hostname
	I0927 01:41:12.579536   69333 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0927 01:41:12.579562   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetMachineName
	I0927 01:41:12.579785   69333 buildroot.go:166] provisioning hostname "old-k8s-version-612261"
	I0927 01:41:12.579798   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetMachineName
	I0927 01:41:12.579965   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:12.582679   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.583001   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:12.583027   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.583166   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:41:12.583372   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:12.583562   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:12.583727   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:41:12.583924   69333 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:12.584169   69333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I0927 01:41:12.584187   69333 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-612261 && echo "old-k8s-version-612261" | sudo tee /etc/hostname
	I0927 01:41:12.702223   69333 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-612261
	
	I0927 01:41:12.702252   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:12.705201   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.705564   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:12.705601   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.705817   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:41:12.706012   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:12.706154   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:12.706344   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:41:12.706538   69333 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:12.706720   69333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I0927 01:41:12.706738   69333 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-612261' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-612261/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-612261' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 01:41:12.816316   69333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 01:41:12.816343   69333 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19711-14935/.minikube CaCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19711-14935/.minikube}
	I0927 01:41:12.816376   69333 buildroot.go:174] setting up certificates
	I0927 01:41:12.816386   69333 provision.go:84] configureAuth start
	I0927 01:41:12.816394   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetMachineName
	I0927 01:41:12.816678   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetIP
	I0927 01:41:12.819190   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.819487   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:12.819511   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.819696   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:12.821843   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.822166   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:12.822203   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:12.822382   69333 provision.go:143] copyHostCerts
	I0927 01:41:12.822453   69333 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem, removing ...
	I0927 01:41:12.822466   69333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem
	I0927 01:41:12.822533   69333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem (1675 bytes)
	I0927 01:41:12.822641   69333 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem, removing ...
	I0927 01:41:12.822650   69333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem
	I0927 01:41:12.822682   69333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem (1078 bytes)
	I0927 01:41:12.822756   69333 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem, removing ...
	I0927 01:41:12.822766   69333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem
	I0927 01:41:12.822792   69333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem (1123 bytes)
	I0927 01:41:12.822859   69333 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-612261 san=[127.0.0.1 192.168.72.129 localhost minikube old-k8s-version-612261]
	I0927 01:41:13.054632   69333 provision.go:177] copyRemoteCerts
	I0927 01:41:13.054706   69333 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 01:41:13.054740   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:13.057895   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.058296   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:13.058329   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.058478   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:41:13.058696   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:13.058907   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:41:13.059062   69333 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/old-k8s-version-612261/id_rsa Username:docker}
	I0927 01:41:13.146378   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0927 01:41:13.176435   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0927 01:41:13.208974   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0927 01:41:13.240179   69333 provision.go:87] duration metric: took 423.77487ms to configureAuth
	I0927 01:41:13.240211   69333 buildroot.go:189] setting minikube options for container-runtime
	I0927 01:41:13.240412   69333 config.go:182] Loaded profile config "old-k8s-version-612261": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0927 01:41:13.240498   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:13.243514   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.243963   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:13.243991   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.244174   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:41:13.244419   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:13.244641   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:13.244838   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:41:13.245039   69333 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:13.245263   69333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I0927 01:41:13.245284   69333 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 01:41:13.476519   69333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 01:41:13.476545   69333 machine.go:96] duration metric: took 1.000674334s to provisionDockerMachine
	I0927 01:41:13.476558   69333 start.go:293] postStartSetup for "old-k8s-version-612261" (driver="kvm2")
	I0927 01:41:13.476574   69333 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 01:41:13.476593   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	I0927 01:41:13.476914   69333 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 01:41:13.476942   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:13.479326   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.479662   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:13.479686   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.479835   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:41:13.480027   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:13.480182   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:41:13.480337   69333 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/old-k8s-version-612261/id_rsa Username:docker}
	I0927 01:41:13.563321   69333 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 01:41:13.567844   69333 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 01:41:13.567867   69333 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/addons for local assets ...
	I0927 01:41:13.567929   69333 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/files for local assets ...
	I0927 01:41:13.568012   69333 filesync.go:149] local asset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> 221382.pem in /etc/ssl/certs
	I0927 01:41:13.568109   69333 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 01:41:13.578453   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /etc/ssl/certs/221382.pem (1708 bytes)
	I0927 01:41:13.603888   69333 start.go:296] duration metric: took 127.316429ms for postStartSetup
	I0927 01:41:13.603924   69333 fix.go:56] duration metric: took 20.803606957s for fixHost
	I0927 01:41:13.603948   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:13.606500   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.606921   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:13.606949   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.607189   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:41:13.607419   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:13.607600   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:13.607726   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:41:13.608048   69333 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:13.608234   69333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I0927 01:41:13.608245   69333 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 01:41:13.708261   69333 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727401273.683707076
	
	I0927 01:41:13.708284   69333 fix.go:216] guest clock: 1727401273.683707076
	I0927 01:41:13.708293   69333 fix.go:229] Guest: 2024-09-27 01:41:13.683707076 +0000 UTC Remote: 2024-09-27 01:41:13.603929237 +0000 UTC m=+226.291347697 (delta=79.777839ms)
	I0927 01:41:13.708348   69333 fix.go:200] guest clock delta is within tolerance: 79.777839ms
	I0927 01:41:13.708357   69333 start.go:83] releasing machines lock for "old-k8s-version-612261", held for 20.90807118s
	I0927 01:41:13.708392   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	I0927 01:41:13.708665   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetIP
	I0927 01:41:13.711474   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.711873   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:13.711905   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.712035   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	I0927 01:41:13.712569   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	I0927 01:41:13.712748   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .DriverName
	I0927 01:41:13.712832   69333 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 01:41:13.712878   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:13.712949   69333 ssh_runner.go:195] Run: cat /version.json
	I0927 01:41:13.712971   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHHostname
	I0927 01:41:13.715681   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.715820   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.716024   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:13.716043   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.716200   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:13.716225   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:13.716235   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:41:13.716370   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHPort
	I0927 01:41:13.716487   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:13.716548   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHKeyPath
	I0927 01:41:13.716622   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:41:13.716728   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetSSHUsername
	I0927 01:41:13.716779   69333 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/old-k8s-version-612261/id_rsa Username:docker}
	I0927 01:41:13.716859   69333 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/old-k8s-version-612261/id_rsa Username:docker}
	I0927 01:41:13.826638   69333 ssh_runner.go:195] Run: systemctl --version
	I0927 01:41:13.832901   69333 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 01:41:13.986132   69333 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 01:41:13.992644   69333 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 01:41:13.992728   69333 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 01:41:14.008962   69333 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0927 01:41:14.008991   69333 start.go:495] detecting cgroup driver to use...
	I0927 01:41:14.009051   69333 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 01:41:14.025047   69333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 01:41:14.040807   69333 docker.go:217] disabling cri-docker service (if available) ...
	I0927 01:41:14.040857   69333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 01:41:14.055972   69333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 01:41:14.072654   69333 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 01:41:14.210869   69333 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 01:41:14.403536   69333 docker.go:233] disabling docker service ...
	I0927 01:41:14.403596   69333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 01:41:14.421549   69333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 01:41:14.436288   69333 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 01:41:14.569634   69333 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 01:41:14.701517   69333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 01:41:14.716794   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 01:41:14.740622   69333 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0927 01:41:14.740685   69333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:14.756563   69333 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 01:41:14.756626   69333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:14.768952   69333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:14.781314   69333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:14.793578   69333 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 01:41:14.806302   69333 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 01:41:14.822967   69333 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0927 01:41:14.823036   69333 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0927 01:41:14.837673   69333 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 01:41:14.848486   69333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:41:14.988181   69333 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0927 01:41:15.100581   69333 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 01:41:15.100664   69333 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 01:41:15.105816   69333 start.go:563] Will wait 60s for crictl version
	I0927 01:41:15.105883   69333 ssh_runner.go:195] Run: which crictl
	I0927 01:41:15.110375   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 01:41:15.154944   69333 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0927 01:41:15.155039   69333 ssh_runner.go:195] Run: crio --version
	I0927 01:41:15.188172   69333 ssh_runner.go:195] Run: crio --version
	I0927 01:41:15.220410   69333 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0927 01:41:11.033747   69234 pod_ready.go:103] pod "kube-scheduler-embed-certs-245911" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:13.038930   69234 pod_ready.go:103] pod "kube-scheduler-embed-certs-245911" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:15.035610   69234 pod_ready.go:93] pod "kube-scheduler-embed-certs-245911" in "kube-system" namespace has status "Ready":"True"
	I0927 01:41:15.035636   69234 pod_ready.go:82] duration metric: took 6.008237321s for pod "kube-scheduler-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:15.035645   69234 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:15.221508   69333 main.go:141] libmachine: (old-k8s-version-612261) Calling .GetIP
	I0927 01:41:15.224474   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:15.224855   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:a6:2e", ip: ""} in network mk-old-k8s-version-612261: {Iface:virbr1 ExpiryTime:2024-09-27 02:41:04 +0000 UTC Type:0 Mac:52:54:00:f1:a6:2e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:old-k8s-version-612261 Clientid:01:52:54:00:f1:a6:2e}
	I0927 01:41:15.224884   69333 main.go:141] libmachine: (old-k8s-version-612261) DBG | domain old-k8s-version-612261 has defined IP address 192.168.72.129 and MAC address 52:54:00:f1:a6:2e in network mk-old-k8s-version-612261
	I0927 01:41:15.225126   69333 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0927 01:41:15.229555   69333 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 01:41:15.244862   69333 kubeadm.go:883] updating cluster {Name:old-k8s-version-612261 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-612261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.129 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 01:41:15.245007   69333 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0927 01:41:15.245070   69333 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 01:41:15.298422   69333 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0927 01:41:15.298501   69333 ssh_runner.go:195] Run: which lz4
	I0927 01:41:15.302771   69333 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0927 01:41:15.307360   69333 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0927 01:41:15.307398   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0927 01:41:17.053272   69333 crio.go:462] duration metric: took 1.750548806s to copy over tarball
	I0927 01:41:17.053354   69333 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0927 01:41:13.732810   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .Start
	I0927 01:41:13.732979   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Ensuring networks are active...
	I0927 01:41:13.733749   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Ensuring network default is active
	I0927 01:41:13.734076   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Ensuring network mk-default-k8s-diff-port-368295 is active
	I0927 01:41:13.734425   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Getting domain xml...
	I0927 01:41:13.734997   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Creating domain...
	I0927 01:41:15.073415   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting to get IP...
	I0927 01:41:15.074278   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:15.074774   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:15.074850   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:15.074757   70444 retry.go:31] will retry after 231.356774ms: waiting for machine to come up
	I0927 01:41:15.308474   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:15.309030   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:15.309058   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:15.308989   70444 retry.go:31] will retry after 252.762152ms: waiting for machine to come up
	I0927 01:41:15.563638   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:15.564173   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:15.564212   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:15.564130   70444 retry.go:31] will retry after 341.067908ms: waiting for machine to come up
	I0927 01:41:15.906735   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:15.907138   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:15.907168   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:15.907091   70444 retry.go:31] will retry after 385.816363ms: waiting for machine to come up
	I0927 01:41:16.294523   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:16.295246   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:16.295268   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:16.295192   70444 retry.go:31] will retry after 575.812339ms: waiting for machine to come up
	I0927 01:41:16.873050   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:16.873574   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:16.873601   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:16.873520   70444 retry.go:31] will retry after 661.914855ms: waiting for machine to come up
	I0927 01:41:17.537039   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:17.537516   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:17.537544   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:17.537467   70444 retry.go:31] will retry after 959.195147ms: waiting for machine to come up
	I0927 01:41:17.043983   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:19.543159   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:20.066231   69333 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.012846531s)
	I0927 01:41:20.066257   69333 crio.go:469] duration metric: took 3.012954388s to extract the tarball
	I0927 01:41:20.066265   69333 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0927 01:41:20.112486   69333 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 01:41:20.152620   69333 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0927 01:41:20.152647   69333 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0927 01:41:20.152723   69333 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:41:20.152754   69333 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0927 01:41:20.152789   69333 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 01:41:20.152813   69333 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0927 01:41:20.152816   69333 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0927 01:41:20.152763   69333 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0927 01:41:20.152938   69333 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0927 01:41:20.152940   69333 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0927 01:41:20.154747   69333 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0927 01:41:20.154752   69333 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 01:41:20.154886   69333 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:41:20.154914   69333 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0927 01:41:20.154914   69333 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0927 01:41:20.154925   69333 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0927 01:41:20.154930   69333 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0927 01:41:20.154934   69333 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0927 01:41:20.316172   69333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0927 01:41:20.316352   69333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0927 01:41:20.319986   69333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0927 01:41:20.331224   69333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 01:41:20.342010   69333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0927 01:41:20.355732   69333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0927 01:41:20.355739   69333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0927 01:41:20.446420   69333 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0927 01:41:20.446477   69333 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0927 01:41:20.446529   69333 ssh_runner.go:195] Run: which crictl
	I0927 01:41:20.469134   69333 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0927 01:41:20.469183   69333 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0927 01:41:20.469231   69333 ssh_runner.go:195] Run: which crictl
	I0927 01:41:20.470229   69333 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0927 01:41:20.470264   69333 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0927 01:41:20.470310   69333 ssh_runner.go:195] Run: which crictl
	I0927 01:41:20.477952   69333 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0927 01:41:20.477991   69333 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 01:41:20.478034   69333 ssh_runner.go:195] Run: which crictl
	I0927 01:41:20.519340   69333 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0927 01:41:20.519391   69333 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0927 01:41:20.519454   69333 ssh_runner.go:195] Run: which crictl
	I0927 01:41:20.538237   69333 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0927 01:41:20.538256   69333 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0927 01:41:20.538293   69333 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0927 01:41:20.538298   69333 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0927 01:41:20.538338   69333 ssh_runner.go:195] Run: which crictl
	I0927 01:41:20.538343   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0927 01:41:20.538338   69333 ssh_runner.go:195] Run: which crictl
	I0927 01:41:20.538343   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0927 01:41:20.538389   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0927 01:41:20.538438   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 01:41:20.538489   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0927 01:41:20.656448   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0927 01:41:20.656508   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0927 01:41:20.656542   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0927 01:41:20.656573   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0927 01:41:20.656635   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0927 01:41:20.656704   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0927 01:41:20.656740   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 01:41:20.818479   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0927 01:41:20.818494   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0927 01:41:20.818581   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 01:41:20.878325   69333 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0927 01:41:20.878480   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0927 01:41:20.878494   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0927 01:41:20.878585   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0927 01:41:20.885061   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0927 01:41:20.885168   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0927 01:41:20.898628   69333 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0927 01:41:20.994147   69333 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0927 01:41:20.994175   69333 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0927 01:41:20.994211   69333 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0927 01:41:21.016210   69333 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0927 01:41:21.016289   69333 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0927 01:41:21.035051   69333 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0927 01:41:21.374949   69333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:41:21.520726   69333 cache_images.go:92] duration metric: took 1.368058485s to LoadCachedImages
	W0927 01:41:21.520817   69333 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0927 01:41:21.520833   69333 kubeadm.go:934] updating node { 192.168.72.129 8443 v1.20.0 crio true true} ...
	I0927 01:41:21.520951   69333 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-612261 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.129
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-612261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 01:41:21.521035   69333 ssh_runner.go:195] Run: crio config
	I0927 01:41:21.571651   69333 cni.go:84] Creating CNI manager for ""
	I0927 01:41:21.571677   69333 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:41:21.571688   69333 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 01:41:21.571712   69333 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.129 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-612261 NodeName:old-k8s-version-612261 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.129"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.129 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0927 01:41:21.571882   69333 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.129
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-612261"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.129
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.129"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0927 01:41:21.571958   69333 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0927 01:41:21.582735   69333 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 01:41:21.582802   69333 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0927 01:41:21.593329   69333 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0927 01:41:21.615040   69333 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 01:41:21.636564   69333 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
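	The kubeadm.yaml.new written above is the manifest rendered from the "kubeadm config:" block a few lines earlier. A minimal sketch, assuming a text/template-based renderer (not minikube's actual template or field set), of producing such a ClusterConfiguration:
	package main
	
	import (
		"os"
		"text/template"
	)
	
	// clusterParams holds only the handful of fields used in this sketch;
	// the real config carries many more options.
	type clusterParams struct {
		KubernetesVersion    string
		ControlPlaneEndpoint string
		PodSubnet            string
		ServiceSubnet        string
	}
	
	const clusterTmpl = `apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	controlPlaneEndpoint: {{.ControlPlaneEndpoint}}
	kubernetesVersion: {{.KubernetesVersion}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceSubnet}}
	`
	
	func main() {
		p := clusterParams{
			KubernetesVersion:    "v1.20.0",
			ControlPlaneEndpoint: "control-plane.minikube.internal:8443",
			PodSubnet:            "10.244.0.0/16",
			ServiceSubnet:        "10.96.0.0/12",
		}
		// Render the manifest to stdout; the test flow instead writes it to
		// /var/tmp/minikube/kubeadm.yaml.new on the node over SSH.
		tmpl := template.Must(template.New("cluster").Parse(clusterTmpl))
		if err := tmpl.Execute(os.Stdout, p); err != nil {
			panic(err)
		}
	}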
	I0927 01:41:21.657275   69333 ssh_runner.go:195] Run: grep 192.168.72.129	control-plane.minikube.internal$ /etc/hosts
	I0927 01:41:21.661675   69333 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.129	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 01:41:21.674587   69333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:41:21.814300   69333 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 01:41:21.834133   69333 certs.go:68] Setting up /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261 for IP: 192.168.72.129
	I0927 01:41:21.834163   69333 certs.go:194] generating shared ca certs ...
	I0927 01:41:21.834182   69333 certs.go:226] acquiring lock for ca certs: {Name:mkdfc5b7e93f77f5ae72cc653545624244421aa5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:41:21.834380   69333 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key
	I0927 01:41:21.834437   69333 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key
	I0927 01:41:21.834450   69333 certs.go:256] generating profile certs ...
	I0927 01:41:21.834558   69333 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/client.key
	I0927 01:41:21.834630   69333 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/apiserver.key.a362196e
	I0927 01:41:21.834676   69333 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/proxy-client.key
	I0927 01:41:21.834819   69333 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem (1338 bytes)
	W0927 01:41:21.834859   69333 certs.go:480] ignoring /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138_empty.pem, impossibly tiny 0 bytes
	I0927 01:41:21.834873   69333 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem (1671 bytes)
	I0927 01:41:21.834904   69333 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem (1078 bytes)
	I0927 01:41:21.834937   69333 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem (1123 bytes)
	I0927 01:41:21.834973   69333 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem (1675 bytes)
	I0927 01:41:21.835023   69333 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem (1708 bytes)
	I0927 01:41:21.835864   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 01:41:21.866955   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0927 01:41:21.902991   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 01:41:21.928957   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0927 01:41:21.957505   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0927 01:41:21.984055   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0927 01:41:22.013191   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 01:41:22.041745   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0927 01:41:22.069680   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 01:41:22.104139   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem --> /usr/share/ca-certificates/22138.pem (1338 bytes)
	I0927 01:41:22.130348   69333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /usr/share/ca-certificates/221382.pem (1708 bytes)
	I0927 01:41:22.157976   69333 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 01:41:22.177818   69333 ssh_runner.go:195] Run: openssl version
	I0927 01:41:22.184389   69333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 01:41:22.196133   69333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:41:22.201047   69333 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:16 /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:41:22.201120   69333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:41:22.207245   69333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 01:41:22.219033   69333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22138.pem && ln -fs /usr/share/ca-certificates/22138.pem /etc/ssl/certs/22138.pem"
	I0927 01:41:22.230331   69333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22138.pem
	I0927 01:41:22.235000   69333 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 00:32 /usr/share/ca-certificates/22138.pem
	I0927 01:41:22.235054   69333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22138.pem
	I0927 01:41:22.240963   69333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22138.pem /etc/ssl/certs/51391683.0"
	I0927 01:41:22.252022   69333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221382.pem && ln -fs /usr/share/ca-certificates/221382.pem /etc/ssl/certs/221382.pem"
	I0927 01:41:22.263197   69333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221382.pem
	I0927 01:41:22.268023   69333 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 00:32 /usr/share/ca-certificates/221382.pem
	I0927 01:41:22.268100   69333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221382.pem
	I0927 01:41:22.274086   69333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221382.pem /etc/ssl/certs/3ec20f2e.0"
	I0927 01:41:22.285387   69333 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 01:41:22.290487   69333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0927 01:41:22.296953   69333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0927 01:41:22.303095   69333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0927 01:41:22.310001   69333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0927 01:41:22.316346   69333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0927 01:41:22.322559   69333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
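	Each of the openssl "-checkend 86400" runs above asks whether a certificate expires within the next 24 hours. A minimal Go sketch of the same check with crypto/x509; the path and window are illustrative:
	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)
	
	// expiresWithin reports whether the first certificate in the PEM file
	// expires within the given window - the question
	// `openssl x509 -checkend 86400` answers for a 24h window.
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}
	
	func main() {
		// Path mirrors one of the certs checked in the log above.
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			panic(err)
		}
		fmt.Println("expires within 24h:", soon)
	}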
	I0927 01:41:22.328931   69333 kubeadm.go:392] StartCluster: {Name:old-k8s-version-612261 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-612261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.129 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 01:41:22.329015   69333 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0927 01:41:22.329081   69333 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
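	The cri.go listing above shells out to crictl with a namespace label filter and treats each line of --quiet output as a container ID. A minimal local sketch with os/exec (the test runs the same command on the node via ssh_runner); the helper name is illustrative:
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// listKubeSystemContainers runs the same crictl invocation the log shows:
	// `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system`
	// and returns the container IDs, one per output line.
	func listKubeSystemContainers() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}
	
	func main() {
		ids, err := listKubeSystemContainers()
		if err != nil {
			panic(err)
		}
		fmt.Printf("found %d kube-system containers\n", len(ids))
	}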
	I0927 01:41:18.498695   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:18.499234   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:18.499261   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:18.499187   70444 retry.go:31] will retry after 932.004828ms: waiting for machine to come up
	I0927 01:41:19.432487   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:19.432885   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:19.432912   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:19.432844   70444 retry.go:31] will retry after 1.595543978s: waiting for machine to come up
	I0927 01:41:21.030048   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:21.030572   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:21.030598   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:21.030526   70444 retry.go:31] will retry after 1.93010855s: waiting for machine to come up
	I0927 01:41:22.963833   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:22.964303   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:22.964334   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:22.964254   70444 retry.go:31] will retry after 2.81720725s: waiting for machine to come up
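	The libmachine lines above poll for the VM's DHCP lease, sleeping a little longer after each miss ("will retry after 932.004828ms", "1.595543978s", ...). A minimal sketch of that retry-with-growing-jittered-delay pattern; the delays and helper names are illustrative:
	package main
	
	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)
	
	// waitForIP polls lookup until it returns an address, sleeping a little
	// longer (with jitter) after each failure - the shape of the
	// "will retry after ..." lines in the log.
	func waitForIP(lookup func() (string, error), attempts int) (string, error) {
		delay := 250 * time.Millisecond
		for i := 0; i < attempts; i++ {
			if ip, err := lookup(); err == nil {
				return ip, nil
			}
			jitter := time.Duration(rand.Int63n(int64(delay) / 2))
			fmt.Printf("will retry after %v\n", delay+jitter)
			time.Sleep(delay + jitter)
			delay *= 2
		}
		return "", errors.New("machine never reported an IP")
	}
	
	func main() {
		calls := 0
		ip, err := waitForIP(func() (string, error) {
			calls++
			if calls < 4 {
				return "", errors.New("no lease yet")
			}
			return "192.168.61.83", nil
		}, 10)
		fmt.Println(ip, err)
	}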
	I0927 01:41:21.757497   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:24.043965   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:22.368989   69333 cri.go:89] found id: ""
	I0927 01:41:22.369059   69333 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0927 01:41:22.379818   69333 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0927 01:41:22.379841   69333 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0927 01:41:22.379897   69333 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0927 01:41:22.392278   69333 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0927 01:41:22.393236   69333 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-612261" does not appear in /home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 01:41:22.393856   69333 kubeconfig.go:62] /home/jenkins/minikube-integration/19711-14935/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-612261" cluster setting kubeconfig missing "old-k8s-version-612261" context setting]
	I0927 01:41:22.394733   69333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/kubeconfig: {Name:mke01ed683bdb96463571316956510763878395f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:41:22.404625   69333 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0927 01:41:22.415376   69333 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.129
	I0927 01:41:22.415414   69333 kubeadm.go:1160] stopping kube-system containers ...
	I0927 01:41:22.415427   69333 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0927 01:41:22.415487   69333 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 01:41:22.452749   69333 cri.go:89] found id: ""
	I0927 01:41:22.452829   69333 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0927 01:41:22.469164   69333 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 01:41:22.480018   69333 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 01:41:22.480038   69333 kubeadm.go:157] found existing configuration files:
	
	I0927 01:41:22.480092   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 01:41:22.490501   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 01:41:22.490562   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 01:41:22.500330   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 01:41:22.509612   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 01:41:22.509681   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 01:41:22.520064   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 01:41:22.529864   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 01:41:22.529921   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 01:41:22.540563   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 01:41:22.556739   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 01:41:22.556797   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 01:41:22.572858   69333 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 01:41:22.583366   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:22.709007   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:23.468461   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:23.714890   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:23.865174   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:23.959048   69333 api_server.go:52] waiting for apiserver process to appear ...
	I0927 01:41:23.959140   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:24.460104   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:24.959462   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:25.460143   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:25.959473   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:26.460051   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:26.960121   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
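	api_server.go above waits for the kube-apiserver process to appear by re-running the same pgrep roughly every 500ms. A minimal sketch of that wait loop, run locally rather than over SSH; the interval and timeout are illustrative:
	package main
	
	import (
		"fmt"
		"os/exec"
		"time"
	)
	
	// waitForAPIServerProcess re-runs the pgrep the log shows until it
	// succeeds or the deadline passes.
	func waitForAPIServerProcess(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// pgrep exits 0 only when a matching process exists.
			if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("kube-apiserver process did not appear within %v", timeout)
	}
	
	func main() {
		if err := waitForAPIServerProcess(2 * time.Minute); err != nil {
			fmt.Println(err)
		}
	}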
	I0927 01:41:25.784030   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:25.784429   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:25.784456   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:25.784393   70444 retry.go:31] will retry after 2.844872797s: waiting for machine to come up
	I0927 01:41:26.544176   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:29.042297   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:27.459491   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:27.959944   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:28.459636   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:28.959766   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:29.459410   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:29.959439   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:30.460176   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:30.959810   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:31.459492   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:31.959966   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:28.632445   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:28.632905   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | unable to find current IP address of domain default-k8s-diff-port-368295 in network mk-default-k8s-diff-port-368295
	I0927 01:41:28.632930   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | I0927 01:41:28.632866   70444 retry.go:31] will retry after 3.566248996s: waiting for machine to come up
	I0927 01:41:32.200424   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.200804   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Found IP for machine: 192.168.61.83
	I0927 01:41:32.200832   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has current primary IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.200841   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Reserving static IP address...
	I0927 01:41:32.201137   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-368295", mac: "52:54:00:a3:b6:7a", ip: "192.168.61.83"} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:32.201151   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Reserved static IP address: 192.168.61.83
	I0927 01:41:32.201164   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | skip adding static IP to network mk-default-k8s-diff-port-368295 - found existing host DHCP lease matching {name: "default-k8s-diff-port-368295", mac: "52:54:00:a3:b6:7a", ip: "192.168.61.83"}
	I0927 01:41:32.201177   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | Getting to WaitForSSH function...
	I0927 01:41:32.201185   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Waiting for SSH to be available...
	I0927 01:41:32.203258   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.203542   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:32.203571   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.203674   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | Using SSH client type: external
	I0927 01:41:32.203704   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | Using SSH private key: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/default-k8s-diff-port-368295/id_rsa (-rw-------)
	I0927 01:41:32.203743   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.83 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19711-14935/.minikube/machines/default-k8s-diff-port-368295/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 01:41:32.203763   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | About to run SSH command:
	I0927 01:41:32.203783   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | exit 0
	I0927 01:41:32.327131   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | SSH cmd err, output: <nil>: 
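	The WaitForSSH step above probes the guest by running "exit 0" through the external /usr/bin/ssh client. A minimal sketch of the same probe using golang.org/x/crypto/ssh instead of the external binary; the host, user, and key path are illustrative:
	package main
	
	import (
		"fmt"
		"os"
		"time"
	
		"golang.org/x/crypto/ssh"
	)
	
	// sshReady returns nil once `exit 0` succeeds over SSH, mirroring the
	// WaitForSSH probe in the log.
	func sshReady(addr, user, keyPath string) error {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // the log also disables host key checking
			Timeout:         10 * time.Second,
		}
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			return err
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			return err
		}
		defer session.Close()
		return session.Run("exit 0")
	}
	
	func main() {
		fmt.Println(sshReady("192.168.61.83:22", "docker", "/path/to/id_rsa"))
	}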
	I0927 01:41:32.327499   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetConfigRaw
	I0927 01:41:32.328140   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetIP
	I0927 01:41:32.330387   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.330769   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:32.330801   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.331054   69534 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/config.json ...
	I0927 01:41:32.331257   69534 machine.go:93] provisionDockerMachine start ...
	I0927 01:41:32.331279   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:41:32.331505   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:41:32.333514   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.333799   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:32.333825   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.333940   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:41:32.334101   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:32.334267   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:32.334359   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:41:32.334509   69534 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:32.334700   69534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.83 22 <nil> <nil>}
	I0927 01:41:32.334709   69534 main.go:141] libmachine: About to run SSH command:
	hostname
	I0927 01:41:32.439884   69534 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0927 01:41:32.439921   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetMachineName
	I0927 01:41:32.440126   69534 buildroot.go:166] provisioning hostname "default-k8s-diff-port-368295"
	I0927 01:41:32.440149   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetMachineName
	I0927 01:41:32.440346   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:41:32.443385   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.443707   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:32.443742   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.443917   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:41:32.444093   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:32.444266   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:32.444427   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:41:32.444606   69534 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:32.444793   69534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.83 22 <nil> <nil>}
	I0927 01:41:32.444809   69534 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-368295 && echo "default-k8s-diff-port-368295" | sudo tee /etc/hostname
	I0927 01:41:32.570447   69534 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-368295
	
	I0927 01:41:32.570479   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:41:32.573194   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.573472   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:32.573512   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.573699   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:41:32.573942   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:32.574097   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:32.574261   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:41:32.574430   69534 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:32.574623   69534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.83 22 <nil> <nil>}
	I0927 01:41:32.574647   69534 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-368295' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-368295/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-368295' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 01:41:32.693082   69534 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 01:41:32.693107   69534 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19711-14935/.minikube CaCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19711-14935/.minikube}
	I0927 01:41:32.693140   69534 buildroot.go:174] setting up certificates
	I0927 01:41:32.693149   69534 provision.go:84] configureAuth start
	I0927 01:41:32.693160   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetMachineName
	I0927 01:41:32.693407   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetIP
	I0927 01:41:32.696156   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.696498   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:32.696522   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.696693   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:41:32.698894   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.699229   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:32.699257   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.699399   69534 provision.go:143] copyHostCerts
	I0927 01:41:32.699451   69534 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem, removing ...
	I0927 01:41:32.699464   69534 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem
	I0927 01:41:32.699530   69534 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem (1078 bytes)
	I0927 01:41:32.699639   69534 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem, removing ...
	I0927 01:41:32.699653   69534 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem
	I0927 01:41:32.699681   69534 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem (1123 bytes)
	I0927 01:41:32.699751   69534 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem, removing ...
	I0927 01:41:32.699761   69534 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem
	I0927 01:41:32.699785   69534 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem (1675 bytes)
	I0927 01:41:32.699848   69534 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-368295 san=[127.0.0.1 192.168.61.83 default-k8s-diff-port-368295 localhost minikube]
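	provision.go above generates a per-machine server certificate whose SANs cover the VM IP, loopback, and hostnames. A minimal crypto/x509 sketch with the same SANs; it is self-signed for brevity, whereas the real flow signs with the minikube CA, and the key size and validity are illustrative:
	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-368295"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs mirror the san=[...] list in the log line above.
			DNSNames:    []string{"default-k8s-diff-port-368295", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.83")},
		}
		// Self-signed here; minikube signs with its CA cert and key instead.
		der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}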
	I0927 01:41:32.887727   69534 provision.go:177] copyRemoteCerts
	I0927 01:41:32.887792   69534 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 01:41:32.887825   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:41:32.890435   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.890768   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:32.890797   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:32.890956   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:41:32.891128   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:32.891252   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:41:32.891373   69534 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/default-k8s-diff-port-368295/id_rsa Username:docker}
	I0927 01:41:32.973705   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0927 01:41:32.998434   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0927 01:41:33.023552   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0927 01:41:33.048884   69534 provision.go:87] duration metric: took 355.724209ms to configureAuth
	I0927 01:41:33.048910   69534 buildroot.go:189] setting minikube options for container-runtime
	I0927 01:41:33.049080   69534 config.go:182] Loaded profile config "default-k8s-diff-port-368295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 01:41:33.049149   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:41:33.051738   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.052080   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:33.052133   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.052364   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:41:33.052578   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:33.052726   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:33.052844   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:41:33.053031   69534 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:33.053265   69534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.83 22 <nil> <nil>}
	I0927 01:41:33.053283   69534 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 01:41:33.292126   69534 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 01:41:33.292148   69534 machine.go:96] duration metric: took 960.878234ms to provisionDockerMachine
	I0927 01:41:33.292159   69534 start.go:293] postStartSetup for "default-k8s-diff-port-368295" (driver="kvm2")
	I0927 01:41:33.292171   69534 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 01:41:33.292188   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:41:33.292511   69534 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 01:41:33.292539   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:41:33.295356   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.295724   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:33.295759   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.295936   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:41:33.296100   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:33.296314   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:41:33.296498   69534 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/default-k8s-diff-port-368295/id_rsa Username:docker}
	I0927 01:41:33.528391   68676 start.go:364] duration metric: took 56.042651871s to acquireMachinesLock for "no-preload-521072"
	I0927 01:41:33.528435   68676 start.go:96] Skipping create...Using existing machine configuration
	I0927 01:41:33.528445   68676 fix.go:54] fixHost starting: 
	I0927 01:41:33.528858   68676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:41:33.528890   68676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:41:33.547391   68676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38947
	I0927 01:41:33.547852   68676 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:41:33.548343   68676 main.go:141] libmachine: Using API Version  1
	I0927 01:41:33.548371   68676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:41:33.548713   68676 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:41:33.548907   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	I0927 01:41:33.549064   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetState
	I0927 01:41:33.550898   68676 fix.go:112] recreateIfNeeded on no-preload-521072: state=Stopped err=<nil>
	I0927 01:41:33.550923   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	W0927 01:41:33.551084   68676 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 01:41:33.553090   68676 out.go:177] * Restarting existing kvm2 VM for "no-preload-521072" ...
	I0927 01:41:33.554429   68676 main.go:141] libmachine: (no-preload-521072) Calling .Start
	I0927 01:41:33.554613   68676 main.go:141] libmachine: (no-preload-521072) Ensuring networks are active...
	I0927 01:41:33.555401   68676 main.go:141] libmachine: (no-preload-521072) Ensuring network default is active
	I0927 01:41:33.555858   68676 main.go:141] libmachine: (no-preload-521072) Ensuring network mk-no-preload-521072 is active
	I0927 01:41:33.556350   68676 main.go:141] libmachine: (no-preload-521072) Getting domain xml...
	I0927 01:41:33.557057   68676 main.go:141] libmachine: (no-preload-521072) Creating domain...
	I0927 01:41:34.830052   68676 main.go:141] libmachine: (no-preload-521072) Waiting to get IP...
	I0927 01:41:34.830807   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:34.831255   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:34.831340   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:34.831244   70637 retry.go:31] will retry after 267.615794ms: waiting for machine to come up
	I0927 01:41:33.378613   69534 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 01:41:33.383491   69534 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 01:41:33.383517   69534 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/addons for local assets ...
	I0927 01:41:33.383590   69534 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/files for local assets ...
	I0927 01:41:33.383695   69534 filesync.go:149] local asset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> 221382.pem in /etc/ssl/certs
	I0927 01:41:33.383810   69534 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 01:41:33.395134   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /etc/ssl/certs/221382.pem (1708 bytes)
	I0927 01:41:33.420441   69534 start.go:296] duration metric: took 128.270045ms for postStartSetup
	I0927 01:41:33.420481   69534 fix.go:56] duration metric: took 19.711948387s for fixHost
	I0927 01:41:33.420505   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:41:33.422860   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.423170   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:33.423198   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.423333   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:41:33.423517   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:33.423676   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:33.423820   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:41:33.423987   69534 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:33.424139   69534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.83 22 <nil> <nil>}
	I0927 01:41:33.424153   69534 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 01:41:33.528250   69534 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727401293.484458762
	
	I0927 01:41:33.528271   69534 fix.go:216] guest clock: 1727401293.484458762
	I0927 01:41:33.528278   69534 fix.go:229] Guest: 2024-09-27 01:41:33.484458762 +0000 UTC Remote: 2024-09-27 01:41:33.420486926 +0000 UTC m=+225.118319167 (delta=63.971836ms)
	I0927 01:41:33.528297   69534 fix.go:200] guest clock delta is within tolerance: 63.971836ms
	I0927 01:41:33.528303   69534 start.go:83] releasing machines lock for "default-k8s-diff-port-368295", held for 19.819799777s
	I0927 01:41:33.528328   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:41:33.528623   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetIP
	I0927 01:41:33.531282   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.531692   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:33.531724   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.531914   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:41:33.532476   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:41:33.532651   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:41:33.532742   69534 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 01:41:33.532784   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:41:33.532868   69534 ssh_runner.go:195] Run: cat /version.json
	I0927 01:41:33.532890   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:41:33.535432   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.535710   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.535820   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:33.535843   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.536030   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:41:33.536128   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:33.536153   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:33.536195   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:33.536351   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:41:33.536367   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:41:33.536513   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:41:33.536508   69534 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/default-k8s-diff-port-368295/id_rsa Username:docker}
	I0927 01:41:33.536634   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:41:33.536815   69534 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/default-k8s-diff-port-368295/id_rsa Username:docker}
	I0927 01:41:33.644679   69534 ssh_runner.go:195] Run: systemctl --version
	I0927 01:41:33.652386   69534 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 01:41:33.803821   69534 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 01:41:33.810620   69534 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 01:41:33.810678   69534 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 01:41:33.826938   69534 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0927 01:41:33.826963   69534 start.go:495] detecting cgroup driver to use...
	I0927 01:41:33.827028   69534 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 01:41:33.844572   69534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 01:41:33.859851   69534 docker.go:217] disabling cri-docker service (if available) ...
	I0927 01:41:33.859916   69534 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 01:41:33.874262   69534 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 01:41:33.888460   69534 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 01:41:34.011008   69534 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 01:41:34.161761   69534 docker.go:233] disabling docker service ...
	I0927 01:41:34.161855   69534 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 01:41:34.180621   69534 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 01:41:34.198472   69534 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 01:41:34.340892   69534 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 01:41:34.483708   69534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 01:41:34.498745   69534 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 01:41:34.518957   69534 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0927 01:41:34.519026   69534 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:34.530123   69534 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 01:41:34.530172   69534 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:34.545035   69534 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:34.555944   69534 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:34.566852   69534 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 01:41:34.577676   69534 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:34.589078   69534 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:34.608131   69534 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:34.619482   69534 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 01:41:34.629119   69534 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0927 01:41:34.629180   69534 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0927 01:41:34.643997   69534 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 01:41:34.656396   69534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:41:34.791856   69534 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0927 01:41:34.884774   69534 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 01:41:34.884831   69534 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 01:41:34.889590   69534 start.go:563] Will wait 60s for crictl version
	I0927 01:41:34.889633   69534 ssh_runner.go:195] Run: which crictl
	I0927 01:41:34.893330   69534 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 01:41:34.930031   69534 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0927 01:41:34.930141   69534 ssh_runner.go:195] Run: crio --version
	I0927 01:41:34.960912   69534 ssh_runner.go:195] Run: crio --version
	I0927 01:41:34.996060   69534 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0927 01:41:31.542525   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:33.546389   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:32.459727   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:32.959527   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:33.459351   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:33.959903   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:34.459444   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:34.959423   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:35.459435   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:35.959447   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:36.460148   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:36.959874   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:34.997457   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetIP
	I0927 01:41:35.000691   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:35.001081   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:41:35.001127   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:41:35.001322   69534 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0927 01:41:35.006115   69534 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 01:41:35.019817   69534 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-368295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-368295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.83 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 01:41:35.019983   69534 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 01:41:35.020045   69534 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 01:41:35.062533   69534 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0927 01:41:35.062595   69534 ssh_runner.go:195] Run: which lz4
	I0927 01:41:35.066897   69534 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0927 01:41:35.071178   69534 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0927 01:41:35.071216   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0927 01:41:36.563774   69534 crio.go:462] duration metric: took 1.496913722s to copy over tarball
	I0927 01:41:36.563866   69534 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0927 01:41:35.100818   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:35.101327   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:35.101354   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:35.101290   70637 retry.go:31] will retry after 244.193758ms: waiting for machine to come up
	I0927 01:41:35.347021   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:35.347674   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:35.347714   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:35.347650   70637 retry.go:31] will retry after 361.672884ms: waiting for machine to come up
	I0927 01:41:35.711206   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:35.711755   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:35.711788   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:35.711730   70637 retry.go:31] will retry after 406.084841ms: waiting for machine to come up
	I0927 01:41:36.119494   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:36.120026   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:36.120067   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:36.119978   70637 retry.go:31] will retry after 497.966133ms: waiting for machine to come up
	I0927 01:41:36.619859   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:36.620400   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:36.620428   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:36.620362   70637 retry.go:31] will retry after 765.975603ms: waiting for machine to come up
	I0927 01:41:37.387821   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:37.388502   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:37.388537   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:37.388453   70637 retry.go:31] will retry after 828.567445ms: waiting for machine to come up
	I0927 01:41:38.218462   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:38.218940   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:38.218974   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:38.218803   70637 retry.go:31] will retry after 1.269155563s: waiting for machine to come up
	I0927 01:41:39.489076   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:39.489557   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:39.489583   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:39.489514   70637 retry.go:31] will retry after 1.666481574s: waiting for machine to come up
	I0927 01:41:35.554859   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:38.043285   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:40.542499   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:37.459766   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:37.959594   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:38.459971   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:38.960093   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:39.459983   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:39.959812   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:40.460220   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:40.959253   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:41.459829   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:41.959864   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:38.667451   69534 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.10354947s)
	I0927 01:41:38.667477   69534 crio.go:469] duration metric: took 2.103669113s to extract the tarball
	I0927 01:41:38.667487   69534 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0927 01:41:38.704217   69534 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 01:41:38.747162   69534 crio.go:514] all images are preloaded for cri-o runtime.
	I0927 01:41:38.747187   69534 cache_images.go:84] Images are preloaded, skipping loading
	I0927 01:41:38.747197   69534 kubeadm.go:934] updating node { 192.168.61.83 8444 v1.31.1 crio true true} ...
	I0927 01:41:38.747323   69534 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-368295 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.83
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-368295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 01:41:38.747406   69534 ssh_runner.go:195] Run: crio config
	I0927 01:41:38.796481   69534 cni.go:84] Creating CNI manager for ""
	I0927 01:41:38.796510   69534 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:41:38.796522   69534 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 01:41:38.796549   69534 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.83 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-368295 NodeName:default-k8s-diff-port-368295 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.83"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.83 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0927 01:41:38.796726   69534 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.83
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-368295"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.83
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.83"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0927 01:41:38.796806   69534 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 01:41:38.807445   69534 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 01:41:38.807513   69534 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0927 01:41:38.817368   69534 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0927 01:41:38.834181   69534 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 01:41:38.851650   69534 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0927 01:41:38.869822   69534 ssh_runner.go:195] Run: grep 192.168.61.83	control-plane.minikube.internal$ /etc/hosts
	I0927 01:41:38.873868   69534 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.83	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 01:41:38.886422   69534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:41:39.022075   69534 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 01:41:39.038948   69534 certs.go:68] Setting up /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295 for IP: 192.168.61.83
	I0927 01:41:39.038982   69534 certs.go:194] generating shared ca certs ...
	I0927 01:41:39.039004   69534 certs.go:226] acquiring lock for ca certs: {Name:mkdfc5b7e93f77f5ae72cc653545624244421aa5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:41:39.039174   69534 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key
	I0927 01:41:39.039241   69534 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key
	I0927 01:41:39.039253   69534 certs.go:256] generating profile certs ...
	I0927 01:41:39.039402   69534 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/client.key
	I0927 01:41:39.039490   69534 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/apiserver.key.2edc0267
	I0927 01:41:39.039549   69534 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/proxy-client.key
	I0927 01:41:39.039701   69534 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem (1338 bytes)
	W0927 01:41:39.039773   69534 certs.go:480] ignoring /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138_empty.pem, impossibly tiny 0 bytes
	I0927 01:41:39.039789   69534 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem (1671 bytes)
	I0927 01:41:39.039825   69534 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem (1078 bytes)
	I0927 01:41:39.039860   69534 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem (1123 bytes)
	I0927 01:41:39.039889   69534 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem (1675 bytes)
	I0927 01:41:39.039950   69534 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem (1708 bytes)
	I0927 01:41:39.040814   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 01:41:39.080130   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0927 01:41:39.133365   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 01:41:39.169238   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0927 01:41:39.196619   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0927 01:41:39.227667   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0927 01:41:39.255240   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 01:41:39.280602   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0927 01:41:39.305695   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 01:41:39.329559   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem --> /usr/share/ca-certificates/22138.pem (1338 bytes)
	I0927 01:41:39.358555   69534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /usr/share/ca-certificates/221382.pem (1708 bytes)
	I0927 01:41:39.387030   69534 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 01:41:39.404111   69534 ssh_runner.go:195] Run: openssl version
	I0927 01:41:39.409879   69534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 01:41:39.420542   69534 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:41:39.425094   69534 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:16 /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:41:39.425151   69534 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:41:39.431225   69534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 01:41:39.442237   69534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22138.pem && ln -fs /usr/share/ca-certificates/22138.pem /etc/ssl/certs/22138.pem"
	I0927 01:41:39.453229   69534 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22138.pem
	I0927 01:41:39.458040   69534 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 00:32 /usr/share/ca-certificates/22138.pem
	I0927 01:41:39.458110   69534 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22138.pem
	I0927 01:41:39.464177   69534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22138.pem /etc/ssl/certs/51391683.0"
	I0927 01:41:39.475582   69534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221382.pem && ln -fs /usr/share/ca-certificates/221382.pem /etc/ssl/certs/221382.pem"
	I0927 01:41:39.486911   69534 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221382.pem
	I0927 01:41:39.491843   69534 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 00:32 /usr/share/ca-certificates/221382.pem
	I0927 01:41:39.491898   69534 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221382.pem
	I0927 01:41:39.497653   69534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221382.pem /etc/ssl/certs/3ec20f2e.0"
	I0927 01:41:39.508039   69534 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 01:41:39.512597   69534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0927 01:41:39.518557   69534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0927 01:41:39.524475   69534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0927 01:41:39.530616   69534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0927 01:41:39.536820   69534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0927 01:41:39.543487   69534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0927 01:41:39.549791   69534 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-368295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-368295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.83 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 01:41:39.549880   69534 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0927 01:41:39.549945   69534 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 01:41:39.594178   69534 cri.go:89] found id: ""
	I0927 01:41:39.594256   69534 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0927 01:41:39.605173   69534 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0927 01:41:39.605195   69534 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0927 01:41:39.605261   69534 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0927 01:41:39.615543   69534 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0927 01:41:39.616639   69534 kubeconfig.go:125] found "default-k8s-diff-port-368295" server: "https://192.168.61.83:8444"
	I0927 01:41:39.618793   69534 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0927 01:41:39.628422   69534 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.83
	I0927 01:41:39.628454   69534 kubeadm.go:1160] stopping kube-system containers ...
	I0927 01:41:39.628465   69534 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0927 01:41:39.628566   69534 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 01:41:39.673513   69534 cri.go:89] found id: ""
	I0927 01:41:39.673592   69534 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0927 01:41:39.690296   69534 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 01:41:39.699800   69534 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 01:41:39.699821   69534 kubeadm.go:157] found existing configuration files:
	
	I0927 01:41:39.699876   69534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0927 01:41:39.709235   69534 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 01:41:39.709294   69534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 01:41:39.719012   69534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0927 01:41:39.728197   69534 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 01:41:39.728262   69534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 01:41:39.737520   69534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0927 01:41:39.746592   69534 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 01:41:39.746653   69534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 01:41:39.756251   69534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0927 01:41:39.765026   69534 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 01:41:39.765090   69534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 01:41:39.774937   69534 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 01:41:39.784588   69534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:39.893259   69534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:40.625162   69534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:40.954926   69534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:41.025693   69534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:41.101915   69534 api_server.go:52] waiting for apiserver process to appear ...
	I0927 01:41:41.102006   69534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:41.602856   69534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:42.102942   69534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:42.602371   69534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:42.620056   69534 api_server.go:72] duration metric: took 1.518136259s to wait for apiserver process to appear ...
	I0927 01:41:42.620085   69534 api_server.go:88] waiting for apiserver healthz status ...
	I0927 01:41:42.620107   69534 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8444/healthz ...
	I0927 01:41:41.157254   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:41.157789   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:41.157817   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:41.157738   70637 retry.go:31] will retry after 1.495421187s: waiting for machine to come up
	I0927 01:41:42.655326   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:42.655826   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:42.655853   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:42.655771   70637 retry.go:31] will retry after 2.80191937s: waiting for machine to come up
	I0927 01:41:42.543732   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:45.043009   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:45.040496   69534 api_server.go:279] https://192.168.61.83:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0927 01:41:45.040525   69534 api_server.go:103] status: https://192.168.61.83:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0927 01:41:45.040542   69534 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8444/healthz ...
	I0927 01:41:45.079569   69534 api_server.go:279] https://192.168.61.83:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0927 01:41:45.079602   69534 api_server.go:103] status: https://192.168.61.83:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0927 01:41:45.120702   69534 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8444/healthz ...
	I0927 01:41:45.126461   69534 api_server.go:279] https://192.168.61.83:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0927 01:41:45.126488   69534 api_server.go:103] status: https://192.168.61.83:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0927 01:41:45.621130   69534 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8444/healthz ...
	I0927 01:41:45.629533   69534 api_server.go:279] https://192.168.61.83:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:41:45.629569   69534 api_server.go:103] status: https://192.168.61.83:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:41:46.121189   69534 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8444/healthz ...
	I0927 01:41:46.130806   69534 api_server.go:279] https://192.168.61.83:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:41:46.130842   69534 api_server.go:103] status: https://192.168.61.83:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:41:46.620334   69534 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8444/healthz ...
	I0927 01:41:46.625456   69534 api_server.go:279] https://192.168.61.83:8444/healthz returned 200:
	ok
	I0927 01:41:46.636549   69534 api_server.go:141] control plane version: v1.31.1
	I0927 01:41:46.636581   69534 api_server.go:131] duration metric: took 4.016488114s to wait for apiserver health ...
	I0927 01:41:46.636591   69534 cni.go:84] Creating CNI manager for ""
	I0927 01:41:46.636599   69534 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:41:46.638016   69534 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0927 01:41:42.459806   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:42.960200   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:43.459511   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:43.959467   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:44.459352   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:44.960147   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:45.459637   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:45.959535   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:46.459585   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:46.959579   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:46.639222   69534 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0927 01:41:46.651680   69534 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0927 01:41:46.671366   69534 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 01:41:46.684702   69534 system_pods.go:59] 8 kube-system pods found
	I0927 01:41:46.684740   69534 system_pods.go:61] "coredns-7c65d6cfc9-xtgdx" [6a5f97bd-0fbb-4220-a763-bb8ca6fab439] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0927 01:41:46.684752   69534 system_pods.go:61] "etcd-default-k8s-diff-port-368295" [2dbd4866-89f2-4a0c-ab8a-671ff0237bf3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0927 01:41:46.684761   69534 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-368295" [62865280-e996-45a9-a872-766e09d5b91c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0927 01:41:46.684774   69534 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-368295" [b0d06bec-2f5a-46e4-9d2d-b2ea7cdc7968] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0927 01:41:46.684781   69534 system_pods.go:61] "kube-proxy-xm2p8" [449495d5-a476-4abf-b6be-301b9ead92e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0927 01:41:46.684793   69534 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-368295" [71dadb93-c535-4ce3-8dd7-ffd4496bf0e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0927 01:41:46.684801   69534 system_pods.go:61] "metrics-server-6867b74b74-n9nsg" [fefb6977-44af-41f8-8a82-1dcd76374ac0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 01:41:46.684811   69534 system_pods.go:61] "storage-provisioner" [78bd924c-1d70-4eb6-9e2c-0e21ebc523dc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0927 01:41:46.684818   69534 system_pods.go:74] duration metric: took 13.431978ms to wait for pod list to return data ...
	I0927 01:41:46.684830   69534 node_conditions.go:102] verifying NodePressure condition ...
	I0927 01:41:46.690309   69534 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 01:41:46.690343   69534 node_conditions.go:123] node cpu capacity is 2
	I0927 01:41:46.690358   69534 node_conditions.go:105] duration metric: took 5.522911ms to run NodePressure ...
	I0927 01:41:46.690379   69534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:41:46.964511   69534 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0927 01:41:46.971731   69534 kubeadm.go:739] kubelet initialised
	I0927 01:41:46.971751   69534 kubeadm.go:740] duration metric: took 7.215476ms waiting for restarted kubelet to initialise ...
	I0927 01:41:46.971760   69534 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:41:46.978192   69534 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-xtgdx" in "kube-system" namespace to be "Ready" ...
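From here the restart logic waits up to 4m for each system-critical pod to report Ready. A rough kubectl equivalent for the coredns pod named above (illustrative only; assumes the kubeconfig context created for this profile, default-k8s-diff-port-368295, rather than minikube's internal pod_ready helpers):

    kubectl --context default-k8s-diff-port-368295 -n kube-system \
        wait pod coredns-7c65d6cfc9-xtgdx --for=condition=Ready --timeout=4m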
	I0927 01:41:45.459706   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:45.460242   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:45.460265   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:45.460161   70637 retry.go:31] will retry after 3.051133432s: waiting for machine to come up
	I0927 01:41:48.512758   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:48.513180   68676 main.go:141] libmachine: (no-preload-521072) DBG | unable to find current IP address of domain no-preload-521072 in network mk-no-preload-521072
	I0927 01:41:48.513208   68676 main.go:141] libmachine: (no-preload-521072) DBG | I0927 01:41:48.513118   70637 retry.go:31] will retry after 3.478053984s: waiting for machine to come up
	I0927 01:41:47.544064   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:50.042360   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:47.459645   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:47.959756   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:48.460088   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:48.959526   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:49.459321   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:49.960102   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:50.460203   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:50.960225   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:51.460182   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:51.959343   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:48.985840   69534 pod_ready.go:103] pod "coredns-7c65d6cfc9-xtgdx" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:51.506449   69534 pod_ready.go:103] pod "coredns-7c65d6cfc9-xtgdx" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:52.484646   69534 pod_ready.go:93] pod "coredns-7c65d6cfc9-xtgdx" in "kube-system" namespace has status "Ready":"True"
	I0927 01:41:52.484672   69534 pod_ready.go:82] duration metric: took 5.506454681s for pod "coredns-7c65d6cfc9-xtgdx" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:52.484685   69534 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:51.994746   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:51.995201   68676 main.go:141] libmachine: (no-preload-521072) Found IP for machine: 192.168.50.246
	I0927 01:41:51.995219   68676 main.go:141] libmachine: (no-preload-521072) Reserving static IP address...
	I0927 01:41:51.995230   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has current primary IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:51.995651   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "no-preload-521072", mac: "52:54:00:85:27:74", ip: "192.168.50.246"} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:51.995677   68676 main.go:141] libmachine: (no-preload-521072) Reserved static IP address: 192.168.50.246
	I0927 01:41:51.995695   68676 main.go:141] libmachine: (no-preload-521072) DBG | skip adding static IP to network mk-no-preload-521072 - found existing host DHCP lease matching {name: "no-preload-521072", mac: "52:54:00:85:27:74", ip: "192.168.50.246"}
	I0927 01:41:51.995713   68676 main.go:141] libmachine: (no-preload-521072) DBG | Getting to WaitForSSH function...
	I0927 01:41:51.995727   68676 main.go:141] libmachine: (no-preload-521072) Waiting for SSH to be available...
	I0927 01:41:51.998245   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:51.998590   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:51.998616   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:51.998748   68676 main.go:141] libmachine: (no-preload-521072) DBG | Using SSH client type: external
	I0927 01:41:51.998810   68676 main.go:141] libmachine: (no-preload-521072) DBG | Using SSH private key: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/no-preload-521072/id_rsa (-rw-------)
	I0927 01:41:51.998850   68676 main.go:141] libmachine: (no-preload-521072) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.246 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19711-14935/.minikube/machines/no-preload-521072/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 01:41:51.998866   68676 main.go:141] libmachine: (no-preload-521072) DBG | About to run SSH command:
	I0927 01:41:51.998877   68676 main.go:141] libmachine: (no-preload-521072) DBG | exit 0
	I0927 01:41:52.131754   68676 main.go:141] libmachine: (no-preload-521072) DBG | SSH cmd err, output: <nil>: 
	I0927 01:41:52.132117   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetConfigRaw
	I0927 01:41:52.132724   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetIP
	I0927 01:41:52.135236   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.135588   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:52.135615   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.135866   68676 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/no-preload-521072/config.json ...
	I0927 01:41:52.136059   68676 machine.go:93] provisionDockerMachine start ...
	I0927 01:41:52.136078   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	I0927 01:41:52.136300   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:41:52.138644   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.139009   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:52.139035   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.139215   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:41:52.139406   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:52.139602   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:52.139760   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:41:52.139931   68676 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:52.140139   68676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.246 22 <nil> <nil>}
	I0927 01:41:52.140151   68676 main.go:141] libmachine: About to run SSH command:
	hostname
	I0927 01:41:52.255655   68676 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0927 01:41:52.255690   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetMachineName
	I0927 01:41:52.255952   68676 buildroot.go:166] provisioning hostname "no-preload-521072"
	I0927 01:41:52.255968   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetMachineName
	I0927 01:41:52.256122   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:41:52.258599   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.258963   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:52.258994   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.259108   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:41:52.259322   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:52.259494   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:52.259676   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:41:52.259835   68676 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:52.260008   68676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.246 22 <nil> <nil>}
	I0927 01:41:52.260023   68676 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-521072 && echo "no-preload-521072" | sudo tee /etc/hostname
	I0927 01:41:52.405255   68676 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-521072
	
	I0927 01:41:52.405314   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:41:52.408593   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.408927   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:52.408973   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.409346   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:41:52.409591   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:52.409786   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:52.409940   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:41:52.410094   68676 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:52.410331   68676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.246 22 <nil> <nil>}
	I0927 01:41:52.410356   68676 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-521072' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-521072/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-521072' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 01:41:52.538244   68676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 01:41:52.538276   68676 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19711-14935/.minikube CaCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19711-14935/.minikube}
	I0927 01:41:52.538321   68676 buildroot.go:174] setting up certificates
	I0927 01:41:52.538335   68676 provision.go:84] configureAuth start
	I0927 01:41:52.538350   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetMachineName
	I0927 01:41:52.538644   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetIP
	I0927 01:41:52.541913   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.542334   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:52.542372   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.542540   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:41:52.544773   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.545127   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:52.545163   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.545357   68676 provision.go:143] copyHostCerts
	I0927 01:41:52.545415   68676 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem, removing ...
	I0927 01:41:52.545427   68676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem
	I0927 01:41:52.545496   68676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/ca.pem (1078 bytes)
	I0927 01:41:52.545614   68676 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem, removing ...
	I0927 01:41:52.545624   68676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem
	I0927 01:41:52.545655   68676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/cert.pem (1123 bytes)
	I0927 01:41:52.545732   68676 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem, removing ...
	I0927 01:41:52.545742   68676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem
	I0927 01:41:52.545768   68676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19711-14935/.minikube/key.pem (1675 bytes)
	I0927 01:41:52.545834   68676 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem org=jenkins.no-preload-521072 san=[127.0.0.1 192.168.50.246 localhost minikube no-preload-521072]
	I0927 01:41:52.738375   68676 provision.go:177] copyRemoteCerts
	I0927 01:41:52.738434   68676 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 01:41:52.738459   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:41:52.741146   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.741439   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:52.741456   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.741630   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:41:52.741828   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:52.741961   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:41:52.742086   68676 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/no-preload-521072/id_rsa Username:docker}
	I0927 01:41:52.830330   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0927 01:41:52.854664   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0927 01:41:52.879246   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0927 01:41:52.902734   68676 provision.go:87] duration metric: took 364.385528ms to configureAuth
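configureAuth above regenerated the host certificates, created a server certificate with SANs for the machine's addresses, and copied ca.pem, server.pem and server-key.pem into /etc/docker on the guest. If the TLS setup were in doubt, the SANs on the generated server cert could be inspected directly (illustrative check, not part of the test; the path is the one shown in the log):

    openssl x509 -noout -text \
        -in /home/jenkins/minikube-integration/19711-14935/.minikube/machines/server.pem \
        | grep -A1 'Subject Alternative Name'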
	I0927 01:41:52.902782   68676 buildroot.go:189] setting minikube options for container-runtime
	I0927 01:41:52.903017   68676 config.go:182] Loaded profile config "no-preload-521072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 01:41:52.903109   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:41:52.906143   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.906495   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:52.906526   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:52.906699   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:41:52.906917   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:52.907086   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:52.907211   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:41:52.907426   68676 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:52.907625   68676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.246 22 <nil> <nil>}
	I0927 01:41:52.907640   68676 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 01:41:53.162936   68676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 01:41:53.162960   68676 machine.go:96] duration metric: took 1.026891152s to provisionDockerMachine
	I0927 01:41:53.162971   68676 start.go:293] postStartSetup for "no-preload-521072" (driver="kvm2")
	I0927 01:41:53.162980   68676 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 01:41:53.162994   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	I0927 01:41:53.163325   68676 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 01:41:53.163360   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:41:53.166007   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:53.166478   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:53.166516   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:53.166726   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:41:53.166919   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:53.167103   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:41:53.167253   68676 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/no-preload-521072/id_rsa Username:docker}
	I0927 01:41:53.254620   68676 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 01:41:53.259139   68676 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 01:41:53.259160   68676 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/addons for local assets ...
	I0927 01:41:53.259236   68676 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14935/.minikube/files for local assets ...
	I0927 01:41:53.259341   68676 filesync.go:149] local asset: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem -> 221382.pem in /etc/ssl/certs
	I0927 01:41:53.259465   68676 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 01:41:53.269711   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /etc/ssl/certs/221382.pem (1708 bytes)
	I0927 01:41:53.294563   68676 start.go:296] duration metric: took 131.58032ms for postStartSetup
	I0927 01:41:53.294602   68676 fix.go:56] duration metric: took 19.766156729s for fixHost
	I0927 01:41:53.294626   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:41:53.297597   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:53.297897   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:53.297928   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:53.298092   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:41:53.298275   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:53.298460   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:53.298632   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:41:53.298821   68676 main.go:141] libmachine: Using SSH client type: native
	I0927 01:41:53.298997   68676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.246 22 <nil> <nil>}
	I0927 01:41:53.299010   68676 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 01:41:53.416459   68676 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727401313.370238189
	
	I0927 01:41:53.416488   68676 fix.go:216] guest clock: 1727401313.370238189
	I0927 01:41:53.416497   68676 fix.go:229] Guest: 2024-09-27 01:41:53.370238189 +0000 UTC Remote: 2024-09-27 01:41:53.294607439 +0000 UTC m=+358.400757430 (delta=75.63075ms)
	I0927 01:41:53.416521   68676 fix.go:200] guest clock delta is within tolerance: 75.63075ms
	I0927 01:41:53.416542   68676 start.go:83] releasing machines lock for "no-preload-521072", held for 19.888127741s
	I0927 01:41:53.416581   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	I0927 01:41:53.416835   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetIP
	I0927 01:41:53.419800   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:53.420124   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:53.420153   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:53.420309   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	I0927 01:41:53.420730   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	I0927 01:41:53.420905   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	I0927 01:41:53.420988   68676 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 01:41:53.421036   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:41:53.421126   68676 ssh_runner.go:195] Run: cat /version.json
	I0927 01:41:53.421148   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:41:53.423529   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:53.423882   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:53.423916   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:53.423937   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:53.424023   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:41:53.424180   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:53.424308   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:41:53.424365   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:53.424412   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:53.424464   68676 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/no-preload-521072/id_rsa Username:docker}
	I0927 01:41:53.424567   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:41:53.424701   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:41:53.424838   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:41:53.424990   68676 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/no-preload-521072/id_rsa Username:docker}
	I0927 01:41:53.527586   68676 ssh_runner.go:195] Run: systemctl --version
	I0927 01:41:53.533685   68676 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 01:41:53.680850   68676 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 01:41:53.686769   68676 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 01:41:53.686831   68676 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 01:41:53.702686   68676 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0927 01:41:53.702709   68676 start.go:495] detecting cgroup driver to use...
	I0927 01:41:53.702787   68676 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 01:41:53.720756   68676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 01:41:53.736843   68676 docker.go:217] disabling cri-docker service (if available) ...
	I0927 01:41:53.736920   68676 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 01:41:53.752063   68676 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 01:41:53.768140   68676 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 01:41:53.890040   68676 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 01:41:54.044033   68676 docker.go:233] disabling docker service ...
	I0927 01:41:54.044100   68676 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 01:41:54.060061   68676 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 01:41:54.073201   68676 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 01:41:54.225559   68676 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 01:41:54.367269   68676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
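The sequence above shuts off the Docker and cri-docker units so that cri-o is the only active runtime on the guest. Condensed, it amounts to (commands as shown in the log):

    sudo systemctl stop -f cri-docker.socket cri-docker.service docker.socket docker.service
    sudo systemctl disable cri-docker.socket docker.socket
    sudo systemctl mask cri-docker.service docker.service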
	I0927 01:41:54.381517   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 01:41:54.401099   68676 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0927 01:41:54.401164   68676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:54.412620   68676 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 01:41:54.412687   68676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:54.425942   68676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:54.437451   68676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:54.449115   68676 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 01:41:54.460383   68676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:54.471393   68676 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:54.489649   68676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:41:54.500699   68676 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 01:41:54.511012   68676 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0927 01:41:54.511061   68676 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0927 01:41:54.524738   68676 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 01:41:54.535353   68676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:41:54.672416   68676 ssh_runner.go:195] Run: sudo systemctl restart crio
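Before that restart, cri-o was reconfigured for this profile: crictl is pointed at the cri-o socket, the pause image and cgroup driver are rewritten in /etc/crio/crio.conf.d/02-crio.conf, unprivileged low ports are allowed via a default sysctl, and br_netfilter plus IPv4 forwarding are enabled. A condensed sketch of the same edits, taken from the commands above (run on the guest):

    echo 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo modprobe br_netfilter
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
    sudo systemctl daemon-reload && sudo systemctl restart crio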
	I0927 01:41:54.763423   68676 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 01:41:54.763506   68676 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 01:41:54.768758   68676 start.go:563] Will wait 60s for crictl version
	I0927 01:41:54.768823   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:41:54.772980   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 01:41:54.814375   68676 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0927 01:41:54.814460   68676 ssh_runner.go:195] Run: crio --version
	I0927 01:41:54.844002   68676 ssh_runner.go:195] Run: crio --version
	I0927 01:41:54.876692   68676 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0927 01:41:54.877765   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetIP
	I0927 01:41:54.880320   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:54.880817   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:41:54.880852   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:41:54.881008   68676 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0927 01:41:54.885225   68676 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 01:41:54.897661   68676 kubeadm.go:883] updating cluster {Name:no-preload-521072 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-521072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 01:41:54.897768   68676 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 01:41:54.897810   68676 ssh_runner.go:195] Run: sudo crictl images --output json
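With the profile config loaded, the next step checks whether the runtime already holds the v1.31.1 images. The same check can be run by hand on the guest (illustrative only; jq is an assumption and is not necessarily present on the Buildroot image):

    sudo crictl images --output json | jq -r '.images[].repoTags[]' | sort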
	I0927 01:41:52.542326   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:54.543472   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:52.459589   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:52.960231   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:53.459448   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:53.960120   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:54.460016   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:54.959681   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:55.459321   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:55.959819   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:56.459221   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:56.959296   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:54.491390   69534 pod_ready.go:103] pod "etcd-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:56.997932   69534 pod_ready.go:103] pod "etcd-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:54.937979   68676 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0927 01:41:54.938000   68676 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0927 01:41:54.938055   68676 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0927 01:41:54.938088   68676 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:41:54.938103   68676 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0927 01:41:54.938124   68676 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0927 01:41:54.938101   68676 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0927 01:41:54.938180   68676 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0927 01:41:54.938069   68676 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0927 01:41:54.938088   68676 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0927 01:41:54.939611   68676 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0927 01:41:54.939853   68676 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:41:54.939867   68676 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0927 01:41:54.939872   68676 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0927 01:41:54.939875   68676 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0927 01:41:54.939868   68676 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0927 01:41:54.939932   68676 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0927 01:41:54.939954   68676 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0927 01:41:55.100149   68676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0927 01:41:55.104432   68676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0927 01:41:55.122220   68676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0927 01:41:55.146745   68676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0927 01:41:55.148808   68676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0927 01:41:55.159749   68676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0927 01:41:55.194662   68676 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0927 01:41:55.194710   68676 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0927 01:41:55.194764   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:41:55.218262   68676 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0927 01:41:55.218302   68676 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0927 01:41:55.218348   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:41:55.275530   68676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0927 01:41:55.339428   68676 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0927 01:41:55.339476   68676 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0927 01:41:55.339488   68676 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0927 01:41:55.339526   68676 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0927 01:41:55.339554   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:41:55.339558   68676 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0927 01:41:55.339569   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:41:55.339573   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0927 01:41:55.339584   68676 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0927 01:41:55.339619   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:41:55.339625   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0927 01:41:55.339689   68676 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0927 01:41:55.339733   68676 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0927 01:41:55.339772   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:41:55.392986   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0927 01:41:55.393033   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0927 01:41:55.403596   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0927 01:41:55.403658   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0927 01:41:55.403601   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0927 01:41:55.404180   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0927 01:41:55.528983   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0927 01:41:55.529008   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0927 01:41:55.529013   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0927 01:41:55.556122   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0927 01:41:55.556146   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0927 01:41:55.559222   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0927 01:41:55.668914   68676 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0927 01:41:55.669041   68676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0927 01:41:55.671951   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0927 01:41:55.672026   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0927 01:41:55.675810   68676 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0927 01:41:55.675854   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0927 01:41:55.675883   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0927 01:41:55.675910   68676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0927 01:41:55.687199   68676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0927 01:41:55.687234   68676 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0927 01:41:55.687294   68676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0927 01:41:55.766777   68676 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0927 01:41:55.766775   68676 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0927 01:41:55.766894   68676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0927 01:41:55.766901   68676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0927 01:41:55.776811   68676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0927 01:41:55.776824   68676 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0927 01:41:55.776933   68676 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0927 01:41:55.777033   68676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0927 01:41:55.776938   68676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0927 01:41:56.125882   68676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:41:57.825382   68676 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1: (2.048325373s)
	I0927 01:41:57.825460   68676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0927 01:41:57.825396   68676 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: (2.048309349s)
	I0927 01:41:57.825483   68676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0927 01:41:57.825401   68676 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.699485021s)
	I0927 01:41:57.825517   68676 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0927 01:41:57.825520   68676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.138185505s)
	I0927 01:41:57.825540   68676 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0927 01:41:57.825548   68676 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:41:57.825411   68676 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.058505151s)
	I0927 01:41:57.825566   68676 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0927 01:41:57.825573   68676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0927 01:41:57.825414   68676 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.058497946s)
	I0927 01:41:57.825584   68676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0927 01:41:57.825596   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:41:57.825613   68676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0927 01:41:59.788391   68676 ssh_runner.go:235] Completed: which crictl: (1.962775321s)
	I0927 01:41:59.788412   68676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.962779963s)
	I0927 01:41:59.788429   68676 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0927 01:41:59.788457   68676 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0927 01:41:59.788462   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:41:59.788499   68676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0927 01:41:57.043267   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:59.542589   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:57.459172   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:57.960231   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:58.459323   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:58.960219   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:59.459916   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:59.959858   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:00.460249   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:00.959246   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:01.459839   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:01.959224   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:41:59.490443   69534 pod_ready.go:103] pod "etcd-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:59.992727   69534 pod_ready.go:93] pod "etcd-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"True"
	I0927 01:41:59.992753   69534 pod_ready.go:82] duration metric: took 7.508057707s for pod "etcd-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:59.992766   69534 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:59.998326   69534 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"True"
	I0927 01:41:59.998357   69534 pod_ready.go:82] duration metric: took 5.584215ms for pod "kube-apiserver-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:41:59.998372   69534 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:00.003176   69534 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"True"
	I0927 01:42:00.003197   69534 pod_ready.go:82] duration metric: took 4.816939ms for pod "kube-controller-manager-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:00.003209   69534 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-xm2p8" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:00.009089   69534 pod_ready.go:93] pod "kube-proxy-xm2p8" in "kube-system" namespace has status "Ready":"True"
	I0927 01:42:00.009110   69534 pod_ready.go:82] duration metric: took 5.893939ms for pod "kube-proxy-xm2p8" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:00.009119   69534 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:00.014172   69534 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"True"
	I0927 01:42:00.014197   69534 pod_ready.go:82] duration metric: took 5.072107ms for pod "kube-scheduler-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:00.014209   69534 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:02.021372   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:01.758278   68676 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.969794291s)
	I0927 01:42:01.758369   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:42:01.758392   68676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.969869427s)
	I0927 01:42:01.758415   68676 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0927 01:42:01.758445   68676 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0927 01:42:01.758494   68676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0927 01:42:01.796910   68676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:42:03.934871   68676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (2.176354046s)
	I0927 01:42:03.934903   68676 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0927 01:42:03.934921   68676 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0927 01:42:03.934927   68676 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.137986898s)
	I0927 01:42:03.934972   68676 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0927 01:42:03.934994   68676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0927 01:42:03.935050   68676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0927 01:42:03.939942   68676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0927 01:42:02.042617   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:04.042848   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:02.460232   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:02.959635   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:03.459610   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:03.959412   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:04.459857   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:04.959495   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:05.459972   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:05.959931   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:06.459460   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:06.959627   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:04.021759   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:06.521921   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:07.308972   68676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.373952677s)
	I0927 01:42:07.308999   68676 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0927 01:42:07.309024   68676 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0927 01:42:07.309070   68676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0927 01:42:09.378517   68676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.06942074s)
	I0927 01:42:09.378550   68676 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0927 01:42:09.378579   68676 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0927 01:42:09.378629   68676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0927 01:42:06.546731   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:09.044481   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:07.459395   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:07.959574   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:08.460234   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:08.959281   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:09.459240   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:09.959429   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:10.459865   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:10.959431   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:11.459459   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:11.959447   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:09.020456   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:11.021689   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:10.030049   68676 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19711-14935/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0927 01:42:10.030100   68676 cache_images.go:123] Successfully loaded all cached images
	I0927 01:42:10.030106   68676 cache_images.go:92] duration metric: took 15.09209404s to LoadCachedImages
	I0927 01:42:10.030118   68676 kubeadm.go:934] updating node { 192.168.50.246 8443 v1.31.1 crio true true} ...
	I0927 01:42:10.030211   68676 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-521072 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.246
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-521072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 01:42:10.030273   68676 ssh_runner.go:195] Run: crio config
	I0927 01:42:10.078318   68676 cni.go:84] Creating CNI manager for ""
	I0927 01:42:10.078342   68676 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:42:10.078351   68676 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 01:42:10.078370   68676 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.246 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-521072 NodeName:no-preload-521072 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.246"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.246 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0927 01:42:10.078506   68676 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.246
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-521072"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.246
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.246"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0927 01:42:10.078580   68676 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 01:42:10.089137   68676 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 01:42:10.089212   68676 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0927 01:42:10.098310   68676 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0927 01:42:10.116172   68676 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 01:42:10.134642   68676 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0927 01:42:10.152442   68676 ssh_runner.go:195] Run: grep 192.168.50.246	control-plane.minikube.internal$ /etc/hosts
	I0927 01:42:10.156477   68676 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.246	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 01:42:10.169007   68676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:42:10.288382   68676 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 01:42:10.306047   68676 certs.go:68] Setting up /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/no-preload-521072 for IP: 192.168.50.246
	I0927 01:42:10.306077   68676 certs.go:194] generating shared ca certs ...
	I0927 01:42:10.306096   68676 certs.go:226] acquiring lock for ca certs: {Name:mkdfc5b7e93f77f5ae72cc653545624244421aa5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:42:10.306276   68676 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key
	I0927 01:42:10.306331   68676 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key
	I0927 01:42:10.306350   68676 certs.go:256] generating profile certs ...
	I0927 01:42:10.306453   68676 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/no-preload-521072/client.key
	I0927 01:42:10.306553   68676 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/no-preload-521072/apiserver.key.735097eb
	I0927 01:42:10.306613   68676 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/no-preload-521072/proxy-client.key
	I0927 01:42:10.306761   68676 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem (1338 bytes)
	W0927 01:42:10.306797   68676 certs.go:480] ignoring /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138_empty.pem, impossibly tiny 0 bytes
	I0927 01:42:10.306808   68676 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca-key.pem (1671 bytes)
	I0927 01:42:10.306833   68676 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/ca.pem (1078 bytes)
	I0927 01:42:10.306854   68676 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/cert.pem (1123 bytes)
	I0927 01:42:10.306878   68676 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/certs/key.pem (1675 bytes)
	I0927 01:42:10.306916   68676 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem (1708 bytes)
	I0927 01:42:10.307598   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 01:42:10.344570   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0927 01:42:10.386834   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 01:42:10.432022   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0927 01:42:10.462348   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/no-preload-521072/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0927 01:42:10.490015   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/no-preload-521072/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0927 01:42:10.518144   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/no-preload-521072/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 01:42:10.545290   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/no-preload-521072/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0927 01:42:10.572460   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 01:42:10.597526   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/certs/22138.pem --> /usr/share/ca-certificates/22138.pem (1338 bytes)
	I0927 01:42:10.622287   68676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/ssl/certs/221382.pem --> /usr/share/ca-certificates/221382.pem (1708 bytes)
	I0927 01:42:10.646020   68676 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 01:42:10.662972   68676 ssh_runner.go:195] Run: openssl version
	I0927 01:42:10.668844   68676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22138.pem && ln -fs /usr/share/ca-certificates/22138.pem /etc/ssl/certs/22138.pem"
	I0927 01:42:10.680020   68676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22138.pem
	I0927 01:42:10.684620   68676 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 00:32 /usr/share/ca-certificates/22138.pem
	I0927 01:42:10.684678   68676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22138.pem
	I0927 01:42:10.690694   68676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22138.pem /etc/ssl/certs/51391683.0"
	I0927 01:42:10.702115   68676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221382.pem && ln -fs /usr/share/ca-certificates/221382.pem /etc/ssl/certs/221382.pem"
	I0927 01:42:10.713424   68676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/221382.pem
	I0927 01:42:10.717918   68676 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 00:32 /usr/share/ca-certificates/221382.pem
	I0927 01:42:10.717971   68676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221382.pem
	I0927 01:42:10.723601   68676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221382.pem /etc/ssl/certs/3ec20f2e.0"
	I0927 01:42:10.734870   68676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 01:42:10.747370   68676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:42:10.752016   68676 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:16 /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:42:10.752072   68676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:42:10.757964   68676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 01:42:10.769560   68676 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 01:42:10.774457   68676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0927 01:42:10.780719   68676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0927 01:42:10.786653   68676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0927 01:42:10.792671   68676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0927 01:42:10.798674   68676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0927 01:42:10.804910   68676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0927 01:42:10.811007   68676 kubeadm.go:392] StartCluster: {Name:no-preload-521072 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-521072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 01:42:10.811114   68676 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0927 01:42:10.811178   68676 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 01:42:10.851017   68676 cri.go:89] found id: ""
	I0927 01:42:10.851084   68676 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0927 01:42:10.864997   68676 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0927 01:42:10.865016   68676 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0927 01:42:10.865062   68676 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0927 01:42:10.877088   68676 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0927 01:42:10.878133   68676 kubeconfig.go:125] found "no-preload-521072" server: "https://192.168.50.246:8443"
	I0927 01:42:10.880637   68676 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0927 01:42:10.893554   68676 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.246
	I0927 01:42:10.893578   68676 kubeadm.go:1160] stopping kube-system containers ...
	I0927 01:42:10.893592   68676 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0927 01:42:10.893629   68676 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 01:42:10.935734   68676 cri.go:89] found id: ""
	I0927 01:42:10.935794   68676 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0927 01:42:10.954141   68676 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 01:42:10.965345   68676 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 01:42:10.965363   68676 kubeadm.go:157] found existing configuration files:
	
	I0927 01:42:10.965413   68676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 01:42:10.975561   68676 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 01:42:10.975628   68676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 01:42:10.985747   68676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 01:42:10.995026   68676 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 01:42:10.995089   68676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 01:42:11.006650   68676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 01:42:11.016964   68676 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 01:42:11.017034   68676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 01:42:11.028756   68676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 01:42:11.039002   68676 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 01:42:11.039072   68676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 01:42:11.050382   68676 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 01:42:11.060839   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:42:11.177447   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:42:12.481118   68676 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.303633907s)
	I0927 01:42:12.481149   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:42:12.706344   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:42:12.774938   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:42:12.866467   68676 api_server.go:52] waiting for apiserver process to appear ...
	I0927 01:42:12.866552   68676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:13.366860   68676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:13.866951   68676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:13.882411   68676 api_server.go:72] duration metric: took 1.015943274s to wait for apiserver process to appear ...
	I0927 01:42:13.882435   68676 api_server.go:88] waiting for apiserver healthz status ...
	I0927 01:42:13.882457   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:42:13.882963   68676 api_server.go:269] stopped: https://192.168.50.246:8443/healthz: Get "https://192.168.50.246:8443/healthz": dial tcp 192.168.50.246:8443: connect: connection refused
	I0927 01:42:14.382489   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:42:11.543818   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:14.042536   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:12.459771   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:12.959727   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:13.459428   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:13.959255   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:14.460003   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:14.959853   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:15.460237   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:15.959974   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:16.459420   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:16.959321   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:13.527793   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:16.023080   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:17.124839   68676 api_server.go:279] https://192.168.50.246:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0927 01:42:17.124867   68676 api_server.go:103] status: https://192.168.50.246:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0927 01:42:17.124885   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:42:17.174869   68676 api_server.go:279] https://192.168.50.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:42:17.174905   68676 api_server.go:103] status: https://192.168.50.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:42:17.383128   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:42:17.389594   68676 api_server.go:279] https://192.168.50.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:42:17.389629   68676 api_server.go:103] status: https://192.168.50.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:42:17.883197   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:42:17.888706   68676 api_server.go:279] https://192.168.50.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:42:17.888734   68676 api_server.go:103] status: https://192.168.50.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:42:18.382982   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:42:18.387847   68676 api_server.go:279] https://192.168.50.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:42:18.387877   68676 api_server.go:103] status: https://192.168.50.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:42:18.882844   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:42:18.887144   68676 api_server.go:279] https://192.168.50.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:42:18.887178   68676 api_server.go:103] status: https://192.168.50.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:42:19.382711   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:42:19.388007   68676 api_server.go:279] https://192.168.50.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:42:19.388037   68676 api_server.go:103] status: https://192.168.50.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:42:19.882613   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:42:19.886781   68676 api_server.go:279] https://192.168.50.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 01:42:19.886801   68676 api_server.go:103] status: https://192.168.50.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 01:42:20.382907   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:42:20.387083   68676 api_server.go:279] https://192.168.50.246:8443/healthz returned 200:
	ok
	I0927 01:42:20.393697   68676 api_server.go:141] control plane version: v1.31.1
	I0927 01:42:20.393725   68676 api_server.go:131] duration metric: took 6.511280572s to wait for apiserver health ...
	I0927 01:42:20.393735   68676 cni.go:84] Creating CNI manager for ""
	I0927 01:42:20.393743   68676 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:42:20.395270   68676 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0927 01:42:16.543525   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:19.041726   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:20.396770   68676 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0927 01:42:20.407891   68676 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0927 01:42:20.427815   68676 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 01:42:20.436940   68676 system_pods.go:59] 8 kube-system pods found
	I0927 01:42:20.436980   68676 system_pods.go:61] "coredns-7c65d6cfc9-7q54t" [f320e945-a1d6-4109-a0cc-5bd4e3c1bfba] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0927 01:42:20.436989   68676 system_pods.go:61] "etcd-no-preload-521072" [6c63ce89-47bf-4d67-b5db-273a046c4b51] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0927 01:42:20.436997   68676 system_pods.go:61] "kube-apiserver-no-preload-521072" [e4804d4b-0532-46c7-8579-a829a6c5254c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0927 01:42:20.437005   68676 system_pods.go:61] "kube-controller-manager-no-preload-521072" [5029e53b-ae24-41fb-aa58-14faf0440adb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0927 01:42:20.437012   68676 system_pods.go:61] "kube-proxy-wkcb8" [ea79339c-b2f0-4cb8-ab57-4f13f689f504] Running
	I0927 01:42:20.437020   68676 system_pods.go:61] "kube-scheduler-no-preload-521072" [b70fd9f0-c131-4c13-b53f-46c650a5dcf8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0927 01:42:20.437032   68676 system_pods.go:61] "metrics-server-6867b74b74-cc9pp" [a840ca52-d2b8-47a5-b379-30504658e0d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 01:42:20.437038   68676 system_pods.go:61] "storage-provisioner" [b4595dc3-c439-4615-95b7-2009476c779c] Running
	I0927 01:42:20.437049   68676 system_pods.go:74] duration metric: took 9.213874ms to wait for pod list to return data ...
	I0927 01:42:20.437057   68676 node_conditions.go:102] verifying NodePressure condition ...
	I0927 01:42:20.440323   68676 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 01:42:20.440345   68676 node_conditions.go:123] node cpu capacity is 2
	I0927 01:42:20.440356   68676 node_conditions.go:105] duration metric: took 3.294768ms to run NodePressure ...
	I0927 01:42:20.440372   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 01:42:20.710186   68676 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0927 01:42:20.713940   68676 kubeadm.go:739] kubelet initialised
	I0927 01:42:20.713958   68676 kubeadm.go:740] duration metric: took 3.749241ms waiting for restarted kubelet to initialise ...
	I0927 01:42:20.713965   68676 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:42:20.718807   68676 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-7q54t" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:20.722955   68676 pod_ready.go:98] node "no-preload-521072" hosting pod "coredns-7c65d6cfc9-7q54t" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:20.722976   68676 pod_ready.go:82] duration metric: took 4.147896ms for pod "coredns-7c65d6cfc9-7q54t" in "kube-system" namespace to be "Ready" ...
	E0927 01:42:20.722984   68676 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-521072" hosting pod "coredns-7c65d6cfc9-7q54t" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:20.722991   68676 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:20.727569   68676 pod_ready.go:98] node "no-preload-521072" hosting pod "etcd-no-preload-521072" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:20.727596   68676 pod_ready.go:82] duration metric: took 4.598426ms for pod "etcd-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	E0927 01:42:20.727604   68676 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-521072" hosting pod "etcd-no-preload-521072" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:20.727611   68676 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:20.731845   68676 pod_ready.go:98] node "no-preload-521072" hosting pod "kube-apiserver-no-preload-521072" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:20.731871   68676 pod_ready.go:82] duration metric: took 4.25326ms for pod "kube-apiserver-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	E0927 01:42:20.731881   68676 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-521072" hosting pod "kube-apiserver-no-preload-521072" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:20.731889   68676 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:20.830881   68676 pod_ready.go:98] node "no-preload-521072" hosting pod "kube-controller-manager-no-preload-521072" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:20.830909   68676 pod_ready.go:82] duration metric: took 99.009569ms for pod "kube-controller-manager-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	E0927 01:42:20.830918   68676 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-521072" hosting pod "kube-controller-manager-no-preload-521072" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:20.830923   68676 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-wkcb8" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:21.232434   68676 pod_ready.go:98] node "no-preload-521072" hosting pod "kube-proxy-wkcb8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:21.232463   68676 pod_ready.go:82] duration metric: took 401.530413ms for pod "kube-proxy-wkcb8" in "kube-system" namespace to be "Ready" ...
	E0927 01:42:21.232473   68676 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-521072" hosting pod "kube-proxy-wkcb8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:21.232485   68676 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:21.630791   68676 pod_ready.go:98] node "no-preload-521072" hosting pod "kube-scheduler-no-preload-521072" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:21.630818   68676 pod_ready.go:82] duration metric: took 398.325039ms for pod "kube-scheduler-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	E0927 01:42:21.630829   68676 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-521072" hosting pod "kube-scheduler-no-preload-521072" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:21.630836   68676 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:22.032173   68676 pod_ready.go:98] node "no-preload-521072" hosting pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:22.032200   68676 pod_ready.go:82] duration metric: took 401.353533ms for pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace to be "Ready" ...
	E0927 01:42:22.032208   68676 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-521072" hosting pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:22.032215   68676 pod_ready.go:39] duration metric: took 1.318241972s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:42:22.032233   68676 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0927 01:42:22.046872   68676 ops.go:34] apiserver oom_adj: -16
	I0927 01:42:22.046898   68676 kubeadm.go:597] duration metric: took 11.181875532s to restartPrimaryControlPlane
	I0927 01:42:22.046908   68676 kubeadm.go:394] duration metric: took 11.235909243s to StartCluster
	I0927 01:42:22.046923   68676 settings.go:142] acquiring lock: {Name:mk5dca3ab86dd3a71947d9d84c3d32131258c6f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:42:22.046984   68676 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 01:42:22.048611   68676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/kubeconfig: {Name:mke01ed683bdb96463571316956510763878395f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:42:22.048864   68676 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 01:42:22.048932   68676 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0927 01:42:22.049029   68676 addons.go:69] Setting storage-provisioner=true in profile "no-preload-521072"
	I0927 01:42:22.049050   68676 addons.go:234] Setting addon storage-provisioner=true in "no-preload-521072"
	W0927 01:42:22.049060   68676 addons.go:243] addon storage-provisioner should already be in state true
	I0927 01:42:22.049066   68676 addons.go:69] Setting default-storageclass=true in profile "no-preload-521072"
	I0927 01:42:22.049088   68676 host.go:66] Checking if "no-preload-521072" exists ...
	I0927 01:42:22.049092   68676 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-521072"
	I0927 01:42:22.049096   68676 addons.go:69] Setting metrics-server=true in profile "no-preload-521072"
	I0927 01:42:22.049117   68676 addons.go:234] Setting addon metrics-server=true in "no-preload-521072"
	I0927 01:42:22.049123   68676 config.go:182] Loaded profile config "no-preload-521072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	W0927 01:42:22.049134   68676 addons.go:243] addon metrics-server should already be in state true
	I0927 01:42:22.049167   68676 host.go:66] Checking if "no-preload-521072" exists ...
	I0927 01:42:22.049423   68676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:42:22.049455   68676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:42:22.049478   68676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:42:22.049507   68676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:42:22.049535   68676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:42:22.049555   68676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:42:22.050564   68676 out.go:177] * Verifying Kubernetes components...
	I0927 01:42:22.051717   68676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:42:22.088020   68676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34035
	I0927 01:42:22.088454   68676 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:42:22.088964   68676 main.go:141] libmachine: Using API Version  1
	I0927 01:42:22.088985   68676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:42:22.089333   68676 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:42:22.089793   68676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:42:22.089825   68676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:42:22.091735   68676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40053
	I0927 01:42:22.091853   68676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45581
	I0927 01:42:22.092236   68676 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:42:22.092295   68676 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:42:22.092659   68676 main.go:141] libmachine: Using API Version  1
	I0927 01:42:22.092677   68676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:42:22.092817   68676 main.go:141] libmachine: Using API Version  1
	I0927 01:42:22.092840   68676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:42:22.093170   68676 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:42:22.093344   68676 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:42:22.093387   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetState
	I0927 01:42:22.093922   68676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:42:22.093949   68676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:42:22.097310   68676 addons.go:234] Setting addon default-storageclass=true in "no-preload-521072"
	W0927 01:42:22.097333   68676 addons.go:243] addon default-storageclass should already be in state true
	I0927 01:42:22.097368   68676 host.go:66] Checking if "no-preload-521072" exists ...
	I0927 01:42:22.097705   68676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:42:22.097747   68676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:42:22.110628   68676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34585
	I0927 01:42:22.111053   68676 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:42:22.111604   68676 main.go:141] libmachine: Using API Version  1
	I0927 01:42:22.111629   68676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:42:22.112113   68676 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:42:22.112329   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetState
	I0927 01:42:22.113354   68676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43947
	I0927 01:42:22.114009   68676 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:42:22.114749   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	I0927 01:42:22.115666   68676 main.go:141] libmachine: Using API Version  1
	I0927 01:42:22.115690   68676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:42:22.116105   68676 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:42:22.116374   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetState
	I0927 01:42:22.116862   68676 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0927 01:42:22.118124   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	I0927 01:42:22.118135   68676 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0927 01:42:22.118162   68676 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0927 01:42:22.118180   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:42:22.119866   68676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38775
	I0927 01:42:22.120317   68676 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:42:22.120908   68676 main.go:141] libmachine: Using API Version  1
	I0927 01:42:22.120931   68676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:42:22.121113   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:42:22.121319   68676 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:42:22.121556   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:42:22.121576   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:42:22.122025   68676 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:42:22.122051   68676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:42:22.122280   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:42:22.122487   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:42:22.122652   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:42:22.122781   68676 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/no-preload-521072/id_rsa Username:docker}
	I0927 01:42:22.126076   68676 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:42:17.459443   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:17.959426   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:18.460250   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:18.959989   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:19.459981   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:19.959969   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:20.459758   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:20.959440   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:21.460115   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:21.959238   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:18.521751   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:21.020226   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:23.021393   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:22.127430   68676 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 01:42:22.127446   68676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0927 01:42:22.127460   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:42:22.130498   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:42:22.131040   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:42:22.131061   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:42:22.131357   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:42:22.131544   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:42:22.131670   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:42:22.131997   68676 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/no-preload-521072/id_rsa Username:docker}
	I0927 01:42:22.138657   68676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44875
	I0927 01:42:22.138987   68676 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:42:22.139420   68676 main.go:141] libmachine: Using API Version  1
	I0927 01:42:22.139438   68676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:42:22.139824   68676 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:42:22.139998   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetState
	I0927 01:42:22.141454   68676 main.go:141] libmachine: (no-preload-521072) Calling .DriverName
	I0927 01:42:22.141664   68676 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0927 01:42:22.141673   68676 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0927 01:42:22.141683   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHHostname
	I0927 01:42:22.144221   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:42:22.144651   68676 main.go:141] libmachine: (no-preload-521072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:27:74", ip: ""} in network mk-no-preload-521072: {Iface:virbr2 ExpiryTime:2024-09-27 02:41:45 +0000 UTC Type:0 Mac:52:54:00:85:27:74 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:no-preload-521072 Clientid:01:52:54:00:85:27:74}
	I0927 01:42:22.144670   68676 main.go:141] libmachine: (no-preload-521072) DBG | domain no-preload-521072 has defined IP address 192.168.50.246 and MAC address 52:54:00:85:27:74 in network mk-no-preload-521072
	I0927 01:42:22.144765   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHPort
	I0927 01:42:22.144931   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHKeyPath
	I0927 01:42:22.145071   68676 main.go:141] libmachine: (no-preload-521072) Calling .GetSSHUsername
	I0927 01:42:22.145208   68676 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/no-preload-521072/id_rsa Username:docker}
	I0927 01:42:22.244289   68676 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 01:42:22.261345   68676 node_ready.go:35] waiting up to 6m0s for node "no-preload-521072" to be "Ready" ...
	I0927 01:42:22.365923   68676 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0927 01:42:22.365953   68676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0927 01:42:22.387392   68676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 01:42:22.389353   68676 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0927 01:42:22.389379   68676 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0927 01:42:22.406994   68676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0927 01:42:22.491559   68676 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 01:42:22.491581   68676 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0927 01:42:22.586476   68676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 01:42:23.660676   68676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.273241029s)
	I0927 01:42:23.660733   68676 main.go:141] libmachine: Making call to close driver server
	I0927 01:42:23.660750   68676 main.go:141] libmachine: (no-preload-521072) Calling .Close
	I0927 01:42:23.660732   68676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.253706672s)
	I0927 01:42:23.660831   68676 main.go:141] libmachine: Making call to close driver server
	I0927 01:42:23.660841   68676 main.go:141] libmachine: (no-preload-521072) Calling .Close
	I0927 01:42:23.660851   68676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.074315804s)
	I0927 01:42:23.661081   68676 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:42:23.661098   68676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:42:23.661109   68676 main.go:141] libmachine: Making call to close driver server
	I0927 01:42:23.661108   68676 main.go:141] libmachine: Making call to close driver server
	I0927 01:42:23.661118   68676 main.go:141] libmachine: (no-preload-521072) Calling .Close
	I0927 01:42:23.661153   68676 main.go:141] libmachine: (no-preload-521072) DBG | Closing plugin on server side
	I0927 01:42:23.661205   68676 main.go:141] libmachine: (no-preload-521072) DBG | Closing plugin on server side
	I0927 01:42:23.661161   68676 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:42:23.661223   68676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:42:23.661230   68676 main.go:141] libmachine: Making call to close driver server
	I0927 01:42:23.661238   68676 main.go:141] libmachine: (no-preload-521072) Calling .Close
	I0927 01:42:23.661125   68676 main.go:141] libmachine: (no-preload-521072) Calling .Close
	I0927 01:42:23.661607   68676 main.go:141] libmachine: (no-preload-521072) DBG | Closing plugin on server side
	I0927 01:42:23.661608   68676 main.go:141] libmachine: (no-preload-521072) DBG | Closing plugin on server side
	I0927 01:42:23.661621   68676 main.go:141] libmachine: (no-preload-521072) DBG | Closing plugin on server side
	I0927 01:42:23.661631   68676 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:42:23.661632   68676 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:42:23.661637   68676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:42:23.661641   68676 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:42:23.661645   68676 main.go:141] libmachine: Making call to close driver server
	I0927 01:42:23.661649   68676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:42:23.661650   68676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:42:23.661653   68676 main.go:141] libmachine: (no-preload-521072) Calling .Close
	I0927 01:42:23.661852   68676 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:42:23.661866   68676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:42:23.661874   68676 addons.go:475] Verifying addon metrics-server=true in "no-preload-521072"
	I0927 01:42:23.661917   68676 main.go:141] libmachine: (no-preload-521072) DBG | Closing plugin on server side
	I0927 01:42:23.668484   68676 main.go:141] libmachine: Making call to close driver server
	I0927 01:42:23.668499   68676 main.go:141] libmachine: (no-preload-521072) Calling .Close
	I0927 01:42:23.668711   68676 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:42:23.668726   68676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:42:23.668743   68676 main.go:141] libmachine: (no-preload-521072) DBG | Closing plugin on server side
	I0927 01:42:23.670758   68676 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0927 01:42:23.672072   68676 addons.go:510] duration metric: took 1.62313879s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I0927 01:42:24.265426   68676 node_ready.go:53] node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:21.042193   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:23.043831   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:25.546335   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:22.460161   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:22.959177   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:23.459481   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:23.959221   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:23.959322   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:24.004970   69333 cri.go:89] found id: ""
	I0927 01:42:24.004999   69333 logs.go:276] 0 containers: []
	W0927 01:42:24.005010   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:24.005017   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:24.005076   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:24.041880   69333 cri.go:89] found id: ""
	I0927 01:42:24.041908   69333 logs.go:276] 0 containers: []
	W0927 01:42:24.041919   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:24.041926   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:24.041991   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:24.082295   69333 cri.go:89] found id: ""
	I0927 01:42:24.082318   69333 logs.go:276] 0 containers: []
	W0927 01:42:24.082325   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:24.082331   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:24.082385   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:24.119663   69333 cri.go:89] found id: ""
	I0927 01:42:24.119692   69333 logs.go:276] 0 containers: []
	W0927 01:42:24.119707   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:24.119714   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:24.119771   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:24.163893   69333 cri.go:89] found id: ""
	I0927 01:42:24.163920   69333 logs.go:276] 0 containers: []
	W0927 01:42:24.163932   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:24.163940   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:24.163999   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:24.200277   69333 cri.go:89] found id: ""
	I0927 01:42:24.200299   69333 logs.go:276] 0 containers: []
	W0927 01:42:24.200307   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:24.200312   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:24.200365   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:24.235039   69333 cri.go:89] found id: ""
	I0927 01:42:24.235059   69333 logs.go:276] 0 containers: []
	W0927 01:42:24.235066   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:24.235072   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:24.235132   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:24.275160   69333 cri.go:89] found id: ""
	I0927 01:42:24.275181   69333 logs.go:276] 0 containers: []
	W0927 01:42:24.275188   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:24.275196   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:24.275206   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:24.327432   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:24.327465   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:24.341113   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:24.341139   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:24.473741   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:24.473764   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:24.473779   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:24.545888   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:24.545923   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:27.086673   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:27.100552   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:27.100623   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:27.136182   69333 cri.go:89] found id: ""
	I0927 01:42:27.136207   69333 logs.go:276] 0 containers: []
	W0927 01:42:27.136215   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:27.136221   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:27.136289   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:27.173258   69333 cri.go:89] found id: ""
	I0927 01:42:27.173285   69333 logs.go:276] 0 containers: []
	W0927 01:42:27.173296   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:27.173303   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:27.173373   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:27.210481   69333 cri.go:89] found id: ""
	I0927 01:42:27.210514   69333 logs.go:276] 0 containers: []
	W0927 01:42:27.210526   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:27.210533   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:27.210586   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:27.245168   69333 cri.go:89] found id: ""
	I0927 01:42:27.245192   69333 logs.go:276] 0 containers: []
	W0927 01:42:27.245200   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:27.245206   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:27.245252   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:27.280494   69333 cri.go:89] found id: ""
	I0927 01:42:27.280522   69333 logs.go:276] 0 containers: []
	W0927 01:42:27.280531   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:27.280538   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:27.280596   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:27.314281   69333 cri.go:89] found id: ""
	I0927 01:42:27.314307   69333 logs.go:276] 0 containers: []
	W0927 01:42:27.314316   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:27.314322   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:27.314392   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:25.521413   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:28.019989   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:26.764721   68676 node_ready.go:53] node "no-preload-521072" has status "Ready":"False"
	I0927 01:42:27.765574   68676 node_ready.go:49] node "no-preload-521072" has status "Ready":"True"
	I0927 01:42:27.765597   68676 node_ready.go:38] duration metric: took 5.504217374s for node "no-preload-521072" to be "Ready" ...
	I0927 01:42:27.765609   68676 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:42:27.772263   68676 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7q54t" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:27.777521   68676 pod_ready.go:93] pod "coredns-7c65d6cfc9-7q54t" in "kube-system" namespace has status "Ready":"True"
	I0927 01:42:27.777544   68676 pod_ready.go:82] duration metric: took 5.252259ms for pod "coredns-7c65d6cfc9-7q54t" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:27.777552   68676 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:27.781511   68676 pod_ready.go:93] pod "etcd-no-preload-521072" in "kube-system" namespace has status "Ready":"True"
	I0927 01:42:27.781528   68676 pod_ready.go:82] duration metric: took 3.970559ms for pod "etcd-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:27.781535   68676 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:27.785556   68676 pod_ready.go:93] pod "kube-apiserver-no-preload-521072" in "kube-system" namespace has status "Ready":"True"
	I0927 01:42:27.785572   68676 pod_ready.go:82] duration metric: took 4.032023ms for pod "kube-apiserver-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:27.785579   68676 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:29.792899   68676 pod_ready.go:103] pod "kube-controller-manager-no-preload-521072" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:28.041166   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:30.041766   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:27.350838   69333 cri.go:89] found id: ""
	I0927 01:42:27.350861   69333 logs.go:276] 0 containers: []
	W0927 01:42:27.350869   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:27.350874   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:27.350921   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:27.390146   69333 cri.go:89] found id: ""
	I0927 01:42:27.390175   69333 logs.go:276] 0 containers: []
	W0927 01:42:27.390186   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:27.390196   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:27.390206   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:27.446727   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:27.446756   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:27.461337   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:27.461365   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:27.533818   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:27.533839   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:27.533874   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:27.614325   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:27.614357   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:30.161303   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:30.179521   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:30.179590   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:30.221738   69333 cri.go:89] found id: ""
	I0927 01:42:30.221764   69333 logs.go:276] 0 containers: []
	W0927 01:42:30.221772   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:30.221778   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:30.221841   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:30.258316   69333 cri.go:89] found id: ""
	I0927 01:42:30.258349   69333 logs.go:276] 0 containers: []
	W0927 01:42:30.258359   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:30.258369   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:30.258427   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:30.297079   69333 cri.go:89] found id: ""
	I0927 01:42:30.297102   69333 logs.go:276] 0 containers: []
	W0927 01:42:30.297109   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:30.297114   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:30.297159   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:30.337969   69333 cri.go:89] found id: ""
	I0927 01:42:30.337995   69333 logs.go:276] 0 containers: []
	W0927 01:42:30.338007   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:30.338014   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:30.338075   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:30.375946   69333 cri.go:89] found id: ""
	I0927 01:42:30.375975   69333 logs.go:276] 0 containers: []
	W0927 01:42:30.375986   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:30.375993   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:30.376054   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:30.411673   69333 cri.go:89] found id: ""
	I0927 01:42:30.411700   69333 logs.go:276] 0 containers: []
	W0927 01:42:30.411710   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:30.411718   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:30.411765   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:30.447784   69333 cri.go:89] found id: ""
	I0927 01:42:30.447812   69333 logs.go:276] 0 containers: []
	W0927 01:42:30.447822   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:30.447830   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:30.447890   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:30.483164   69333 cri.go:89] found id: ""
	I0927 01:42:30.483191   69333 logs.go:276] 0 containers: []
	W0927 01:42:30.483202   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:30.483213   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:30.483229   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:30.533490   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:30.533522   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:30.547688   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:30.547722   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:30.626696   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:30.626720   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:30.626733   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:30.708767   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:30.708809   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:30.020786   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:32.021243   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:32.292370   68676 pod_ready.go:103] pod "kube-controller-manager-no-preload-521072" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:32.791420   68676 pod_ready.go:93] pod "kube-controller-manager-no-preload-521072" in "kube-system" namespace has status "Ready":"True"
	I0927 01:42:32.791444   68676 pod_ready.go:82] duration metric: took 5.00585892s for pod "kube-controller-manager-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:32.791454   68676 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wkcb8" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:32.796509   68676 pod_ready.go:93] pod "kube-proxy-wkcb8" in "kube-system" namespace has status "Ready":"True"
	I0927 01:42:32.796528   68676 pod_ready.go:82] duration metric: took 5.067798ms for pod "kube-proxy-wkcb8" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:32.796536   68676 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:32.801041   68676 pod_ready.go:93] pod "kube-scheduler-no-preload-521072" in "kube-system" namespace has status "Ready":"True"
	I0927 01:42:32.801066   68676 pod_ready.go:82] duration metric: took 4.523416ms for pod "kube-scheduler-no-preload-521072" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:32.801087   68676 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace to be "Ready" ...
	I0927 01:42:34.807359   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:32.042216   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:34.541390   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:33.250034   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:33.263733   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:33.263805   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:33.298038   69333 cri.go:89] found id: ""
	I0927 01:42:33.298063   69333 logs.go:276] 0 containers: []
	W0927 01:42:33.298071   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:33.298077   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:33.298139   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:33.338027   69333 cri.go:89] found id: ""
	I0927 01:42:33.338050   69333 logs.go:276] 0 containers: []
	W0927 01:42:33.338058   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:33.338064   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:33.338118   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:33.376470   69333 cri.go:89] found id: ""
	I0927 01:42:33.376496   69333 logs.go:276] 0 containers: []
	W0927 01:42:33.376504   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:33.376509   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:33.376568   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:33.419831   69333 cri.go:89] found id: ""
	I0927 01:42:33.419859   69333 logs.go:276] 0 containers: []
	W0927 01:42:33.419868   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:33.419874   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:33.419929   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:33.461029   69333 cri.go:89] found id: ""
	I0927 01:42:33.461057   69333 logs.go:276] 0 containers: []
	W0927 01:42:33.461076   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:33.461085   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:33.461158   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:33.499968   69333 cri.go:89] found id: ""
	I0927 01:42:33.499996   69333 logs.go:276] 0 containers: []
	W0927 01:42:33.500007   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:33.500015   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:33.500073   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:33.552601   69333 cri.go:89] found id: ""
	I0927 01:42:33.552625   69333 logs.go:276] 0 containers: []
	W0927 01:42:33.552633   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:33.552640   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:33.552702   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:33.589491   69333 cri.go:89] found id: ""
	I0927 01:42:33.589520   69333 logs.go:276] 0 containers: []
	W0927 01:42:33.589529   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:33.589540   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:33.589554   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:33.643437   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:33.643470   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:33.657819   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:33.657846   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:33.728369   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:33.728393   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:33.728407   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:33.803661   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:33.803691   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:36.343598   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:36.357879   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:36.357937   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:36.398936   69333 cri.go:89] found id: ""
	I0927 01:42:36.398958   69333 logs.go:276] 0 containers: []
	W0927 01:42:36.398966   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:36.398971   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:36.399016   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:36.438897   69333 cri.go:89] found id: ""
	I0927 01:42:36.438921   69333 logs.go:276] 0 containers: []
	W0927 01:42:36.438928   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:36.438935   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:36.438979   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:36.476779   69333 cri.go:89] found id: ""
	I0927 01:42:36.476807   69333 logs.go:276] 0 containers: []
	W0927 01:42:36.476817   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:36.476824   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:36.476882   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:36.514216   69333 cri.go:89] found id: ""
	I0927 01:42:36.514238   69333 logs.go:276] 0 containers: []
	W0927 01:42:36.514245   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:36.514251   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:36.514306   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:36.551800   69333 cri.go:89] found id: ""
	I0927 01:42:36.551827   69333 logs.go:276] 0 containers: []
	W0927 01:42:36.551835   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:36.551841   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:36.551900   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:36.592060   69333 cri.go:89] found id: ""
	I0927 01:42:36.592086   69333 logs.go:276] 0 containers: []
	W0927 01:42:36.592096   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:36.592101   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:36.592172   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:36.633485   69333 cri.go:89] found id: ""
	I0927 01:42:36.633507   69333 logs.go:276] 0 containers: []
	W0927 01:42:36.633514   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:36.633519   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:36.633571   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:36.667288   69333 cri.go:89] found id: ""
	I0927 01:42:36.667355   69333 logs.go:276] 0 containers: []
	W0927 01:42:36.667366   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:36.667377   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:36.667391   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:36.722230   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:36.722263   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:36.735927   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:36.735952   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:36.808852   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:36.808872   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:36.808887   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:36.889259   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:36.889299   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:34.520143   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:36.521254   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:36.808388   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:39.308743   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:36.542085   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:39.042119   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:39.438818   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:39.459082   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:39.459150   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:39.499966   69333 cri.go:89] found id: ""
	I0927 01:42:39.499991   69333 logs.go:276] 0 containers: []
	W0927 01:42:39.499999   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:39.500004   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:39.500050   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:39.540828   69333 cri.go:89] found id: ""
	I0927 01:42:39.540850   69333 logs.go:276] 0 containers: []
	W0927 01:42:39.540857   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:39.540864   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:39.540972   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:39.575841   69333 cri.go:89] found id: ""
	I0927 01:42:39.575868   69333 logs.go:276] 0 containers: []
	W0927 01:42:39.575879   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:39.575886   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:39.575958   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:39.611105   69333 cri.go:89] found id: ""
	I0927 01:42:39.611184   69333 logs.go:276] 0 containers: []
	W0927 01:42:39.611202   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:39.611212   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:39.611268   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:39.644772   69333 cri.go:89] found id: ""
	I0927 01:42:39.644800   69333 logs.go:276] 0 containers: []
	W0927 01:42:39.644808   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:39.644813   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:39.644868   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:39.679875   69333 cri.go:89] found id: ""
	I0927 01:42:39.679901   69333 logs.go:276] 0 containers: []
	W0927 01:42:39.679912   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:39.679919   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:39.679987   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:39.716410   69333 cri.go:89] found id: ""
	I0927 01:42:39.716440   69333 logs.go:276] 0 containers: []
	W0927 01:42:39.716450   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:39.716457   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:39.716525   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:39.750391   69333 cri.go:89] found id: ""
	I0927 01:42:39.750418   69333 logs.go:276] 0 containers: []
	W0927 01:42:39.750428   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:39.750439   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:39.750455   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:39.822365   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:39.822401   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:39.822416   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:39.905982   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:39.906017   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:39.952310   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:39.952339   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:40.000523   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:40.000554   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:39.021945   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:41.519787   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:41.807532   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:44.307548   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:41.042260   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:43.042762   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:45.542112   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:42.514379   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:42.528312   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:42.528377   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:42.562427   69333 cri.go:89] found id: ""
	I0927 01:42:42.562455   69333 logs.go:276] 0 containers: []
	W0927 01:42:42.562463   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:42.562469   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:42.562526   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:42.599969   69333 cri.go:89] found id: ""
	I0927 01:42:42.599993   69333 logs.go:276] 0 containers: []
	W0927 01:42:42.600002   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:42.600007   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:42.600053   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:42.636338   69333 cri.go:89] found id: ""
	I0927 01:42:42.636364   69333 logs.go:276] 0 containers: []
	W0927 01:42:42.636371   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:42.636376   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:42.636431   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:42.670781   69333 cri.go:89] found id: ""
	I0927 01:42:42.670809   69333 logs.go:276] 0 containers: []
	W0927 01:42:42.670818   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:42.670823   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:42.670880   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:42.707334   69333 cri.go:89] found id: ""
	I0927 01:42:42.707364   69333 logs.go:276] 0 containers: []
	W0927 01:42:42.707375   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:42.707431   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:42.707503   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:42.743063   69333 cri.go:89] found id: ""
	I0927 01:42:42.743092   69333 logs.go:276] 0 containers: []
	W0927 01:42:42.743103   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:42.743139   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:42.743192   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:42.778593   69333 cri.go:89] found id: ""
	I0927 01:42:42.778617   69333 logs.go:276] 0 containers: []
	W0927 01:42:42.778628   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:42.778634   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:42.778700   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:42.814261   69333 cri.go:89] found id: ""
	I0927 01:42:42.814286   69333 logs.go:276] 0 containers: []
	W0927 01:42:42.814293   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:42.814300   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:42.814310   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:42.863982   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:42.864011   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:42.877151   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:42.877175   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:42.959233   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:42.959251   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:42.959262   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:43.038773   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:43.038805   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:45.581272   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:45.596103   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:45.596167   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:45.639507   69333 cri.go:89] found id: ""
	I0927 01:42:45.639531   69333 logs.go:276] 0 containers: []
	W0927 01:42:45.639539   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:45.639545   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:45.639611   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:45.678455   69333 cri.go:89] found id: ""
	I0927 01:42:45.678482   69333 logs.go:276] 0 containers: []
	W0927 01:42:45.678489   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:45.678495   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:45.678539   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:45.722094   69333 cri.go:89] found id: ""
	I0927 01:42:45.722123   69333 logs.go:276] 0 containers: []
	W0927 01:42:45.722135   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:45.722142   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:45.722211   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:45.758091   69333 cri.go:89] found id: ""
	I0927 01:42:45.758118   69333 logs.go:276] 0 containers: []
	W0927 01:42:45.758127   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:45.758133   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:45.758183   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:45.792976   69333 cri.go:89] found id: ""
	I0927 01:42:45.793010   69333 logs.go:276] 0 containers: []
	W0927 01:42:45.793021   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:45.793028   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:45.793089   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:45.830235   69333 cri.go:89] found id: ""
	I0927 01:42:45.830262   69333 logs.go:276] 0 containers: []
	W0927 01:42:45.830273   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:45.830280   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:45.830324   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:45.865896   69333 cri.go:89] found id: ""
	I0927 01:42:45.865928   69333 logs.go:276] 0 containers: []
	W0927 01:42:45.865938   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:45.865946   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:45.866000   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:45.900058   69333 cri.go:89] found id: ""
	I0927 01:42:45.900088   69333 logs.go:276] 0 containers: []
	W0927 01:42:45.900099   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:45.900108   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:45.900119   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:45.972986   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:45.973015   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:45.973030   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:46.048703   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:46.048732   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:46.087483   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:46.087515   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:46.136833   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:46.136866   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:43.520998   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:45.522532   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:48.020912   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:46.307637   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:48.808963   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:48.041757   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:50.042259   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:48.650738   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:48.665847   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:48.665930   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:48.704304   69333 cri.go:89] found id: ""
	I0927 01:42:48.704328   69333 logs.go:276] 0 containers: []
	W0927 01:42:48.704337   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:48.704342   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:48.704402   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:48.742469   69333 cri.go:89] found id: ""
	I0927 01:42:48.742499   69333 logs.go:276] 0 containers: []
	W0927 01:42:48.742510   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:48.742517   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:48.742579   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:48.782154   69333 cri.go:89] found id: ""
	I0927 01:42:48.782183   69333 logs.go:276] 0 containers: []
	W0927 01:42:48.782194   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:48.782201   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:48.782261   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:48.821686   69333 cri.go:89] found id: ""
	I0927 01:42:48.821709   69333 logs.go:276] 0 containers: []
	W0927 01:42:48.821717   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:48.821723   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:48.821781   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:48.867072   69333 cri.go:89] found id: ""
	I0927 01:42:48.867099   69333 logs.go:276] 0 containers: []
	W0927 01:42:48.867109   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:48.867123   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:48.867191   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:48.908215   69333 cri.go:89] found id: ""
	I0927 01:42:48.908241   69333 logs.go:276] 0 containers: []
	W0927 01:42:48.908249   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:48.908255   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:48.908312   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:48.945260   69333 cri.go:89] found id: ""
	I0927 01:42:48.945291   69333 logs.go:276] 0 containers: []
	W0927 01:42:48.945303   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:48.945310   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:48.945375   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:48.983285   69333 cri.go:89] found id: ""
	I0927 01:42:48.983325   69333 logs.go:276] 0 containers: []
	W0927 01:42:48.983333   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:48.983343   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:48.983354   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:49.039437   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:49.039472   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:49.053546   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:49.053571   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:49.129264   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:49.129286   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:49.129299   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:49.216967   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:49.216999   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:51.758143   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:51.771417   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:51.771485   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:51.806120   69333 cri.go:89] found id: ""
	I0927 01:42:51.806144   69333 logs.go:276] 0 containers: []
	W0927 01:42:51.806154   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:51.806161   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:51.806219   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:51.840301   69333 cri.go:89] found id: ""
	I0927 01:42:51.840330   69333 logs.go:276] 0 containers: []
	W0927 01:42:51.840340   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:51.840348   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:51.840410   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:51.874908   69333 cri.go:89] found id: ""
	I0927 01:42:51.874934   69333 logs.go:276] 0 containers: []
	W0927 01:42:51.874944   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:51.874952   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:51.875018   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:51.910960   69333 cri.go:89] found id: ""
	I0927 01:42:51.910988   69333 logs.go:276] 0 containers: []
	W0927 01:42:51.910999   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:51.911006   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:51.911064   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:51.945206   69333 cri.go:89] found id: ""
	I0927 01:42:51.945228   69333 logs.go:276] 0 containers: []
	W0927 01:42:51.945236   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:51.945241   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:51.945289   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:51.979262   69333 cri.go:89] found id: ""
	I0927 01:42:51.979296   69333 logs.go:276] 0 containers: []
	W0927 01:42:51.979322   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:51.979328   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:51.979384   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:52.013407   69333 cri.go:89] found id: ""
	I0927 01:42:52.013438   69333 logs.go:276] 0 containers: []
	W0927 01:42:52.013449   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:52.013456   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:52.013510   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:52.048928   69333 cri.go:89] found id: ""
	I0927 01:42:52.048951   69333 logs.go:276] 0 containers: []
	W0927 01:42:52.048961   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:52.048970   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:52.048984   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:52.101043   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:52.101083   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:52.115903   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:52.115938   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:52.197147   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:52.197168   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:52.197184   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:52.276352   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:52.276393   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:50.021730   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:52.520362   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:51.306847   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:53.307714   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:52.042729   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:54.544118   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:54.819649   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:54.832262   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:54.832344   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:54.867495   69333 cri.go:89] found id: ""
	I0927 01:42:54.867523   69333 logs.go:276] 0 containers: []
	W0927 01:42:54.867533   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:54.867539   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:54.867585   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:54.899705   69333 cri.go:89] found id: ""
	I0927 01:42:54.899732   69333 logs.go:276] 0 containers: []
	W0927 01:42:54.899742   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:54.899749   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:54.899817   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:54.939216   69333 cri.go:89] found id: ""
	I0927 01:42:54.939235   69333 logs.go:276] 0 containers: []
	W0927 01:42:54.939244   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:54.939249   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:54.939293   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:54.976603   69333 cri.go:89] found id: ""
	I0927 01:42:54.976632   69333 logs.go:276] 0 containers: []
	W0927 01:42:54.976643   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:54.976651   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:54.976718   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:55.011617   69333 cri.go:89] found id: ""
	I0927 01:42:55.011649   69333 logs.go:276] 0 containers: []
	W0927 01:42:55.011660   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:55.011667   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:55.011729   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:55.048836   69333 cri.go:89] found id: ""
	I0927 01:42:55.048861   69333 logs.go:276] 0 containers: []
	W0927 01:42:55.048869   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:55.048885   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:55.048955   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:55.085105   69333 cri.go:89] found id: ""
	I0927 01:42:55.085133   69333 logs.go:276] 0 containers: []
	W0927 01:42:55.085144   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:55.085151   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:55.085205   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:55.122536   69333 cri.go:89] found id: ""
	I0927 01:42:55.122564   69333 logs.go:276] 0 containers: []
	W0927 01:42:55.122575   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:55.122585   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:55.122600   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:55.197191   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:55.197216   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:55.197230   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:55.275914   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:55.275950   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:55.315043   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:55.315071   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:55.365808   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:55.365846   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:55.025083   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:57.520041   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:55.807377   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:57.807419   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:59.808202   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:57.042511   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:59.541628   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:57.880934   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:42:57.894276   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:42:57.894337   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:42:57.933299   69333 cri.go:89] found id: ""
	I0927 01:42:57.933326   69333 logs.go:276] 0 containers: []
	W0927 01:42:57.933336   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:42:57.933343   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:42:57.933403   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:42:57.969070   69333 cri.go:89] found id: ""
	I0927 01:42:57.969094   69333 logs.go:276] 0 containers: []
	W0927 01:42:57.969102   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:42:57.969107   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:42:57.969151   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:42:58.009432   69333 cri.go:89] found id: ""
	I0927 01:42:58.009453   69333 logs.go:276] 0 containers: []
	W0927 01:42:58.009462   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:42:58.009468   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:42:58.009524   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:42:58.046507   69333 cri.go:89] found id: ""
	I0927 01:42:58.046526   69333 logs.go:276] 0 containers: []
	W0927 01:42:58.046533   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:42:58.046539   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:42:58.046603   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:42:58.079910   69333 cri.go:89] found id: ""
	I0927 01:42:58.079936   69333 logs.go:276] 0 containers: []
	W0927 01:42:58.079947   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:42:58.079954   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:42:58.080015   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:42:58.115971   69333 cri.go:89] found id: ""
	I0927 01:42:58.115994   69333 logs.go:276] 0 containers: []
	W0927 01:42:58.116001   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:42:58.116007   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:42:58.116065   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:42:58.150512   69333 cri.go:89] found id: ""
	I0927 01:42:58.150536   69333 logs.go:276] 0 containers: []
	W0927 01:42:58.150544   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:42:58.150549   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:42:58.150608   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:42:58.183458   69333 cri.go:89] found id: ""
	I0927 01:42:58.183487   69333 logs.go:276] 0 containers: []
	W0927 01:42:58.183498   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:42:58.183506   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:42:58.183520   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:42:58.234404   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:42:58.234434   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:42:58.248387   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:42:58.248411   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:42:58.320751   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:42:58.320772   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:42:58.320783   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:42:58.401163   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:42:58.401212   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:00.943677   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:00.956739   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:00.956815   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:00.991020   69333 cri.go:89] found id: ""
	I0927 01:43:00.991042   69333 logs.go:276] 0 containers: []
	W0927 01:43:00.991051   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:00.991056   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:00.991113   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:01.031686   69333 cri.go:89] found id: ""
	I0927 01:43:01.031711   69333 logs.go:276] 0 containers: []
	W0927 01:43:01.031720   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:01.031726   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:01.031786   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:01.068783   69333 cri.go:89] found id: ""
	I0927 01:43:01.068813   69333 logs.go:276] 0 containers: []
	W0927 01:43:01.068824   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:01.068831   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:01.068890   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:01.108434   69333 cri.go:89] found id: ""
	I0927 01:43:01.108456   69333 logs.go:276] 0 containers: []
	W0927 01:43:01.108464   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:01.108469   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:01.108513   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:01.147574   69333 cri.go:89] found id: ""
	I0927 01:43:01.147596   69333 logs.go:276] 0 containers: []
	W0927 01:43:01.147604   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:01.147610   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:01.147660   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:01.188251   69333 cri.go:89] found id: ""
	I0927 01:43:01.188279   69333 logs.go:276] 0 containers: []
	W0927 01:43:01.188290   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:01.188297   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:01.188359   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:01.224901   69333 cri.go:89] found id: ""
	I0927 01:43:01.224944   69333 logs.go:276] 0 containers: []
	W0927 01:43:01.224964   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:01.224974   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:01.225052   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:01.262701   69333 cri.go:89] found id: ""
	I0927 01:43:01.262728   69333 logs.go:276] 0 containers: []
	W0927 01:43:01.262738   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:01.262749   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:01.262762   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:01.313872   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:01.313900   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:01.327809   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:01.327835   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:01.400864   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:01.400895   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:01.400909   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:01.478012   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:01.478045   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:42:59.520973   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:01.522457   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:02.308215   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:04.309111   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:01.543151   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:04.043201   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:04.018634   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:04.032732   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:04.032803   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:04.075258   69333 cri.go:89] found id: ""
	I0927 01:43:04.075285   69333 logs.go:276] 0 containers: []
	W0927 01:43:04.075293   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:04.075299   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:04.075381   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:04.108738   69333 cri.go:89] found id: ""
	I0927 01:43:04.108764   69333 logs.go:276] 0 containers: []
	W0927 01:43:04.108773   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:04.108779   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:04.108835   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:04.142115   69333 cri.go:89] found id: ""
	I0927 01:43:04.142145   69333 logs.go:276] 0 containers: []
	W0927 01:43:04.142155   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:04.142174   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:04.142249   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:04.184606   69333 cri.go:89] found id: ""
	I0927 01:43:04.184626   69333 logs.go:276] 0 containers: []
	W0927 01:43:04.184634   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:04.184639   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:04.184684   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:04.218391   69333 cri.go:89] found id: ""
	I0927 01:43:04.218420   69333 logs.go:276] 0 containers: []
	W0927 01:43:04.218428   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:04.218434   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:04.218482   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:04.253796   69333 cri.go:89] found id: ""
	I0927 01:43:04.253816   69333 logs.go:276] 0 containers: []
	W0927 01:43:04.253824   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:04.253829   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:04.253884   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:04.289147   69333 cri.go:89] found id: ""
	I0927 01:43:04.289170   69333 logs.go:276] 0 containers: []
	W0927 01:43:04.289179   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:04.289184   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:04.289245   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:04.329000   69333 cri.go:89] found id: ""
	I0927 01:43:04.329026   69333 logs.go:276] 0 containers: []
	W0927 01:43:04.329034   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:04.329042   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:04.329053   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:04.424255   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:04.424290   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:04.470746   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:04.470775   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:04.524208   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:04.524237   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:04.538338   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:04.538365   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:04.608713   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:07.109492   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:07.124253   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:07.124332   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:07.160443   69333 cri.go:89] found id: ""
	I0927 01:43:07.160470   69333 logs.go:276] 0 containers: []
	W0927 01:43:07.160481   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:07.160488   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:07.160554   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:07.195492   69333 cri.go:89] found id: ""
	I0927 01:43:07.195515   69333 logs.go:276] 0 containers: []
	W0927 01:43:07.195522   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:07.195527   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:07.195572   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:07.237678   69333 cri.go:89] found id: ""
	I0927 01:43:07.237708   69333 logs.go:276] 0 containers: []
	W0927 01:43:07.237718   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:07.237725   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:07.237792   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:07.274239   69333 cri.go:89] found id: ""
	I0927 01:43:07.274268   69333 logs.go:276] 0 containers: []
	W0927 01:43:07.274279   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:07.274286   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:07.274352   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:07.315099   69333 cri.go:89] found id: ""
	I0927 01:43:07.315124   69333 logs.go:276] 0 containers: []
	W0927 01:43:07.315131   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:07.315137   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:07.315190   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:04.020911   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:06.520371   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:06.807124   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:09.306568   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:06.543210   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:09.042166   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:07.356301   69333 cri.go:89] found id: ""
	I0927 01:43:07.356328   69333 logs.go:276] 0 containers: []
	W0927 01:43:07.356339   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:07.356347   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:07.356416   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:07.392204   69333 cri.go:89] found id: ""
	I0927 01:43:07.392232   69333 logs.go:276] 0 containers: []
	W0927 01:43:07.392242   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:07.392255   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:07.392312   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:07.428924   69333 cri.go:89] found id: ""
	I0927 01:43:07.428952   69333 logs.go:276] 0 containers: []
	W0927 01:43:07.428961   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:07.428969   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:07.428981   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:07.502507   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:07.502531   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:07.502545   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:07.584169   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:07.584201   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:07.623413   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:07.623446   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:07.675444   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:07.675480   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:10.190164   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:10.205315   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:10.205395   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:10.244030   69333 cri.go:89] found id: ""
	I0927 01:43:10.244053   69333 logs.go:276] 0 containers: []
	W0927 01:43:10.244063   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:10.244071   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:10.244134   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:10.280081   69333 cri.go:89] found id: ""
	I0927 01:43:10.280108   69333 logs.go:276] 0 containers: []
	W0927 01:43:10.280118   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:10.280125   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:10.280184   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:10.315428   69333 cri.go:89] found id: ""
	I0927 01:43:10.315454   69333 logs.go:276] 0 containers: []
	W0927 01:43:10.315464   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:10.315471   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:10.315531   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:10.352536   69333 cri.go:89] found id: ""
	I0927 01:43:10.352560   69333 logs.go:276] 0 containers: []
	W0927 01:43:10.352567   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:10.352574   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:10.352634   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:10.388846   69333 cri.go:89] found id: ""
	I0927 01:43:10.388870   69333 logs.go:276] 0 containers: []
	W0927 01:43:10.388880   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:10.388887   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:10.388951   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:10.427746   69333 cri.go:89] found id: ""
	I0927 01:43:10.427771   69333 logs.go:276] 0 containers: []
	W0927 01:43:10.427779   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:10.427784   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:10.427839   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:10.473126   69333 cri.go:89] found id: ""
	I0927 01:43:10.473155   69333 logs.go:276] 0 containers: []
	W0927 01:43:10.473166   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:10.473172   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:10.473234   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:10.511925   69333 cri.go:89] found id: ""
	I0927 01:43:10.511954   69333 logs.go:276] 0 containers: []
	W0927 01:43:10.511962   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:10.511971   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:10.511984   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:10.551428   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:10.551459   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:10.603655   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:10.603691   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:10.617232   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:10.617266   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:10.696559   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:10.696585   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:10.696599   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:09.020784   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:11.521429   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:11.307081   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:13.307876   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:11.043819   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:13.543289   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:13.273888   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:13.288271   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:13.288349   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:13.325796   69333 cri.go:89] found id: ""
	I0927 01:43:13.325823   69333 logs.go:276] 0 containers: []
	W0927 01:43:13.325831   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:13.325837   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:13.325893   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:13.360721   69333 cri.go:89] found id: ""
	I0927 01:43:13.360748   69333 logs.go:276] 0 containers: []
	W0927 01:43:13.360756   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:13.360762   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:13.360821   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:13.399722   69333 cri.go:89] found id: ""
	I0927 01:43:13.399749   69333 logs.go:276] 0 containers: []
	W0927 01:43:13.399756   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:13.399762   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:13.399826   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:13.437161   69333 cri.go:89] found id: ""
	I0927 01:43:13.437187   69333 logs.go:276] 0 containers: []
	W0927 01:43:13.437194   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:13.437200   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:13.437260   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:13.474735   69333 cri.go:89] found id: ""
	I0927 01:43:13.474758   69333 logs.go:276] 0 containers: []
	W0927 01:43:13.474766   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:13.474771   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:13.474822   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:13.528726   69333 cri.go:89] found id: ""
	I0927 01:43:13.528754   69333 logs.go:276] 0 containers: []
	W0927 01:43:13.528764   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:13.528771   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:13.528837   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:13.568617   69333 cri.go:89] found id: ""
	I0927 01:43:13.568642   69333 logs.go:276] 0 containers: []
	W0927 01:43:13.568651   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:13.568658   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:13.568726   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:13.605820   69333 cri.go:89] found id: ""
	I0927 01:43:13.605846   69333 logs.go:276] 0 containers: []
	W0927 01:43:13.605857   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:13.605868   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:13.605883   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:13.682586   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:13.682609   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:13.682624   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:13.764487   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:13.764522   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:13.809248   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:13.809280   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:13.861331   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:13.861371   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:16.376981   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:16.391787   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:16.391842   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:16.432731   69333 cri.go:89] found id: ""
	I0927 01:43:16.432758   69333 logs.go:276] 0 containers: []
	W0927 01:43:16.432767   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:16.432775   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:16.432836   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:16.466769   69333 cri.go:89] found id: ""
	I0927 01:43:16.466798   69333 logs.go:276] 0 containers: []
	W0927 01:43:16.466806   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:16.466812   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:16.466860   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:16.501899   69333 cri.go:89] found id: ""
	I0927 01:43:16.501927   69333 logs.go:276] 0 containers: []
	W0927 01:43:16.501940   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:16.501947   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:16.502000   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:16.537356   69333 cri.go:89] found id: ""
	I0927 01:43:16.537383   69333 logs.go:276] 0 containers: []
	W0927 01:43:16.537393   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:16.537401   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:16.537460   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:16.573910   69333 cri.go:89] found id: ""
	I0927 01:43:16.573937   69333 logs.go:276] 0 containers: []
	W0927 01:43:16.573946   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:16.573951   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:16.574003   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:16.617780   69333 cri.go:89] found id: ""
	I0927 01:43:16.617808   69333 logs.go:276] 0 containers: []
	W0927 01:43:16.617818   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:16.617826   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:16.617884   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:16.653262   69333 cri.go:89] found id: ""
	I0927 01:43:16.653311   69333 logs.go:276] 0 containers: []
	W0927 01:43:16.653323   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:16.653331   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:16.653394   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:16.689861   69333 cri.go:89] found id: ""
	I0927 01:43:16.689889   69333 logs.go:276] 0 containers: []
	W0927 01:43:16.689898   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:16.689909   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:16.689922   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:16.765961   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:16.765986   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:16.766001   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:16.845195   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:16.845227   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:16.889159   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:16.889188   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:16.945523   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:16.945558   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:13.522444   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:16.021202   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:15.808665   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:18.307884   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:16.043071   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:18.541709   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:19.461132   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:19.475148   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:19.475234   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:19.511487   69333 cri.go:89] found id: ""
	I0927 01:43:19.511509   69333 logs.go:276] 0 containers: []
	W0927 01:43:19.511517   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:19.511522   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:19.511580   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:19.545726   69333 cri.go:89] found id: ""
	I0927 01:43:19.545750   69333 logs.go:276] 0 containers: []
	W0927 01:43:19.545756   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:19.545763   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:19.545830   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:19.581287   69333 cri.go:89] found id: ""
	I0927 01:43:19.581310   69333 logs.go:276] 0 containers: []
	W0927 01:43:19.581318   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:19.581323   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:19.581376   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:19.614179   69333 cri.go:89] found id: ""
	I0927 01:43:19.614205   69333 logs.go:276] 0 containers: []
	W0927 01:43:19.614215   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:19.614223   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:19.614286   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:19.648276   69333 cri.go:89] found id: ""
	I0927 01:43:19.648307   69333 logs.go:276] 0 containers: []
	W0927 01:43:19.648318   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:19.648330   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:19.648390   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:19.683051   69333 cri.go:89] found id: ""
	I0927 01:43:19.683083   69333 logs.go:276] 0 containers: []
	W0927 01:43:19.683094   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:19.683114   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:19.683166   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:19.716664   69333 cri.go:89] found id: ""
	I0927 01:43:19.716686   69333 logs.go:276] 0 containers: []
	W0927 01:43:19.716694   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:19.716700   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:19.716745   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:19.758948   69333 cri.go:89] found id: ""
	I0927 01:43:19.758969   69333 logs.go:276] 0 containers: []
	W0927 01:43:19.758976   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:19.758984   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:19.758996   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:19.797751   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:19.797777   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:19.853605   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:19.853635   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:19.867785   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:19.867815   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:19.950323   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:19.950350   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:19.950363   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:18.520291   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:20.520845   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:22.520886   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:20.808171   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:22.812047   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:21.043160   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:23.546462   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:22.555421   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:22.570013   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:22.570077   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:22.605007   69333 cri.go:89] found id: ""
	I0927 01:43:22.605034   69333 logs.go:276] 0 containers: []
	W0927 01:43:22.605055   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:22.605062   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:22.605122   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:22.640350   69333 cri.go:89] found id: ""
	I0927 01:43:22.640381   69333 logs.go:276] 0 containers: []
	W0927 01:43:22.640391   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:22.640406   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:22.640482   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:22.677464   69333 cri.go:89] found id: ""
	I0927 01:43:22.677489   69333 logs.go:276] 0 containers: []
	W0927 01:43:22.677499   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:22.677506   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:22.677567   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:22.721978   69333 cri.go:89] found id: ""
	I0927 01:43:22.722017   69333 logs.go:276] 0 containers: []
	W0927 01:43:22.722025   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:22.722032   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:22.722093   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:22.757694   69333 cri.go:89] found id: ""
	I0927 01:43:22.757720   69333 logs.go:276] 0 containers: []
	W0927 01:43:22.757729   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:22.757733   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:22.757781   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:22.793872   69333 cri.go:89] found id: ""
	I0927 01:43:22.793903   69333 logs.go:276] 0 containers: []
	W0927 01:43:22.793912   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:22.793920   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:22.793971   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:22.830620   69333 cri.go:89] found id: ""
	I0927 01:43:22.830652   69333 logs.go:276] 0 containers: []
	W0927 01:43:22.830662   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:22.830669   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:22.830732   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:22.867341   69333 cri.go:89] found id: ""
	I0927 01:43:22.867370   69333 logs.go:276] 0 containers: []
	W0927 01:43:22.867381   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:22.867392   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:22.867405   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:22.939592   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:22.939630   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:22.939654   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:23.016407   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:23.016447   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:23.058490   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:23.058522   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:23.109527   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:23.109560   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:25.626109   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:25.645254   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:25.645343   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:25.707951   69333 cri.go:89] found id: ""
	I0927 01:43:25.707979   69333 logs.go:276] 0 containers: []
	W0927 01:43:25.707989   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:25.707997   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:25.708057   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:25.771210   69333 cri.go:89] found id: ""
	I0927 01:43:25.771234   69333 logs.go:276] 0 containers: []
	W0927 01:43:25.771242   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:25.771248   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:25.771295   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:25.808206   69333 cri.go:89] found id: ""
	I0927 01:43:25.808235   69333 logs.go:276] 0 containers: []
	W0927 01:43:25.808245   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:25.808252   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:25.808311   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:25.842236   69333 cri.go:89] found id: ""
	I0927 01:43:25.842265   69333 logs.go:276] 0 containers: []
	W0927 01:43:25.842275   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:25.842283   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:25.842328   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:25.879220   69333 cri.go:89] found id: ""
	I0927 01:43:25.879248   69333 logs.go:276] 0 containers: []
	W0927 01:43:25.879256   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:25.879262   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:25.879333   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:25.913491   69333 cri.go:89] found id: ""
	I0927 01:43:25.913522   69333 logs.go:276] 0 containers: []
	W0927 01:43:25.913532   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:25.913537   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:25.913595   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:25.946867   69333 cri.go:89] found id: ""
	I0927 01:43:25.946887   69333 logs.go:276] 0 containers: []
	W0927 01:43:25.946894   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:25.946899   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:25.946943   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:25.983792   69333 cri.go:89] found id: ""
	I0927 01:43:25.983813   69333 logs.go:276] 0 containers: []
	W0927 01:43:25.983820   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:25.983828   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:25.983838   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:26.030169   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:26.030195   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:26.083242   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:26.083276   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:26.097109   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:26.097136   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:26.168675   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:26.168703   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:26.168715   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:24.521923   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:27.020053   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:25.308150   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:27.308307   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:29.308818   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:26.042436   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:28.541895   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:30.542444   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:28.750349   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:28.765211   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:28.765269   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:28.804760   69333 cri.go:89] found id: ""
	I0927 01:43:28.804784   69333 logs.go:276] 0 containers: []
	W0927 01:43:28.804792   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:28.804798   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:28.804865   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:28.842576   69333 cri.go:89] found id: ""
	I0927 01:43:28.842597   69333 logs.go:276] 0 containers: []
	W0927 01:43:28.842604   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:28.842612   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:28.842674   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:28.877498   69333 cri.go:89] found id: ""
	I0927 01:43:28.877529   69333 logs.go:276] 0 containers: []
	W0927 01:43:28.877541   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:28.877553   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:28.877615   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:28.912583   69333 cri.go:89] found id: ""
	I0927 01:43:28.912609   69333 logs.go:276] 0 containers: []
	W0927 01:43:28.912620   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:28.912627   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:28.912689   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:28.947995   69333 cri.go:89] found id: ""
	I0927 01:43:28.948019   69333 logs.go:276] 0 containers: []
	W0927 01:43:28.948030   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:28.948037   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:28.948135   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:28.984445   69333 cri.go:89] found id: ""
	I0927 01:43:28.984470   69333 logs.go:276] 0 containers: []
	W0927 01:43:28.984480   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:28.984488   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:28.984551   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:29.020345   69333 cri.go:89] found id: ""
	I0927 01:43:29.020374   69333 logs.go:276] 0 containers: []
	W0927 01:43:29.020385   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:29.020392   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:29.020451   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:29.056204   69333 cri.go:89] found id: ""
	I0927 01:43:29.056234   69333 logs.go:276] 0 containers: []
	W0927 01:43:29.056245   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:29.056257   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:29.056270   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:29.127936   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:29.127963   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:29.127980   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:29.205933   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:29.205981   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:29.248745   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:29.248777   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:29.302316   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:29.302348   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:31.817566   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:31.831179   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:31.831253   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:31.868480   69333 cri.go:89] found id: ""
	I0927 01:43:31.868507   69333 logs.go:276] 0 containers: []
	W0927 01:43:31.868517   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:31.868528   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:31.868588   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:31.901656   69333 cri.go:89] found id: ""
	I0927 01:43:31.901684   69333 logs.go:276] 0 containers: []
	W0927 01:43:31.901694   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:31.901701   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:31.901761   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:31.937101   69333 cri.go:89] found id: ""
	I0927 01:43:31.937133   69333 logs.go:276] 0 containers: []
	W0927 01:43:31.937145   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:31.937153   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:31.937210   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:31.970724   69333 cri.go:89] found id: ""
	I0927 01:43:31.970750   69333 logs.go:276] 0 containers: []
	W0927 01:43:31.970761   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:31.970768   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:31.970835   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:32.003704   69333 cri.go:89] found id: ""
	I0927 01:43:32.003736   69333 logs.go:276] 0 containers: []
	W0927 01:43:32.003747   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:32.003754   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:32.003813   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:32.038840   69333 cri.go:89] found id: ""
	I0927 01:43:32.038869   69333 logs.go:276] 0 containers: []
	W0927 01:43:32.038879   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:32.038886   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:32.038946   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:32.075506   69333 cri.go:89] found id: ""
	I0927 01:43:32.075534   69333 logs.go:276] 0 containers: []
	W0927 01:43:32.075545   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:32.075552   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:32.075603   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:32.112983   69333 cri.go:89] found id: ""
	I0927 01:43:32.113009   69333 logs.go:276] 0 containers: []
	W0927 01:43:32.113020   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:32.113031   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:32.113046   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:32.168192   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:32.168227   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:32.182702   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:32.182727   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:32.255797   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:32.255824   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:32.255835   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:32.336083   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:32.336115   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
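	(For reference, a minimal sketch of the probe loop visible in the lines above, not part of the recorded log; it assumes shell access to the minikube node and that crictl, journalctl and the bundled kubectl are present, as the log itself indicates.)
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	  sudo crictl ps -a --quiet --name="$name"    # empty output is what logs.go reports as "0 containers"
	done
	sudo journalctl -u kubelet -n 400              # kubelet logs gathered each cycle
	sudo journalctl -u crio -n 400                 # CRI-O logs gathered each cycle
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig   # fails here: localhost:8443 connection refused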
	I0927 01:43:29.022764   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:31.520495   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:31.308851   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:33.807870   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:33.041600   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:35.042193   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:34.880981   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:34.894904   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:34.894976   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:34.933459   69333 cri.go:89] found id: ""
	I0927 01:43:34.933482   69333 logs.go:276] 0 containers: []
	W0927 01:43:34.933490   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:34.933498   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:34.933555   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:34.966893   69333 cri.go:89] found id: ""
	I0927 01:43:34.966917   69333 logs.go:276] 0 containers: []
	W0927 01:43:34.966926   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:34.966933   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:34.966992   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:35.002878   69333 cri.go:89] found id: ""
	I0927 01:43:35.002899   69333 logs.go:276] 0 containers: []
	W0927 01:43:35.002907   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:35.002912   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:35.002970   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:35.039871   69333 cri.go:89] found id: ""
	I0927 01:43:35.039898   69333 logs.go:276] 0 containers: []
	W0927 01:43:35.039908   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:35.039915   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:35.039977   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:35.078229   69333 cri.go:89] found id: ""
	I0927 01:43:35.078255   69333 logs.go:276] 0 containers: []
	W0927 01:43:35.078267   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:35.078274   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:35.078342   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:35.114369   69333 cri.go:89] found id: ""
	I0927 01:43:35.114397   69333 logs.go:276] 0 containers: []
	W0927 01:43:35.114408   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:35.114415   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:35.114475   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:35.148072   69333 cri.go:89] found id: ""
	I0927 01:43:35.148100   69333 logs.go:276] 0 containers: []
	W0927 01:43:35.148110   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:35.148117   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:35.148188   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:35.184020   69333 cri.go:89] found id: ""
	I0927 01:43:35.184051   69333 logs.go:276] 0 containers: []
	W0927 01:43:35.184062   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:35.184073   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:35.184086   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:35.197332   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:35.197355   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:35.273860   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:35.273889   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:35.273904   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:35.354647   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:35.354680   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:35.392622   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:35.392651   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:33.521889   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:36.020067   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:38.021354   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:35.808365   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:38.307251   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:37.541793   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:40.043418   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:37.943024   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:37.957265   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:37.957329   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:37.991294   69333 cri.go:89] found id: ""
	I0927 01:43:37.991348   69333 logs.go:276] 0 containers: []
	W0927 01:43:37.991362   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:37.991368   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:37.991421   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:38.026960   69333 cri.go:89] found id: ""
	I0927 01:43:38.026981   69333 logs.go:276] 0 containers: []
	W0927 01:43:38.026990   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:38.026998   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:38.027057   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:38.063540   69333 cri.go:89] found id: ""
	I0927 01:43:38.063563   69333 logs.go:276] 0 containers: []
	W0927 01:43:38.063571   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:38.063576   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:38.063627   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:38.099554   69333 cri.go:89] found id: ""
	I0927 01:43:38.099602   69333 logs.go:276] 0 containers: []
	W0927 01:43:38.099613   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:38.099621   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:38.099689   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:38.136576   69333 cri.go:89] found id: ""
	I0927 01:43:38.136604   69333 logs.go:276] 0 containers: []
	W0927 01:43:38.136615   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:38.136623   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:38.136676   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:38.170411   69333 cri.go:89] found id: ""
	I0927 01:43:38.170441   69333 logs.go:276] 0 containers: []
	W0927 01:43:38.170452   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:38.170458   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:38.170512   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:38.211902   69333 cri.go:89] found id: ""
	I0927 01:43:38.211934   69333 logs.go:276] 0 containers: []
	W0927 01:43:38.211945   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:38.211951   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:38.212007   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:38.247850   69333 cri.go:89] found id: ""
	I0927 01:43:38.247875   69333 logs.go:276] 0 containers: []
	W0927 01:43:38.247885   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:38.247895   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:38.247913   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:38.329353   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:38.329384   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:38.369114   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:38.369148   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:38.420578   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:38.420613   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:38.434019   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:38.434050   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:38.517921   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:41.018609   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:41.032308   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:41.032370   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:41.068491   69333 cri.go:89] found id: ""
	I0927 01:43:41.068518   69333 logs.go:276] 0 containers: []
	W0927 01:43:41.068529   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:41.068536   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:41.068597   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:41.106527   69333 cri.go:89] found id: ""
	I0927 01:43:41.106555   69333 logs.go:276] 0 containers: []
	W0927 01:43:41.106565   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:41.106571   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:41.106634   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:41.142846   69333 cri.go:89] found id: ""
	I0927 01:43:41.142870   69333 logs.go:276] 0 containers: []
	W0927 01:43:41.142880   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:41.142887   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:41.142949   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:41.187499   69333 cri.go:89] found id: ""
	I0927 01:43:41.187525   69333 logs.go:276] 0 containers: []
	W0927 01:43:41.187536   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:41.187544   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:41.187606   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:41.226040   69333 cri.go:89] found id: ""
	I0927 01:43:41.226063   69333 logs.go:276] 0 containers: []
	W0927 01:43:41.226070   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:41.226076   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:41.226153   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:41.261399   69333 cri.go:89] found id: ""
	I0927 01:43:41.261429   69333 logs.go:276] 0 containers: []
	W0927 01:43:41.261440   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:41.261446   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:41.261493   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:41.300709   69333 cri.go:89] found id: ""
	I0927 01:43:41.300730   69333 logs.go:276] 0 containers: []
	W0927 01:43:41.300737   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:41.300743   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:41.300799   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:41.335725   69333 cri.go:89] found id: ""
	I0927 01:43:41.335751   69333 logs.go:276] 0 containers: []
	W0927 01:43:41.335759   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:41.335767   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:41.335776   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:41.387756   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:41.387788   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:41.401717   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:41.401743   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:41.479524   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:41.479548   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:41.479562   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:41.559926   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:41.559959   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:40.520642   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:42.521344   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:40.307769   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:42.807328   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:42.541384   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:44.548925   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:44.107615   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:44.122628   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:44.122690   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:44.163496   69333 cri.go:89] found id: ""
	I0927 01:43:44.163521   69333 logs.go:276] 0 containers: []
	W0927 01:43:44.163529   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:44.163541   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:44.163588   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:44.203488   69333 cri.go:89] found id: ""
	I0927 01:43:44.203519   69333 logs.go:276] 0 containers: []
	W0927 01:43:44.203529   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:44.203535   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:44.203600   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:44.238111   69333 cri.go:89] found id: ""
	I0927 01:43:44.238141   69333 logs.go:276] 0 containers: []
	W0927 01:43:44.238148   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:44.238154   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:44.238221   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:44.272954   69333 cri.go:89] found id: ""
	I0927 01:43:44.272981   69333 logs.go:276] 0 containers: []
	W0927 01:43:44.272991   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:44.272998   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:44.273057   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:44.309700   69333 cri.go:89] found id: ""
	I0927 01:43:44.309719   69333 logs.go:276] 0 containers: []
	W0927 01:43:44.309726   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:44.309731   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:44.309776   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:44.344532   69333 cri.go:89] found id: ""
	I0927 01:43:44.344563   69333 logs.go:276] 0 containers: []
	W0927 01:43:44.344573   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:44.344580   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:44.344641   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:44.379354   69333 cri.go:89] found id: ""
	I0927 01:43:44.379380   69333 logs.go:276] 0 containers: []
	W0927 01:43:44.379391   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:44.379399   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:44.379461   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:44.415297   69333 cri.go:89] found id: ""
	I0927 01:43:44.415344   69333 logs.go:276] 0 containers: []
	W0927 01:43:44.415356   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:44.415366   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:44.415381   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:44.468570   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:44.468602   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:44.483419   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:44.483453   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:44.560718   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:44.560737   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:44.560753   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:44.641130   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:44.641173   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:47.188520   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:47.202189   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:47.202262   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:47.243051   69333 cri.go:89] found id: ""
	I0927 01:43:47.243075   69333 logs.go:276] 0 containers: []
	W0927 01:43:47.243083   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:47.243089   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:47.243155   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:47.280071   69333 cri.go:89] found id: ""
	I0927 01:43:47.280094   69333 logs.go:276] 0 containers: []
	W0927 01:43:47.280104   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:47.280111   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:47.280170   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:47.318458   69333 cri.go:89] found id: ""
	I0927 01:43:47.318482   69333 logs.go:276] 0 containers: []
	W0927 01:43:47.318492   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:47.318499   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:47.318551   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:45.023799   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:47.522945   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:45.307910   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:47.309781   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:49.807329   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:47.041371   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:49.042307   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:47.352891   69333 cri.go:89] found id: ""
	I0927 01:43:47.352916   69333 logs.go:276] 0 containers: []
	W0927 01:43:47.352926   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:47.352933   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:47.352997   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:47.387534   69333 cri.go:89] found id: ""
	I0927 01:43:47.387560   69333 logs.go:276] 0 containers: []
	W0927 01:43:47.387569   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:47.387578   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:47.387646   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:47.422221   69333 cri.go:89] found id: ""
	I0927 01:43:47.422254   69333 logs.go:276] 0 containers: []
	W0927 01:43:47.422265   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:47.422273   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:47.422330   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:47.459624   69333 cri.go:89] found id: ""
	I0927 01:43:47.459645   69333 logs.go:276] 0 containers: []
	W0927 01:43:47.459653   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:47.459659   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:47.459706   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:47.494322   69333 cri.go:89] found id: ""
	I0927 01:43:47.494347   69333 logs.go:276] 0 containers: []
	W0927 01:43:47.494355   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:47.494363   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:47.494375   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:47.508031   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:47.508056   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:47.583920   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:47.583952   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:47.583968   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:47.665533   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:47.665568   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:47.708423   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:47.708455   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:50.261602   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:50.275548   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:50.275607   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:50.311583   69333 cri.go:89] found id: ""
	I0927 01:43:50.311610   69333 logs.go:276] 0 containers: []
	W0927 01:43:50.311620   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:50.311627   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:50.311687   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:50.347686   69333 cri.go:89] found id: ""
	I0927 01:43:50.347709   69333 logs.go:276] 0 containers: []
	W0927 01:43:50.347721   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:50.347729   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:50.347778   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:50.386627   69333 cri.go:89] found id: ""
	I0927 01:43:50.386654   69333 logs.go:276] 0 containers: []
	W0927 01:43:50.386663   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:50.386669   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:50.386719   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:50.421512   69333 cri.go:89] found id: ""
	I0927 01:43:50.421538   69333 logs.go:276] 0 containers: []
	W0927 01:43:50.421547   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:50.421552   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:50.421603   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:50.461849   69333 cri.go:89] found id: ""
	I0927 01:43:50.461872   69333 logs.go:276] 0 containers: []
	W0927 01:43:50.461880   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:50.461885   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:50.461941   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:50.496517   69333 cri.go:89] found id: ""
	I0927 01:43:50.496540   69333 logs.go:276] 0 containers: []
	W0927 01:43:50.496548   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:50.496554   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:50.496600   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:50.532595   69333 cri.go:89] found id: ""
	I0927 01:43:50.532619   69333 logs.go:276] 0 containers: []
	W0927 01:43:50.532630   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:50.532638   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:50.532687   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:50.573213   69333 cri.go:89] found id: ""
	I0927 01:43:50.573241   69333 logs.go:276] 0 containers: []
	W0927 01:43:50.573252   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:50.573262   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:50.573275   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:50.625600   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:50.625633   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:50.639512   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:50.639535   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:50.708393   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:50.708415   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:50.708436   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:50.789812   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:50.789845   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:50.020837   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:52.021314   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:51.807713   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:54.308918   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:51.541348   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:53.542994   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:53.335858   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:53.349369   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:53.349441   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:53.386922   69333 cri.go:89] found id: ""
	I0927 01:43:53.386947   69333 logs.go:276] 0 containers: []
	W0927 01:43:53.386955   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:53.386961   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:53.387007   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:53.423614   69333 cri.go:89] found id: ""
	I0927 01:43:53.423640   69333 logs.go:276] 0 containers: []
	W0927 01:43:53.423651   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:53.423658   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:53.423721   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:53.463245   69333 cri.go:89] found id: ""
	I0927 01:43:53.463265   69333 logs.go:276] 0 containers: []
	W0927 01:43:53.463273   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:53.463280   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:53.463344   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:53.502093   69333 cri.go:89] found id: ""
	I0927 01:43:53.502123   69333 logs.go:276] 0 containers: []
	W0927 01:43:53.502133   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:53.502140   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:53.502196   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:53.538616   69333 cri.go:89] found id: ""
	I0927 01:43:53.538641   69333 logs.go:276] 0 containers: []
	W0927 01:43:53.538652   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:53.538659   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:53.538716   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:53.578580   69333 cri.go:89] found id: ""
	I0927 01:43:53.578609   69333 logs.go:276] 0 containers: []
	W0927 01:43:53.578617   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:53.578623   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:53.578685   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:53.615240   69333 cri.go:89] found id: ""
	I0927 01:43:53.615266   69333 logs.go:276] 0 containers: []
	W0927 01:43:53.615275   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:53.615282   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:53.615356   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:53.650987   69333 cri.go:89] found id: ""
	I0927 01:43:53.651011   69333 logs.go:276] 0 containers: []
	W0927 01:43:53.651019   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:53.651028   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:53.651038   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:53.664817   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:53.664841   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:53.737875   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:53.737894   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:53.737909   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:53.827293   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:53.827345   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:53.867157   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:53.867188   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:56.423435   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:56.437837   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:56.437912   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:56.480328   69333 cri.go:89] found id: ""
	I0927 01:43:56.480349   69333 logs.go:276] 0 containers: []
	W0927 01:43:56.480357   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:56.480364   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:56.480427   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:56.520627   69333 cri.go:89] found id: ""
	I0927 01:43:56.520651   69333 logs.go:276] 0 containers: []
	W0927 01:43:56.520660   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:56.520667   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:56.520726   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:56.561527   69333 cri.go:89] found id: ""
	I0927 01:43:56.561555   69333 logs.go:276] 0 containers: []
	W0927 01:43:56.561567   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:56.561574   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:56.561634   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:56.598751   69333 cri.go:89] found id: ""
	I0927 01:43:56.598783   69333 logs.go:276] 0 containers: []
	W0927 01:43:56.598794   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:56.598801   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:56.598861   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:56.634378   69333 cri.go:89] found id: ""
	I0927 01:43:56.634410   69333 logs.go:276] 0 containers: []
	W0927 01:43:56.634422   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:56.634429   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:56.634489   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:56.669819   69333 cri.go:89] found id: ""
	I0927 01:43:56.669852   69333 logs.go:276] 0 containers: []
	W0927 01:43:56.669863   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:56.669877   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:56.669929   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:56.703715   69333 cri.go:89] found id: ""
	I0927 01:43:56.703740   69333 logs.go:276] 0 containers: []
	W0927 01:43:56.703750   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:56.703757   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:56.703820   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:56.737208   69333 cri.go:89] found id: ""
	I0927 01:43:56.737234   69333 logs.go:276] 0 containers: []
	W0927 01:43:56.737245   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:56.737255   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:56.737269   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:56.749933   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:56.749960   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:56.822331   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:56.822353   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:56.822369   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:56.904415   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:56.904454   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:43:56.947108   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:43:56.947136   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:54.521004   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:56.521281   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:56.807935   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:58.808046   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:56.041831   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:58.042496   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:00.542924   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:59.500580   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:43:59.523807   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:43:59.523888   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:43:59.562931   69333 cri.go:89] found id: ""
	I0927 01:43:59.562955   69333 logs.go:276] 0 containers: []
	W0927 01:43:59.562963   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:43:59.562968   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:43:59.563013   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:43:59.599321   69333 cri.go:89] found id: ""
	I0927 01:43:59.599348   69333 logs.go:276] 0 containers: []
	W0927 01:43:59.599358   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:43:59.599363   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:43:59.599418   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:43:59.634404   69333 cri.go:89] found id: ""
	I0927 01:43:59.634431   69333 logs.go:276] 0 containers: []
	W0927 01:43:59.634441   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:43:59.634448   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:43:59.634498   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:43:59.672022   69333 cri.go:89] found id: ""
	I0927 01:43:59.672052   69333 logs.go:276] 0 containers: []
	W0927 01:43:59.672066   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:43:59.672074   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:43:59.672134   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:43:59.704617   69333 cri.go:89] found id: ""
	I0927 01:43:59.704647   69333 logs.go:276] 0 containers: []
	W0927 01:43:59.704657   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:43:59.704664   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:43:59.704712   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:43:59.740479   69333 cri.go:89] found id: ""
	I0927 01:43:59.740504   69333 logs.go:276] 0 containers: []
	W0927 01:43:59.740512   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:43:59.740517   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:43:59.740579   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:43:59.777123   69333 cri.go:89] found id: ""
	I0927 01:43:59.777155   69333 logs.go:276] 0 containers: []
	W0927 01:43:59.777166   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:43:59.777174   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:43:59.777234   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:43:59.817780   69333 cri.go:89] found id: ""
	I0927 01:43:59.817803   69333 logs.go:276] 0 containers: []
	W0927 01:43:59.817825   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:43:59.817841   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:43:59.817856   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:43:59.831252   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:43:59.831278   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:43:59.901912   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:43:59.901936   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:43:59.901949   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:43:59.983001   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:43:59.983034   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:00.030989   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:00.031020   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:43:59.020139   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:01.020925   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:01.306853   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:03.308075   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:03.042494   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:05.043814   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:02.583949   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:02.596723   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:02.596798   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:02.630927   69333 cri.go:89] found id: ""
	I0927 01:44:02.630953   69333 logs.go:276] 0 containers: []
	W0927 01:44:02.630962   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:02.630967   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:02.631012   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:02.664156   69333 cri.go:89] found id: ""
	I0927 01:44:02.664186   69333 logs.go:276] 0 containers: []
	W0927 01:44:02.664198   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:02.664205   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:02.664259   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:02.698823   69333 cri.go:89] found id: ""
	I0927 01:44:02.698847   69333 logs.go:276] 0 containers: []
	W0927 01:44:02.698860   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:02.698865   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:02.698913   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:02.736114   69333 cri.go:89] found id: ""
	I0927 01:44:02.736142   69333 logs.go:276] 0 containers: []
	W0927 01:44:02.736154   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:02.736161   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:02.736221   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:02.769739   69333 cri.go:89] found id: ""
	I0927 01:44:02.769763   69333 logs.go:276] 0 containers: []
	W0927 01:44:02.769771   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:02.769785   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:02.769844   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:02.804798   69333 cri.go:89] found id: ""
	I0927 01:44:02.804871   69333 logs.go:276] 0 containers: []
	W0927 01:44:02.804887   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:02.804898   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:02.804958   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:02.841197   69333 cri.go:89] found id: ""
	I0927 01:44:02.841224   69333 logs.go:276] 0 containers: []
	W0927 01:44:02.841236   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:02.841243   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:02.841301   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:02.881278   69333 cri.go:89] found id: ""
	I0927 01:44:02.881310   69333 logs.go:276] 0 containers: []
	W0927 01:44:02.881321   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:02.881331   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:02.881345   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:02.935149   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:02.935183   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:02.950245   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:02.950273   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:03.020241   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:03.020263   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:03.020277   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:03.104467   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:03.104503   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:05.643070   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:05.656656   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:05.656716   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:05.694022   69333 cri.go:89] found id: ""
	I0927 01:44:05.694045   69333 logs.go:276] 0 containers: []
	W0927 01:44:05.694053   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:05.694059   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:05.694123   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:05.728575   69333 cri.go:89] found id: ""
	I0927 01:44:05.728600   69333 logs.go:276] 0 containers: []
	W0927 01:44:05.728607   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:05.728613   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:05.728667   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:05.768546   69333 cri.go:89] found id: ""
	I0927 01:44:05.768572   69333 logs.go:276] 0 containers: []
	W0927 01:44:05.768583   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:05.768590   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:05.768652   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:05.809504   69333 cri.go:89] found id: ""
	I0927 01:44:05.809527   69333 logs.go:276] 0 containers: []
	W0927 01:44:05.809536   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:05.809543   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:05.809600   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:05.846387   69333 cri.go:89] found id: ""
	I0927 01:44:05.846415   69333 logs.go:276] 0 containers: []
	W0927 01:44:05.846422   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:05.846428   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:05.846479   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:05.879579   69333 cri.go:89] found id: ""
	I0927 01:44:05.879608   69333 logs.go:276] 0 containers: []
	W0927 01:44:05.879619   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:05.879626   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:05.879684   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:05.928932   69333 cri.go:89] found id: ""
	I0927 01:44:05.928961   69333 logs.go:276] 0 containers: []
	W0927 01:44:05.928970   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:05.928978   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:05.929037   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:05.986463   69333 cri.go:89] found id: ""
	I0927 01:44:05.986490   69333 logs.go:276] 0 containers: []
	W0927 01:44:05.986499   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:05.986507   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:05.986521   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:06.039984   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:06.040011   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:06.053025   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:06.053051   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:06.127277   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:06.127316   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:06.127330   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:06.201473   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:06.201504   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:03.520539   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:06.021584   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:05.808474   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:08.307407   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:07.542959   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:10.042223   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:08.739339   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:08.753354   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:08.753418   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:08.788513   69333 cri.go:89] found id: ""
	I0927 01:44:08.788544   69333 logs.go:276] 0 containers: []
	W0927 01:44:08.788556   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:08.788563   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:08.788648   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:08.824615   69333 cri.go:89] found id: ""
	I0927 01:44:08.824642   69333 logs.go:276] 0 containers: []
	W0927 01:44:08.824653   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:08.824661   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:08.824724   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:08.858327   69333 cri.go:89] found id: ""
	I0927 01:44:08.858354   69333 logs.go:276] 0 containers: []
	W0927 01:44:08.858365   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:08.858372   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:08.858430   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:08.896140   69333 cri.go:89] found id: ""
	I0927 01:44:08.896168   69333 logs.go:276] 0 containers: []
	W0927 01:44:08.896175   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:08.896181   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:08.896229   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:08.931525   69333 cri.go:89] found id: ""
	I0927 01:44:08.931547   69333 logs.go:276] 0 containers: []
	W0927 01:44:08.931554   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:08.931560   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:08.931618   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:08.970224   69333 cri.go:89] found id: ""
	I0927 01:44:08.970246   69333 logs.go:276] 0 containers: []
	W0927 01:44:08.970256   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:08.970263   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:08.970331   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:09.007213   69333 cri.go:89] found id: ""
	I0927 01:44:09.007240   69333 logs.go:276] 0 containers: []
	W0927 01:44:09.007248   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:09.007255   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:09.007334   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:09.043078   69333 cri.go:89] found id: ""
	I0927 01:44:09.043111   69333 logs.go:276] 0 containers: []
	W0927 01:44:09.043122   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:09.043132   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:09.043147   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:09.096768   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:09.096801   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:09.110721   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:09.110747   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:09.182966   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:09.182990   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:09.183004   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:09.259497   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:09.259541   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:11.797307   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:11.812141   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:11.812196   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:11.846429   69333 cri.go:89] found id: ""
	I0927 01:44:11.846468   69333 logs.go:276] 0 containers: []
	W0927 01:44:11.846482   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:11.846489   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:11.846598   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:11.885294   69333 cri.go:89] found id: ""
	I0927 01:44:11.885322   69333 logs.go:276] 0 containers: []
	W0927 01:44:11.885333   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:11.885339   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:11.885398   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:11.920856   69333 cri.go:89] found id: ""
	I0927 01:44:11.920884   69333 logs.go:276] 0 containers: []
	W0927 01:44:11.920892   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:11.920898   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:11.920946   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:11.964540   69333 cri.go:89] found id: ""
	I0927 01:44:11.964564   69333 logs.go:276] 0 containers: []
	W0927 01:44:11.964574   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:11.964581   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:11.964634   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:12.000596   69333 cri.go:89] found id: ""
	I0927 01:44:12.000619   69333 logs.go:276] 0 containers: []
	W0927 01:44:12.000629   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:12.000636   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:12.000697   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:12.037773   69333 cri.go:89] found id: ""
	I0927 01:44:12.037808   69333 logs.go:276] 0 containers: []
	W0927 01:44:12.037819   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:12.037831   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:12.037893   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:12.074646   69333 cri.go:89] found id: ""
	I0927 01:44:12.074676   69333 logs.go:276] 0 containers: []
	W0927 01:44:12.074687   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:12.074692   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:12.074740   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:12.111771   69333 cri.go:89] found id: ""
	I0927 01:44:12.111802   69333 logs.go:276] 0 containers: []
	W0927 01:44:12.111813   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:12.111824   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:12.111837   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:12.160938   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:12.160971   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:12.175576   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:12.175605   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:12.245227   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:12.245263   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:12.245278   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:12.325161   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:12.325194   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:08.520111   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:10.520326   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:12.520755   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:10.808039   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:12.808843   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:12.042905   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:14.542272   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:14.867795   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:14.881053   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:14.881130   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:14.915193   69333 cri.go:89] found id: ""
	I0927 01:44:14.915224   69333 logs.go:276] 0 containers: []
	W0927 01:44:14.915234   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:14.915241   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:14.915318   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:14.951758   69333 cri.go:89] found id: ""
	I0927 01:44:14.951789   69333 logs.go:276] 0 containers: []
	W0927 01:44:14.951801   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:14.951808   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:14.951860   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:14.987875   69333 cri.go:89] found id: ""
	I0927 01:44:14.987906   69333 logs.go:276] 0 containers: []
	W0927 01:44:14.987917   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:14.987924   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:14.987985   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:15.025780   69333 cri.go:89] found id: ""
	I0927 01:44:15.025810   69333 logs.go:276] 0 containers: []
	W0927 01:44:15.025820   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:15.025828   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:15.025884   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:15.062135   69333 cri.go:89] found id: ""
	I0927 01:44:15.062157   69333 logs.go:276] 0 containers: []
	W0927 01:44:15.062165   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:15.062172   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:15.062225   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:15.097090   69333 cri.go:89] found id: ""
	I0927 01:44:15.097112   69333 logs.go:276] 0 containers: []
	W0927 01:44:15.097119   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:15.097126   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:15.097170   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:15.130528   69333 cri.go:89] found id: ""
	I0927 01:44:15.130552   69333 logs.go:276] 0 containers: []
	W0927 01:44:15.130561   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:15.130569   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:15.130615   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:15.165422   69333 cri.go:89] found id: ""
	I0927 01:44:15.165450   69333 logs.go:276] 0 containers: []
	W0927 01:44:15.165457   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:15.165465   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:15.165474   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:15.214612   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:15.214651   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:15.230294   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:15.230318   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:15.303339   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:15.303362   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:15.303375   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:15.382046   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:15.382081   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:14.520833   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:17.021034   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:15.308397   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:17.808221   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:16.542334   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:18.543785   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:17.923331   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:17.937693   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:17.937765   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:17.972677   69333 cri.go:89] found id: ""
	I0927 01:44:17.972699   69333 logs.go:276] 0 containers: []
	W0927 01:44:17.972707   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:17.972714   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:17.972764   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:18.004818   69333 cri.go:89] found id: ""
	I0927 01:44:18.004846   69333 logs.go:276] 0 containers: []
	W0927 01:44:18.004854   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:18.004860   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:18.004907   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:18.044693   69333 cri.go:89] found id: ""
	I0927 01:44:18.044716   69333 logs.go:276] 0 containers: []
	W0927 01:44:18.044723   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:18.044728   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:18.044772   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:18.079205   69333 cri.go:89] found id: ""
	I0927 01:44:18.079235   69333 logs.go:276] 0 containers: []
	W0927 01:44:18.079244   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:18.079249   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:18.079299   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:18.115272   69333 cri.go:89] found id: ""
	I0927 01:44:18.115322   69333 logs.go:276] 0 containers: []
	W0927 01:44:18.115335   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:18.115343   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:18.115412   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:18.150165   69333 cri.go:89] found id: ""
	I0927 01:44:18.150195   69333 logs.go:276] 0 containers: []
	W0927 01:44:18.150206   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:18.150213   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:18.150275   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:18.184971   69333 cri.go:89] found id: ""
	I0927 01:44:18.184999   69333 logs.go:276] 0 containers: []
	W0927 01:44:18.185009   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:18.185016   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:18.185083   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:18.219955   69333 cri.go:89] found id: ""
	I0927 01:44:18.219985   69333 logs.go:276] 0 containers: []
	W0927 01:44:18.219997   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:18.220008   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:18.220020   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:18.269713   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:18.269748   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:18.285224   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:18.285251   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:18.364887   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:18.364912   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:18.364927   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:18.450667   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:18.450706   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:20.991648   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:21.006472   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:21.006529   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:21.043455   69333 cri.go:89] found id: ""
	I0927 01:44:21.043476   69333 logs.go:276] 0 containers: []
	W0927 01:44:21.043486   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:21.043493   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:21.043549   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:21.080365   69333 cri.go:89] found id: ""
	I0927 01:44:21.080391   69333 logs.go:276] 0 containers: []
	W0927 01:44:21.080399   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:21.080405   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:21.080449   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:21.117576   69333 cri.go:89] found id: ""
	I0927 01:44:21.117624   69333 logs.go:276] 0 containers: []
	W0927 01:44:21.117636   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:21.117642   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:21.117703   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:21.154538   69333 cri.go:89] found id: ""
	I0927 01:44:21.154564   69333 logs.go:276] 0 containers: []
	W0927 01:44:21.154576   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:21.154584   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:21.154638   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:21.190046   69333 cri.go:89] found id: ""
	I0927 01:44:21.190070   69333 logs.go:276] 0 containers: []
	W0927 01:44:21.190080   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:21.190086   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:21.190147   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:21.226383   69333 cri.go:89] found id: ""
	I0927 01:44:21.226407   69333 logs.go:276] 0 containers: []
	W0927 01:44:21.226417   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:21.226424   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:21.226485   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:21.262090   69333 cri.go:89] found id: ""
	I0927 01:44:21.262113   69333 logs.go:276] 0 containers: []
	W0927 01:44:21.262124   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:21.262132   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:21.262188   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:21.297675   69333 cri.go:89] found id: ""
	I0927 01:44:21.297697   69333 logs.go:276] 0 containers: []
	W0927 01:44:21.297706   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:21.297716   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:21.297728   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:21.349668   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:21.349705   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:21.364608   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:21.364635   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:21.432570   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:21.432596   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:21.432612   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:21.507616   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:21.507661   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:19.520792   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:21.521341   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:20.307600   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:22.308557   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:24.807578   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:21.041736   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:23.041809   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:25.540974   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:24.054212   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:24.067954   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:24.068014   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:24.107017   69333 cri.go:89] found id: ""
	I0927 01:44:24.107045   69333 logs.go:276] 0 containers: []
	W0927 01:44:24.107056   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:24.107063   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:24.107124   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:24.144373   69333 cri.go:89] found id: ""
	I0927 01:44:24.144398   69333 logs.go:276] 0 containers: []
	W0927 01:44:24.144406   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:24.144411   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:24.144473   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:24.180010   69333 cri.go:89] found id: ""
	I0927 01:44:24.180038   69333 logs.go:276] 0 containers: []
	W0927 01:44:24.180048   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:24.180056   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:24.180118   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:24.214387   69333 cri.go:89] found id: ""
	I0927 01:44:24.214413   69333 logs.go:276] 0 containers: []
	W0927 01:44:24.214421   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:24.214426   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:24.214472   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:24.252597   69333 cri.go:89] found id: ""
	I0927 01:44:24.252623   69333 logs.go:276] 0 containers: []
	W0927 01:44:24.252631   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:24.252643   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:24.252705   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:24.292044   69333 cri.go:89] found id: ""
	I0927 01:44:24.292072   69333 logs.go:276] 0 containers: []
	W0927 01:44:24.292082   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:24.292089   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:24.292158   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:24.329899   69333 cri.go:89] found id: ""
	I0927 01:44:24.329924   69333 logs.go:276] 0 containers: []
	W0927 01:44:24.329934   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:24.329940   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:24.329998   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:24.367964   69333 cri.go:89] found id: ""
	I0927 01:44:24.367989   69333 logs.go:276] 0 containers: []
	W0927 01:44:24.368000   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:24.368010   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:24.368025   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:24.384151   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:24.384184   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:24.456916   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:24.456940   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:24.456958   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:24.539362   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:24.539399   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:24.578384   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:24.578411   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:27.132700   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:27.146218   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:27.146294   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:27.180958   69333 cri.go:89] found id: ""
	I0927 01:44:27.180984   69333 logs.go:276] 0 containers: []
	W0927 01:44:27.180992   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:27.180997   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:27.181043   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:27.215213   69333 cri.go:89] found id: ""
	I0927 01:44:27.215236   69333 logs.go:276] 0 containers: []
	W0927 01:44:27.215243   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:27.215249   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:27.215293   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:27.258192   69333 cri.go:89] found id: ""
	I0927 01:44:27.258216   69333 logs.go:276] 0 containers: []
	W0927 01:44:27.258226   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:27.258233   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:27.258289   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:27.292717   69333 cri.go:89] found id: ""
	I0927 01:44:27.292742   69333 logs.go:276] 0 containers: []
	W0927 01:44:27.292753   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:27.292760   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:27.292818   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:27.328038   69333 cri.go:89] found id: ""
	I0927 01:44:27.328066   69333 logs.go:276] 0 containers: []
	W0927 01:44:27.328076   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:27.328083   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:27.328152   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:24.021885   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:26.520726   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:27.308923   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:29.807825   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:27.542683   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:30.042293   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:27.363513   69333 cri.go:89] found id: ""
	I0927 01:44:27.363539   69333 logs.go:276] 0 containers: []
	W0927 01:44:27.363548   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:27.363553   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:27.363610   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:27.402201   69333 cri.go:89] found id: ""
	I0927 01:44:27.402223   69333 logs.go:276] 0 containers: []
	W0927 01:44:27.402231   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:27.402237   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:27.402290   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:27.436952   69333 cri.go:89] found id: ""
	I0927 01:44:27.436979   69333 logs.go:276] 0 containers: []
	W0927 01:44:27.436987   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:27.436995   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:27.437009   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:27.487908   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:27.487938   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:27.502170   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:27.502199   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:27.583909   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:27.583931   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:27.583943   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:27.660248   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:27.660286   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
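	The gathering cycle above keeps failing for one reason: no control-plane containers exist and nothing answers on localhost:8443. A minimal sketch of reproducing those checks by hand on the node (the crictl and journalctl calls mirror the log above; the ss probe of port 8443 is an assumed extra step, not something the test itself runs):

		# list any kube-apiserver containers CRI-O knows about, as the log-gathering step does
		sudo crictl ps -a --name=kube-apiserver
		# check whether anything is listening on the port the "connection refused" errors point at
		sudo ss -tlnp | grep 8443 || echo "nothing listening on 8443"
		# tail the kubelet unit for startup errors, matching the journalctl call in the log
		sudo journalctl -u kubelet -n 400 --no-pager | tail -n 40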
	I0927 01:44:30.201211   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:30.214276   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:30.214350   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:30.252445   69333 cri.go:89] found id: ""
	I0927 01:44:30.252474   69333 logs.go:276] 0 containers: []
	W0927 01:44:30.252484   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:30.252490   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:30.252538   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:30.287574   69333 cri.go:89] found id: ""
	I0927 01:44:30.287603   69333 logs.go:276] 0 containers: []
	W0927 01:44:30.287614   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:30.287621   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:30.287693   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:30.324674   69333 cri.go:89] found id: ""
	I0927 01:44:30.324699   69333 logs.go:276] 0 containers: []
	W0927 01:44:30.324711   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:30.324718   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:30.324779   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:30.360493   69333 cri.go:89] found id: ""
	I0927 01:44:30.360521   69333 logs.go:276] 0 containers: []
	W0927 01:44:30.360531   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:30.360539   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:30.360640   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:30.396219   69333 cri.go:89] found id: ""
	I0927 01:44:30.396252   69333 logs.go:276] 0 containers: []
	W0927 01:44:30.396263   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:30.396270   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:30.396328   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:30.431524   69333 cri.go:89] found id: ""
	I0927 01:44:30.431546   69333 logs.go:276] 0 containers: []
	W0927 01:44:30.431558   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:30.431564   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:30.431607   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:30.465887   69333 cri.go:89] found id: ""
	I0927 01:44:30.465915   69333 logs.go:276] 0 containers: []
	W0927 01:44:30.465926   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:30.465933   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:30.466000   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:30.501364   69333 cri.go:89] found id: ""
	I0927 01:44:30.501391   69333 logs.go:276] 0 containers: []
	W0927 01:44:30.501402   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:30.501411   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:30.501425   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:30.556344   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:30.556377   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:30.572619   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:30.572649   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:30.645996   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:30.646020   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:30.646032   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:30.737458   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:30.737531   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:28.521312   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:30.521421   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:33.020699   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:31.807949   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:33.809414   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:32.045244   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:34.542035   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
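	The interleaved pod_ready lines come from three other test runs (PIDs 69534, 68676 and 69234) polling metrics-server pods that never report Ready. A hedged way to inspect the same condition directly with kubectl (the k8s-app=metrics-server label is an assumption based on the pod names; it does not appear in the log):

		# print each metrics-server pod with its Ready condition status
		kubectl -n kube-system get pods -l k8s-app=metrics-server \
		  -o jsonpath='{range .items[*]}{.metadata.name}{" Ready="}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'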
	I0927 01:44:33.284306   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:33.298164   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:33.298224   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:33.334599   69333 cri.go:89] found id: ""
	I0927 01:44:33.334625   69333 logs.go:276] 0 containers: []
	W0927 01:44:33.334634   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:33.334654   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:33.334718   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:33.369006   69333 cri.go:89] found id: ""
	I0927 01:44:33.369034   69333 logs.go:276] 0 containers: []
	W0927 01:44:33.369044   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:33.369051   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:33.369119   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:33.407875   69333 cri.go:89] found id: ""
	I0927 01:44:33.407904   69333 logs.go:276] 0 containers: []
	W0927 01:44:33.407912   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:33.407918   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:33.407974   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:33.441048   69333 cri.go:89] found id: ""
	I0927 01:44:33.441083   69333 logs.go:276] 0 containers: []
	W0927 01:44:33.441094   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:33.441101   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:33.441156   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:33.478458   69333 cri.go:89] found id: ""
	I0927 01:44:33.478503   69333 logs.go:276] 0 containers: []
	W0927 01:44:33.478515   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:33.478522   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:33.478586   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:33.513756   69333 cri.go:89] found id: ""
	I0927 01:44:33.513784   69333 logs.go:276] 0 containers: []
	W0927 01:44:33.513795   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:33.513802   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:33.513862   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:33.554351   69333 cri.go:89] found id: ""
	I0927 01:44:33.554392   69333 logs.go:276] 0 containers: []
	W0927 01:44:33.554403   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:33.554410   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:33.554472   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:33.588484   69333 cri.go:89] found id: ""
	I0927 01:44:33.588512   69333 logs.go:276] 0 containers: []
	W0927 01:44:33.588533   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:33.588544   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:33.588559   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:33.665735   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:33.665775   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:33.704654   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:33.704687   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:33.755444   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:33.755475   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:33.770069   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:33.770095   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:33.841531   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
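	Each cycle ends with the same describe-nodes failure because the on-node kubectl is pointed at an apiserver that is not up. Running the same command by hand (binary path and kubeconfig copied from the log) reproduces the error, and grepping the kubeconfig confirms which endpoint it targets; this is a sketch of a manual check, not part of the test:

		# the exact command the log-gatherer runs; expected to fail with "connection refused" while the apiserver is down
		sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
		# show the server endpoint the kubeconfig points at (localhost:8443 per the error text)
		sudo grep 'server:' /var/lib/minikube/kubeconfig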
	I0927 01:44:36.341963   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:36.355219   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:36.355294   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:36.395149   69333 cri.go:89] found id: ""
	I0927 01:44:36.395185   69333 logs.go:276] 0 containers: []
	W0927 01:44:36.395196   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:36.395203   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:36.395262   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:36.434620   69333 cri.go:89] found id: ""
	I0927 01:44:36.434649   69333 logs.go:276] 0 containers: []
	W0927 01:44:36.434661   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:36.434667   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:36.434729   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:36.468328   69333 cri.go:89] found id: ""
	I0927 01:44:36.468349   69333 logs.go:276] 0 containers: []
	W0927 01:44:36.468357   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:36.468362   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:36.468427   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:36.506386   69333 cri.go:89] found id: ""
	I0927 01:44:36.506413   69333 logs.go:276] 0 containers: []
	W0927 01:44:36.506421   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:36.506427   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:36.506482   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:36.546583   69333 cri.go:89] found id: ""
	I0927 01:44:36.546607   69333 logs.go:276] 0 containers: []
	W0927 01:44:36.546614   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:36.546620   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:36.546665   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:36.581694   69333 cri.go:89] found id: ""
	I0927 01:44:36.581721   69333 logs.go:276] 0 containers: []
	W0927 01:44:36.581730   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:36.581737   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:36.581782   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:36.617775   69333 cri.go:89] found id: ""
	I0927 01:44:36.617799   69333 logs.go:276] 0 containers: []
	W0927 01:44:36.617807   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:36.617813   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:36.617877   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:36.654443   69333 cri.go:89] found id: ""
	I0927 01:44:36.654470   69333 logs.go:276] 0 containers: []
	W0927 01:44:36.654478   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:36.654486   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:36.654496   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:36.705787   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:36.705817   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:36.720643   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:36.720677   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:36.800037   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:36.800061   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:36.800091   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:36.886845   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:36.886884   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:35.023634   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:37.520794   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:36.307516   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:38.307899   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:37.041620   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:39.044257   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:39.429349   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:39.442899   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:39.442973   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:39.481752   69333 cri.go:89] found id: ""
	I0927 01:44:39.481782   69333 logs.go:276] 0 containers: []
	W0927 01:44:39.481793   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:39.481799   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:39.481858   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:39.516074   69333 cri.go:89] found id: ""
	I0927 01:44:39.516103   69333 logs.go:276] 0 containers: []
	W0927 01:44:39.516114   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:39.516130   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:39.516188   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:39.563351   69333 cri.go:89] found id: ""
	I0927 01:44:39.563375   69333 logs.go:276] 0 containers: []
	W0927 01:44:39.563386   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:39.563392   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:39.563455   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:39.601417   69333 cri.go:89] found id: ""
	I0927 01:44:39.601445   69333 logs.go:276] 0 containers: []
	W0927 01:44:39.601455   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:39.601469   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:39.601529   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:39.634537   69333 cri.go:89] found id: ""
	I0927 01:44:39.634565   69333 logs.go:276] 0 containers: []
	W0927 01:44:39.634576   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:39.634582   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:39.634642   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:39.668910   69333 cri.go:89] found id: ""
	I0927 01:44:39.668937   69333 logs.go:276] 0 containers: []
	W0927 01:44:39.668948   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:39.668955   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:39.669013   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:39.701992   69333 cri.go:89] found id: ""
	I0927 01:44:39.702014   69333 logs.go:276] 0 containers: []
	W0927 01:44:39.702021   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:39.702027   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:39.702074   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:39.741579   69333 cri.go:89] found id: ""
	I0927 01:44:39.741601   69333 logs.go:276] 0 containers: []
	W0927 01:44:39.741610   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:39.741618   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:39.741627   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:39.806476   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:39.806510   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:39.820228   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:39.820255   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:39.893137   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:39.893167   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:39.893181   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:39.974477   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:39.974514   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:40.021226   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:42.521217   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:40.309154   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:42.808724   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:41.542308   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:44.042015   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:42.517449   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:42.532200   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:42.532266   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:42.568872   69333 cri.go:89] found id: ""
	I0927 01:44:42.568901   69333 logs.go:276] 0 containers: []
	W0927 01:44:42.568911   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:42.568919   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:42.568980   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:42.605069   69333 cri.go:89] found id: ""
	I0927 01:44:42.605220   69333 logs.go:276] 0 containers: []
	W0927 01:44:42.605251   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:42.605261   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:42.605335   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:42.641637   69333 cri.go:89] found id: ""
	I0927 01:44:42.641665   69333 logs.go:276] 0 containers: []
	W0927 01:44:42.641673   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:42.641680   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:42.641742   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:42.677333   69333 cri.go:89] found id: ""
	I0927 01:44:42.677361   69333 logs.go:276] 0 containers: []
	W0927 01:44:42.677376   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:42.677382   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:42.677439   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:42.712456   69333 cri.go:89] found id: ""
	I0927 01:44:42.712484   69333 logs.go:276] 0 containers: []
	W0927 01:44:42.712495   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:42.712501   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:42.712565   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:42.745109   69333 cri.go:89] found id: ""
	I0927 01:44:42.745140   69333 logs.go:276] 0 containers: []
	W0927 01:44:42.745150   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:42.745157   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:42.745226   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:42.779427   69333 cri.go:89] found id: ""
	I0927 01:44:42.779449   69333 logs.go:276] 0 containers: []
	W0927 01:44:42.779457   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:42.779462   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:42.779508   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:42.823920   69333 cri.go:89] found id: ""
	I0927 01:44:42.823946   69333 logs.go:276] 0 containers: []
	W0927 01:44:42.823954   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:42.823963   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:42.823972   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:42.881345   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:42.881380   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:42.896076   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:42.896100   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:42.971775   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:42.971796   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:42.971809   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:43.054461   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:43.054494   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:45.596681   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:45.610817   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:45.610882   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:45.647628   69333 cri.go:89] found id: ""
	I0927 01:44:45.647654   69333 logs.go:276] 0 containers: []
	W0927 01:44:45.647662   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:45.647668   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:45.647715   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:45.685480   69333 cri.go:89] found id: ""
	I0927 01:44:45.685507   69333 logs.go:276] 0 containers: []
	W0927 01:44:45.685514   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:45.685520   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:45.685573   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:45.721601   69333 cri.go:89] found id: ""
	I0927 01:44:45.721624   69333 logs.go:276] 0 containers: []
	W0927 01:44:45.721632   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:45.721637   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:45.721700   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:45.756763   69333 cri.go:89] found id: ""
	I0927 01:44:45.756788   69333 logs.go:276] 0 containers: []
	W0927 01:44:45.756796   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:45.756802   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:45.756858   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:45.792891   69333 cri.go:89] found id: ""
	I0927 01:44:45.792917   69333 logs.go:276] 0 containers: []
	W0927 01:44:45.792927   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:45.792934   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:45.792996   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:45.828716   69333 cri.go:89] found id: ""
	I0927 01:44:45.828739   69333 logs.go:276] 0 containers: []
	W0927 01:44:45.828747   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:45.828753   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:45.828807   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:45.868813   69333 cri.go:89] found id: ""
	I0927 01:44:45.868840   69333 logs.go:276] 0 containers: []
	W0927 01:44:45.868848   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:45.868853   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:45.868905   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:45.907281   69333 cri.go:89] found id: ""
	I0927 01:44:45.907327   69333 logs.go:276] 0 containers: []
	W0927 01:44:45.907341   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:45.907352   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:45.907371   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:45.958539   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:45.958574   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:45.972540   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:45.972567   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:46.046083   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:46.046124   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:46.046141   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:46.124313   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:46.124349   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:45.021100   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:47.021435   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:45.307916   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:47.807187   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:49.809212   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:46.042143   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:48.541984   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:50.542678   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:48.673701   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:48.687673   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:48.687744   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:48.722269   69333 cri.go:89] found id: ""
	I0927 01:44:48.722291   69333 logs.go:276] 0 containers: []
	W0927 01:44:48.722302   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:48.722308   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:48.722370   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:48.758297   69333 cri.go:89] found id: ""
	I0927 01:44:48.758318   69333 logs.go:276] 0 containers: []
	W0927 01:44:48.758326   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:48.758331   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:48.758377   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:48.792706   69333 cri.go:89] found id: ""
	I0927 01:44:48.792730   69333 logs.go:276] 0 containers: []
	W0927 01:44:48.792738   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:48.792744   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:48.792792   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:48.827015   69333 cri.go:89] found id: ""
	I0927 01:44:48.827035   69333 logs.go:276] 0 containers: []
	W0927 01:44:48.827047   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:48.827052   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:48.827095   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:48.862538   69333 cri.go:89] found id: ""
	I0927 01:44:48.862564   69333 logs.go:276] 0 containers: []
	W0927 01:44:48.862572   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:48.862577   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:48.862632   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:48.896118   69333 cri.go:89] found id: ""
	I0927 01:44:48.896144   69333 logs.go:276] 0 containers: []
	W0927 01:44:48.896154   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:48.896166   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:48.896225   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:48.932483   69333 cri.go:89] found id: ""
	I0927 01:44:48.932511   69333 logs.go:276] 0 containers: []
	W0927 01:44:48.932519   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:48.932524   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:48.932576   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:48.971864   69333 cri.go:89] found id: ""
	I0927 01:44:48.971890   69333 logs.go:276] 0 containers: []
	W0927 01:44:48.971898   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:48.971906   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:48.971919   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:49.028163   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:49.028199   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:49.042780   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:49.042805   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:49.116454   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:49.116476   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:49.116491   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:49.196048   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:49.196084   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
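	The whole sequence repeats roughly every three seconds while the test waits for a kube-apiserver process to appear. A small shell sketch of an equivalent wait, built only from the pgrep pattern shown in the log (the three-second interval is inferred from the timestamps, not taken from the test code):

		# poll until a kube-apiserver process matching minikube's pattern shows up
		until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
		  echo "$(date +%T) kube-apiserver not running yet"
		  sleep 3
		done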
	I0927 01:44:51.735108   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:51.749191   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:51.749258   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:51.784776   69333 cri.go:89] found id: ""
	I0927 01:44:51.784804   69333 logs.go:276] 0 containers: []
	W0927 01:44:51.784815   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:51.784823   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:51.784880   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:51.822807   69333 cri.go:89] found id: ""
	I0927 01:44:51.822836   69333 logs.go:276] 0 containers: []
	W0927 01:44:51.822847   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:51.822854   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:51.822912   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:51.858700   69333 cri.go:89] found id: ""
	I0927 01:44:51.858726   69333 logs.go:276] 0 containers: []
	W0927 01:44:51.858737   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:51.858744   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:51.858812   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:51.894945   69333 cri.go:89] found id: ""
	I0927 01:44:51.894968   69333 logs.go:276] 0 containers: []
	W0927 01:44:51.894975   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:51.894980   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:51.895025   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:51.939475   69333 cri.go:89] found id: ""
	I0927 01:44:51.939503   69333 logs.go:276] 0 containers: []
	W0927 01:44:51.939518   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:51.939524   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:51.939569   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:51.982626   69333 cri.go:89] found id: ""
	I0927 01:44:51.982654   69333 logs.go:276] 0 containers: []
	W0927 01:44:51.982665   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:51.982673   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:51.982731   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:52.050446   69333 cri.go:89] found id: ""
	I0927 01:44:52.050473   69333 logs.go:276] 0 containers: []
	W0927 01:44:52.050483   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:52.050490   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:52.050549   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:52.092637   69333 cri.go:89] found id: ""
	I0927 01:44:52.092666   69333 logs.go:276] 0 containers: []
	W0927 01:44:52.092676   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:52.092686   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:52.092700   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:52.132135   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:52.132165   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:52.186537   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:52.186572   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:52.200001   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:52.200027   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:52.282068   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:52.282093   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:52.282108   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:49.521281   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:52.021229   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:52.308560   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:54.309001   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:53.042624   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:55.043212   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:54.866565   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:54.880400   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:54.880460   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:54.918963   69333 cri.go:89] found id: ""
	I0927 01:44:54.919004   69333 logs.go:276] 0 containers: []
	W0927 01:44:54.919027   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:54.919036   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:54.919107   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:54.959918   69333 cri.go:89] found id: ""
	I0927 01:44:54.959947   69333 logs.go:276] 0 containers: []
	W0927 01:44:54.959958   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:54.959965   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:54.960026   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:55.004348   69333 cri.go:89] found id: ""
	I0927 01:44:55.004370   69333 logs.go:276] 0 containers: []
	W0927 01:44:55.004378   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:55.004392   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:55.004446   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:55.045190   69333 cri.go:89] found id: ""
	I0927 01:44:55.045213   69333 logs.go:276] 0 containers: []
	W0927 01:44:55.045220   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:55.045225   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:55.045278   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:55.087638   69333 cri.go:89] found id: ""
	I0927 01:44:55.087663   69333 logs.go:276] 0 containers: []
	W0927 01:44:55.087671   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:55.087677   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:55.087739   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:55.126899   69333 cri.go:89] found id: ""
	I0927 01:44:55.126932   69333 logs.go:276] 0 containers: []
	W0927 01:44:55.126943   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:55.126951   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:55.127012   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:55.167593   69333 cri.go:89] found id: ""
	I0927 01:44:55.167624   69333 logs.go:276] 0 containers: []
	W0927 01:44:55.167635   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:55.167643   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:55.167706   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:55.208362   69333 cri.go:89] found id: ""
	I0927 01:44:55.208388   69333 logs.go:276] 0 containers: []
	W0927 01:44:55.208399   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:55.208409   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:55.208424   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:55.247198   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:55.247221   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:55.299408   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:55.299443   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:55.315745   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:55.315775   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:55.387499   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:55.387523   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:55.387539   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:54.021502   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:56.520627   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:56.807487   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:58.807902   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:57.541517   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:59.542233   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:57.968863   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:57.987921   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:57.987988   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:58.036770   69333 cri.go:89] found id: ""
	I0927 01:44:58.036802   69333 logs.go:276] 0 containers: []
	W0927 01:44:58.036813   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:44:58.036824   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:58.036878   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:58.072461   69333 cri.go:89] found id: ""
	I0927 01:44:58.072484   69333 logs.go:276] 0 containers: []
	W0927 01:44:58.072492   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:44:58.072499   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:58.072551   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:58.107247   69333 cri.go:89] found id: ""
	I0927 01:44:58.107273   69333 logs.go:276] 0 containers: []
	W0927 01:44:58.107284   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:44:58.107290   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:58.107365   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:58.149050   69333 cri.go:89] found id: ""
	I0927 01:44:58.149080   69333 logs.go:276] 0 containers: []
	W0927 01:44:58.149091   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:44:58.149099   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:58.149162   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:58.188167   69333 cri.go:89] found id: ""
	I0927 01:44:58.188198   69333 logs.go:276] 0 containers: []
	W0927 01:44:58.188209   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:44:58.188217   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:58.188283   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:58.224291   69333 cri.go:89] found id: ""
	I0927 01:44:58.224319   69333 logs.go:276] 0 containers: []
	W0927 01:44:58.224329   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:44:58.224337   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:58.224401   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:58.258786   69333 cri.go:89] found id: ""
	I0927 01:44:58.258813   69333 logs.go:276] 0 containers: []
	W0927 01:44:58.258822   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:44:58.258828   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:58.258885   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:58.298310   69333 cri.go:89] found id: ""
	I0927 01:44:58.298338   69333 logs.go:276] 0 containers: []
	W0927 01:44:58.298349   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:44:58.298359   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:44:58.298373   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:58.340299   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:58.340330   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:58.395097   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:58.395130   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:58.410653   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:58.410677   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:44:58.479437   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:44:58.479459   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:58.479470   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:45:01.057473   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:45:01.071746   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:45:01.071818   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:45:01.112652   69333 cri.go:89] found id: ""
	I0927 01:45:01.112676   69333 logs.go:276] 0 containers: []
	W0927 01:45:01.112684   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:45:01.112690   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:45:01.112735   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:45:01.146071   69333 cri.go:89] found id: ""
	I0927 01:45:01.146100   69333 logs.go:276] 0 containers: []
	W0927 01:45:01.146111   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:45:01.146119   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:45:01.146188   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:45:01.188640   69333 cri.go:89] found id: ""
	I0927 01:45:01.188663   69333 logs.go:276] 0 containers: []
	W0927 01:45:01.188673   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:45:01.188679   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:45:01.188743   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:45:01.225024   69333 cri.go:89] found id: ""
	I0927 01:45:01.225050   69333 logs.go:276] 0 containers: []
	W0927 01:45:01.225060   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:45:01.225067   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:45:01.225128   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:45:01.262459   69333 cri.go:89] found id: ""
	I0927 01:45:01.262487   69333 logs.go:276] 0 containers: []
	W0927 01:45:01.262498   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:45:01.262505   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:45:01.262560   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:45:01.298567   69333 cri.go:89] found id: ""
	I0927 01:45:01.298588   69333 logs.go:276] 0 containers: []
	W0927 01:45:01.298597   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:45:01.298603   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:45:01.298647   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:45:01.335051   69333 cri.go:89] found id: ""
	I0927 01:45:01.335084   69333 logs.go:276] 0 containers: []
	W0927 01:45:01.335094   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:45:01.335100   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:45:01.335149   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:45:01.371187   69333 cri.go:89] found id: ""
	I0927 01:45:01.371217   69333 logs.go:276] 0 containers: []
	W0927 01:45:01.371227   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:45:01.371237   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:45:01.371252   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:45:01.385163   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:45:01.385189   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:45:01.457256   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:45:01.457298   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:45:01.457313   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:45:01.537788   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:45:01.537819   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:45:01.580645   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:45:01.580672   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:44:58.521367   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:01.020826   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:03.021213   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:00.808021   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:03.307242   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:01.542831   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:04.042010   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:04.131877   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:45:04.145175   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:45:04.145248   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:45:04.179508   69333 cri.go:89] found id: ""
	I0927 01:45:04.179535   69333 logs.go:276] 0 containers: []
	W0927 01:45:04.179545   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:45:04.179552   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:45:04.179612   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:45:04.213497   69333 cri.go:89] found id: ""
	I0927 01:45:04.213533   69333 logs.go:276] 0 containers: []
	W0927 01:45:04.213544   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:45:04.213551   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:45:04.213606   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:45:04.249708   69333 cri.go:89] found id: ""
	I0927 01:45:04.249737   69333 logs.go:276] 0 containers: []
	W0927 01:45:04.249747   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:45:04.249754   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:45:04.249824   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:45:04.288283   69333 cri.go:89] found id: ""
	I0927 01:45:04.288306   69333 logs.go:276] 0 containers: []
	W0927 01:45:04.288314   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:45:04.288319   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:45:04.288368   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:45:04.325515   69333 cri.go:89] found id: ""
	I0927 01:45:04.325539   69333 logs.go:276] 0 containers: []
	W0927 01:45:04.325549   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:45:04.325560   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:45:04.325618   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:45:04.363485   69333 cri.go:89] found id: ""
	I0927 01:45:04.363511   69333 logs.go:276] 0 containers: []
	W0927 01:45:04.363521   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:45:04.363528   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:45:04.363586   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:45:04.398834   69333 cri.go:89] found id: ""
	I0927 01:45:04.398863   69333 logs.go:276] 0 containers: []
	W0927 01:45:04.398875   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:45:04.398882   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:45:04.398948   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:45:04.433408   69333 cri.go:89] found id: ""
	I0927 01:45:04.433435   69333 logs.go:276] 0 containers: []
	W0927 01:45:04.433443   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:45:04.433451   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:45:04.433461   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:45:04.485354   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:45:04.485392   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:45:04.499007   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:45:04.499031   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:45:04.569376   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:45:04.569405   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:45:04.569420   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:45:04.646614   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:45:04.646651   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:45:07.186491   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:45:07.200510   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:45:07.200575   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:45:07.239519   69333 cri.go:89] found id: ""
	I0927 01:45:07.239542   69333 logs.go:276] 0 containers: []
	W0927 01:45:07.239553   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:45:07.239562   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:45:07.239751   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:45:07.276820   69333 cri.go:89] found id: ""
	I0927 01:45:07.276854   69333 logs.go:276] 0 containers: []
	W0927 01:45:07.276863   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:45:07.276870   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:45:07.276932   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:45:07.312580   69333 cri.go:89] found id: ""
	I0927 01:45:07.312604   69333 logs.go:276] 0 containers: []
	W0927 01:45:07.312613   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:45:07.312619   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:45:07.312676   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:45:05.520930   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:08.020001   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:05.807739   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:07.807914   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:06.042390   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:08.542149   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:10.542438   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:07.350763   69333 cri.go:89] found id: ""
	I0927 01:45:07.350788   69333 logs.go:276] 0 containers: []
	W0927 01:45:07.350799   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:45:07.350806   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:45:07.350861   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:45:07.385347   69333 cri.go:89] found id: ""
	I0927 01:45:07.385376   69333 logs.go:276] 0 containers: []
	W0927 01:45:07.385383   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:45:07.385389   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:45:07.385439   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:45:07.420665   69333 cri.go:89] found id: ""
	I0927 01:45:07.420696   69333 logs.go:276] 0 containers: []
	W0927 01:45:07.420708   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:45:07.420718   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:45:07.420768   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:45:07.453707   69333 cri.go:89] found id: ""
	I0927 01:45:07.453737   69333 logs.go:276] 0 containers: []
	W0927 01:45:07.453746   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:45:07.453752   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:45:07.453806   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:45:07.489467   69333 cri.go:89] found id: ""
	I0927 01:45:07.489497   69333 logs.go:276] 0 containers: []
	W0927 01:45:07.489508   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:45:07.489520   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:45:07.489531   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:45:07.569464   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:45:07.569496   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:45:07.609123   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:45:07.609160   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:45:07.659556   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:45:07.659590   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:45:07.673163   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:45:07.673191   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:45:07.751340   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:45:10.252511   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:45:10.266651   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:45:10.266706   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:45:10.304131   69333 cri.go:89] found id: ""
	I0927 01:45:10.304160   69333 logs.go:276] 0 containers: []
	W0927 01:45:10.304171   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:45:10.304178   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:45:10.304243   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:45:10.339267   69333 cri.go:89] found id: ""
	I0927 01:45:10.339295   69333 logs.go:276] 0 containers: []
	W0927 01:45:10.339321   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:45:10.339329   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:45:10.339397   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:45:10.376268   69333 cri.go:89] found id: ""
	I0927 01:45:10.376298   69333 logs.go:276] 0 containers: []
	W0927 01:45:10.376308   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:45:10.376319   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:45:10.376380   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:45:10.413944   69333 cri.go:89] found id: ""
	I0927 01:45:10.413970   69333 logs.go:276] 0 containers: []
	W0927 01:45:10.413978   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:45:10.413984   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:45:10.414033   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:45:10.449205   69333 cri.go:89] found id: ""
	I0927 01:45:10.449226   69333 logs.go:276] 0 containers: []
	W0927 01:45:10.449234   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:45:10.449240   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:45:10.449289   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:45:10.487927   69333 cri.go:89] found id: ""
	I0927 01:45:10.487947   69333 logs.go:276] 0 containers: []
	W0927 01:45:10.487955   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:45:10.487961   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:45:10.488018   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:45:10.525062   69333 cri.go:89] found id: ""
	I0927 01:45:10.525085   69333 logs.go:276] 0 containers: []
	W0927 01:45:10.525095   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:45:10.525102   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:45:10.525163   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:45:10.560718   69333 cri.go:89] found id: ""
	I0927 01:45:10.560768   69333 logs.go:276] 0 containers: []
	W0927 01:45:10.560779   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:45:10.560790   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:45:10.560803   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:45:10.641755   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:45:10.641781   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:45:10.641796   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:45:10.719775   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:45:10.719807   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:45:10.761952   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:45:10.761978   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:45:10.815296   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:45:10.815330   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:45:10.023849   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:12.520577   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:10.307967   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:12.807872   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:14.808602   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:13.041469   69234 pod_ready.go:103] pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:15.036533   69234 pod_ready.go:82] duration metric: took 4m0.000873058s for pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace to be "Ready" ...
	E0927 01:45:15.036568   69234 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-k8mdf" in "kube-system" namespace to be "Ready" (will not retry!)
	I0927 01:45:15.036588   69234 pod_ready.go:39] duration metric: took 4m6.530278971s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:45:15.036645   69234 kubeadm.go:597] duration metric: took 4m16.375010355s to restartPrimaryControlPlane
	W0927 01:45:15.036713   69234 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0927 01:45:15.036743   69234 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0927 01:45:13.330300   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:45:13.343840   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:45:13.343893   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:45:13.378904   69333 cri.go:89] found id: ""
	I0927 01:45:13.378933   69333 logs.go:276] 0 containers: []
	W0927 01:45:13.378944   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:45:13.378952   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:45:13.379010   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:45:13.417375   69333 cri.go:89] found id: ""
	I0927 01:45:13.417403   69333 logs.go:276] 0 containers: []
	W0927 01:45:13.417415   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:45:13.417422   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:45:13.417482   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:45:13.456265   69333 cri.go:89] found id: ""
	I0927 01:45:13.456291   69333 logs.go:276] 0 containers: []
	W0927 01:45:13.456302   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:45:13.456310   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:45:13.456358   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:45:13.502205   69333 cri.go:89] found id: ""
	I0927 01:45:13.502229   69333 logs.go:276] 0 containers: []
	W0927 01:45:13.502240   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:45:13.502247   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:45:13.502310   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:45:13.543617   69333 cri.go:89] found id: ""
	I0927 01:45:13.543642   69333 logs.go:276] 0 containers: []
	W0927 01:45:13.543652   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:45:13.543660   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:45:13.543723   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:45:13.580268   69333 cri.go:89] found id: ""
	I0927 01:45:13.580295   69333 logs.go:276] 0 containers: []
	W0927 01:45:13.580305   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:45:13.580313   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:45:13.580374   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:45:13.616681   69333 cri.go:89] found id: ""
	I0927 01:45:13.616705   69333 logs.go:276] 0 containers: []
	W0927 01:45:13.616713   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:45:13.616718   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:45:13.616765   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:45:13.653389   69333 cri.go:89] found id: ""
	I0927 01:45:13.653412   69333 logs.go:276] 0 containers: []
	W0927 01:45:13.653420   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:45:13.653430   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:45:13.653442   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:45:13.666511   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:45:13.666534   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:45:13.742282   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:45:13.742300   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:45:13.742311   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:45:13.825800   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:45:13.825836   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:45:13.876345   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:45:13.876376   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:45:16.429245   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:45:16.443286   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:45:16.443366   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:45:16.481601   69333 cri.go:89] found id: ""
	I0927 01:45:16.481626   69333 logs.go:276] 0 containers: []
	W0927 01:45:16.481637   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:45:16.481645   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:45:16.481703   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:45:16.513626   69333 cri.go:89] found id: ""
	I0927 01:45:16.513652   69333 logs.go:276] 0 containers: []
	W0927 01:45:16.513659   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:45:16.513665   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:45:16.513710   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:45:16.552531   69333 cri.go:89] found id: ""
	I0927 01:45:16.552565   69333 logs.go:276] 0 containers: []
	W0927 01:45:16.552574   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:45:16.552580   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:45:16.552636   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:45:16.587252   69333 cri.go:89] found id: ""
	I0927 01:45:16.587282   69333 logs.go:276] 0 containers: []
	W0927 01:45:16.587294   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:45:16.587316   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:45:16.587377   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:45:16.628376   69333 cri.go:89] found id: ""
	I0927 01:45:16.628401   69333 logs.go:276] 0 containers: []
	W0927 01:45:16.628410   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:45:16.628417   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:45:16.628482   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:45:16.669603   69333 cri.go:89] found id: ""
	I0927 01:45:16.669639   69333 logs.go:276] 0 containers: []
	W0927 01:45:16.669651   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:45:16.669658   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:45:16.669731   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:45:16.705581   69333 cri.go:89] found id: ""
	I0927 01:45:16.705607   69333 logs.go:276] 0 containers: []
	W0927 01:45:16.705618   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:45:16.705626   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:45:16.705682   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:45:16.740710   69333 cri.go:89] found id: ""
	I0927 01:45:16.740735   69333 logs.go:276] 0 containers: []
	W0927 01:45:16.740743   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:45:16.740759   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:45:16.740771   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:45:16.791025   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:45:16.791060   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:45:16.805990   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:45:16.806023   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:45:16.878313   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:45:16.878331   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:45:16.878346   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:45:16.966228   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:45:16.966269   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:45:14.521852   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:16.522127   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:17.307853   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:19.308018   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:19.512044   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:45:19.526801   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:45:19.526862   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:45:19.562063   69333 cri.go:89] found id: ""
	I0927 01:45:19.562089   69333 logs.go:276] 0 containers: []
	W0927 01:45:19.562098   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:45:19.562104   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:45:19.562159   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:45:19.598600   69333 cri.go:89] found id: ""
	I0927 01:45:19.598626   69333 logs.go:276] 0 containers: []
	W0927 01:45:19.598634   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:45:19.598642   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:45:19.598712   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:45:19.632544   69333 cri.go:89] found id: ""
	I0927 01:45:19.632564   69333 logs.go:276] 0 containers: []
	W0927 01:45:19.632572   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:45:19.632577   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:45:19.632635   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:45:19.671676   69333 cri.go:89] found id: ""
	I0927 01:45:19.671703   69333 logs.go:276] 0 containers: []
	W0927 01:45:19.671713   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:45:19.671721   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:45:19.671779   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:45:19.710321   69333 cri.go:89] found id: ""
	I0927 01:45:19.710351   69333 logs.go:276] 0 containers: []
	W0927 01:45:19.710362   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:45:19.710370   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:45:19.710438   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:45:19.746252   69333 cri.go:89] found id: ""
	I0927 01:45:19.746277   69333 logs.go:276] 0 containers: []
	W0927 01:45:19.746288   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:45:19.746295   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:45:19.746354   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:45:19.783089   69333 cri.go:89] found id: ""
	I0927 01:45:19.783112   69333 logs.go:276] 0 containers: []
	W0927 01:45:19.783121   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:45:19.783126   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:45:19.783189   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:45:19.821090   69333 cri.go:89] found id: ""
	I0927 01:45:19.821117   69333 logs.go:276] 0 containers: []
	W0927 01:45:19.821126   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:45:19.821134   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:45:19.821145   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:45:19.873539   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:45:19.873575   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:45:19.888446   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:45:19.888471   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:45:19.958009   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:45:19.958034   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:45:19.958050   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:45:20.037552   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:45:20.037587   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:45:19.022216   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:21.520606   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:21.808178   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:23.808273   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:22.579288   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:45:22.592789   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:45:22.592846   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:45:22.628148   69333 cri.go:89] found id: ""
	I0927 01:45:22.628178   69333 logs.go:276] 0 containers: []
	W0927 01:45:22.628186   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:45:22.628193   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:45:22.628240   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:45:22.664162   69333 cri.go:89] found id: ""
	I0927 01:45:22.664186   69333 logs.go:276] 0 containers: []
	W0927 01:45:22.664194   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:45:22.664200   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:45:22.664253   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:45:22.702077   69333 cri.go:89] found id: ""
	I0927 01:45:22.702104   69333 logs.go:276] 0 containers: []
	W0927 01:45:22.702115   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:45:22.702123   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:45:22.702183   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:45:22.739657   69333 cri.go:89] found id: ""
	I0927 01:45:22.739690   69333 logs.go:276] 0 containers: []
	W0927 01:45:22.739700   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:45:22.739708   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:45:22.739773   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:45:22.774109   69333 cri.go:89] found id: ""
	I0927 01:45:22.774137   69333 logs.go:276] 0 containers: []
	W0927 01:45:22.774148   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:45:22.774174   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:45:22.774229   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:45:22.809648   69333 cri.go:89] found id: ""
	I0927 01:45:22.809671   69333 logs.go:276] 0 containers: []
	W0927 01:45:22.809678   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:45:22.809684   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:45:22.809729   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:45:22.842598   69333 cri.go:89] found id: ""
	I0927 01:45:22.842620   69333 logs.go:276] 0 containers: []
	W0927 01:45:22.842627   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:45:22.842632   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:45:22.842677   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:45:22.877336   69333 cri.go:89] found id: ""
	I0927 01:45:22.877364   69333 logs.go:276] 0 containers: []
	W0927 01:45:22.877374   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:45:22.877382   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:45:22.877393   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:45:22.930364   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:45:22.930395   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:45:22.944174   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:45:22.944200   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:45:23.025495   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:45:23.025520   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:45:23.025534   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:45:23.101813   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:45:23.101850   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:45:25.644577   69333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:45:25.657820   69333 kubeadm.go:597] duration metric: took 4m3.277962916s to restartPrimaryControlPlane
	W0927 01:45:25.657898   69333 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0927 01:45:25.657929   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0927 01:45:26.111439   69333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 01:45:26.128279   69333 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 01:45:26.138354   69333 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 01:45:26.148116   69333 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 01:45:26.148132   69333 kubeadm.go:157] found existing configuration files:
	
	I0927 01:45:26.148170   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 01:45:26.157965   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 01:45:26.158012   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 01:45:26.168349   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 01:45:26.177624   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 01:45:26.177692   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 01:45:26.187584   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 01:45:26.196800   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 01:45:26.196856   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 01:45:26.205894   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 01:45:26.215316   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 01:45:26.215365   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 01:45:26.224989   69333 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0927 01:45:26.299149   69333 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0927 01:45:26.299261   69333 kubeadm.go:310] [preflight] Running pre-flight checks
	I0927 01:45:26.451113   69333 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0927 01:45:26.451282   69333 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0927 01:45:26.451457   69333 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0927 01:45:26.637960   69333 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0927 01:45:26.640682   69333 out.go:235]   - Generating certificates and keys ...
	I0927 01:45:26.640782   69333 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0927 01:45:26.640865   69333 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0927 01:45:26.640972   69333 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0927 01:45:26.641099   69333 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0927 01:45:26.641233   69333 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0927 01:45:26.641317   69333 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0927 01:45:26.641425   69333 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0927 01:45:26.641525   69333 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0927 01:45:26.641633   69333 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0927 01:45:26.641901   69333 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0927 01:45:26.642000   69333 kubeadm.go:310] [certs] Using the existing "sa" key
	I0927 01:45:26.642080   69333 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0927 01:45:26.782585   69333 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0927 01:45:27.008743   69333 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0927 01:45:27.103701   69333 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0927 01:45:27.217999   69333 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0927 01:45:27.238810   69333 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0927 01:45:27.240191   69333 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0927 01:45:27.240240   69333 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0927 01:45:27.375215   69333 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0927 01:45:23.521301   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:26.020002   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:28.021215   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:26.306744   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:28.308577   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:27.376992   69333 out.go:235]   - Booting up control plane ...
	I0927 01:45:27.377123   69333 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0927 01:45:27.386897   69333 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0927 01:45:27.387959   69333 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0927 01:45:27.388954   69333 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0927 01:45:27.392182   69333 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0927 01:45:30.520717   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:33.019981   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:30.808251   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:33.307139   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:35.020640   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:37.520220   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:35.307871   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:37.808604   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:41.262067   69234 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.225299595s)
	I0927 01:45:41.262142   69234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 01:45:41.294256   69234 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 01:45:41.304403   69234 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 01:45:41.314288   69234 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 01:45:41.314310   69234 kubeadm.go:157] found existing configuration files:
	
	I0927 01:45:41.314357   69234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 01:45:41.323280   69234 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 01:45:41.323335   69234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 01:45:41.332637   69234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 01:45:41.341492   69234 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 01:45:41.341552   69234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 01:45:41.352259   69234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 01:45:41.361190   69234 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 01:45:41.361244   69234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 01:45:41.370863   69234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 01:45:41.379674   69234 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 01:45:41.379735   69234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
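(Editor's aside.) The stale-config pass above amounts to: grep each kubeconfig under /etc/kubernetes for the expected control-plane endpoint, and delete the file on a miss so that the following "kubeadm init" regenerates it. A minimal Go sketch of that loop follows; the endpoint and file paths are taken from the log lines above, while the function name cleanupStaleKubeconfigs and the program itself are invented for illustration and are not minikube's actual code.

// stale_config_sketch.go - illustrative only; not minikube's implementation.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// cleanupStaleKubeconfigs mirrors the log above: for each kubeconfig, grep for the
// expected API endpoint; if the file is missing or does not reference it, remove it
// so that a subsequent "kubeadm init" writes a fresh copy.
func cleanupStaleKubeconfigs(endpoint string) {
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, conf := range confs {
		// grep exits non-zero when the file is absent (status 2) or has no match (status 1);
		// both cases are treated as "stale or missing" in the log above.
		if err := exec.Command("sudo", "grep", endpoint, conf).Run(); err != nil {
			fmt.Fprintf(os.Stderr, "%q may not be in %s - will remove: %v\n", endpoint, conf, err)
			_ = exec.Command("sudo", "rm", "-f", conf).Run()
		}
	}
}

func main() {
	cleanupStaleKubeconfigs("https://control-plane.minikube.internal:8443")
}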
	I0927 01:45:41.389169   69234 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0927 01:45:41.434391   69234 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0927 01:45:41.434565   69234 kubeadm.go:310] [preflight] Running pre-flight checks
	I0927 01:45:41.537712   69234 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0927 01:45:41.537813   69234 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0927 01:45:41.537951   69234 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0927 01:45:41.546906   69234 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0927 01:45:41.548799   69234 out.go:235]   - Generating certificates and keys ...
	I0927 01:45:41.548882   69234 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0927 01:45:41.548959   69234 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0927 01:45:41.549049   69234 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0927 01:45:41.549133   69234 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0927 01:45:41.549239   69234 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0927 01:45:41.549328   69234 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0927 01:45:41.549433   69234 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0927 01:45:41.549531   69234 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0927 01:45:41.549619   69234 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0927 01:45:41.549691   69234 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0927 01:45:41.549741   69234 kubeadm.go:310] [certs] Using the existing "sa" key
	I0927 01:45:41.549813   69234 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0927 01:45:41.594579   69234 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0927 01:45:41.703970   69234 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0927 01:45:41.813013   69234 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0927 01:45:41.875564   69234 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0927 01:45:42.025627   69234 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0927 01:45:42.026325   69234 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0927 01:45:42.028784   69234 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0927 01:45:39.521118   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:42.020563   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:40.307764   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:42.307974   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:44.808238   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:42.030464   69234 out.go:235]   - Booting up control plane ...
	I0927 01:45:42.030566   69234 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0927 01:45:42.030674   69234 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0927 01:45:42.031152   69234 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0927 01:45:42.050207   69234 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0927 01:45:42.058709   69234 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0927 01:45:42.058766   69234 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0927 01:45:42.192498   69234 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0927 01:45:42.192628   69234 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0927 01:45:42.694670   69234 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.189114ms
	I0927 01:45:42.694812   69234 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0927 01:45:48.195975   69234 kubeadm.go:310] [api-check] The API server is healthy after 5.501110293s
	I0927 01:45:48.210406   69234 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0927 01:45:48.231678   69234 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0927 01:45:48.257669   69234 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0927 01:45:48.257859   69234 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-245911 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0927 01:45:48.271429   69234 kubeadm.go:310] [bootstrap-token] Using token: bqds0t.3lt1vhl3zjbrkom6
	I0927 01:45:44.021019   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:46.520158   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:48.272667   69234 out.go:235]   - Configuring RBAC rules ...
	I0927 01:45:48.272775   69234 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0927 01:45:48.278773   69234 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0927 01:45:48.290868   69234 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0927 01:45:48.297879   69234 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0927 01:45:48.302011   69234 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0927 01:45:48.306217   69234 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0927 01:45:48.604161   69234 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0927 01:45:49.041505   69234 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0927 01:45:49.604127   69234 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0927 01:45:49.604867   69234 kubeadm.go:310] 
	I0927 01:45:49.604981   69234 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0927 01:45:49.605008   69234 kubeadm.go:310] 
	I0927 01:45:49.605136   69234 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0927 01:45:49.605147   69234 kubeadm.go:310] 
	I0927 01:45:49.605188   69234 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0927 01:45:49.605266   69234 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0927 01:45:49.605363   69234 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0927 01:45:49.605373   69234 kubeadm.go:310] 
	I0927 01:45:49.605446   69234 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0927 01:45:49.605455   69234 kubeadm.go:310] 
	I0927 01:45:49.605524   69234 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0927 01:45:49.605537   69234 kubeadm.go:310] 
	I0927 01:45:49.605612   69234 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0927 01:45:49.605725   69234 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0927 01:45:49.605826   69234 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0927 01:45:49.605836   69234 kubeadm.go:310] 
	I0927 01:45:49.605913   69234 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0927 01:45:49.606010   69234 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0927 01:45:49.606032   69234 kubeadm.go:310] 
	I0927 01:45:49.606130   69234 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token bqds0t.3lt1vhl3zjbrkom6 \
	I0927 01:45:49.606252   69234 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e8f2b64245b2533f2f6907f851e4fd61c8df67374e05cb5be25be73a61920f8e \
	I0927 01:45:49.606276   69234 kubeadm.go:310] 	--control-plane 
	I0927 01:45:49.606282   69234 kubeadm.go:310] 
	I0927 01:45:49.606404   69234 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0927 01:45:49.606421   69234 kubeadm.go:310] 
	I0927 01:45:49.606546   69234 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token bqds0t.3lt1vhl3zjbrkom6 \
	I0927 01:45:49.606692   69234 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e8f2b64245b2533f2f6907f851e4fd61c8df67374e05cb5be25be73a61920f8e 
	I0927 01:45:49.607952   69234 kubeadm.go:310] W0927 01:45:41.410128    2534 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 01:45:49.608322   69234 kubeadm.go:310] W0927 01:45:41.412009    2534 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 01:45:49.608494   69234 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0927 01:45:49.608518   69234 cni.go:84] Creating CNI manager for ""
	I0927 01:45:49.608527   69234 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:45:49.610175   69234 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0927 01:45:47.307006   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:49.307374   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:49.611562   69234 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0927 01:45:49.622683   69234 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
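(Editor's aside.) The two commands above install minikube's bridge CNI config: create /etc/cni/net.d and copy a 496-byte conflist named 1-k8s.conflist. The log does not show the file's contents, so the sketch below writes a generic bridge-plus-portmap conflist purely for illustration; every field value in it (cniVersion, subnet, plugin options) is an assumption, not the file minikube actually copied.

// cni_bridge_sketch.go - illustrative only; the real 1-k8s.conflist contents are not shown in the log.
package main

import "os"

// A generic bridge CNI conflist of the kind a bridge CNI manager installs.
// Field values here are assumptions, not the actual file minikube copied.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	// Equivalent of: sudo mkdir -p /etc/cni/net.d, then write the conflist there.
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}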
	I0927 01:45:49.642326   69234 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0927 01:45:49.642366   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:45:49.642393   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-245911 minikube.k8s.io/updated_at=2024_09_27T01_45_49_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625 minikube.k8s.io/name=embed-certs-245911 minikube.k8s.io/primary=true
	I0927 01:45:49.677602   69234 ops.go:34] apiserver oom_adj: -16
	I0927 01:45:49.854320   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:45:50.355392   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:45:48.520718   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:50.520908   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:53.020638   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:50.854364   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:45:51.355074   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:45:51.855077   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:45:52.354509   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:45:52.855229   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:45:53.355204   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:45:53.854829   69234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:45:54.066909   69234 kubeadm.go:1113] duration metric: took 4.424595735s to wait for elevateKubeSystemPrivileges
	I0927 01:45:54.066954   69234 kubeadm.go:394] duration metric: took 4m55.454404762s to StartCluster
	I0927 01:45:54.066978   69234 settings.go:142] acquiring lock: {Name:mk5dca3ab86dd3a71947d9d84c3d32131258c6f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:45:54.067071   69234 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 01:45:54.069732   69234 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/kubeconfig: {Name:mke01ed683bdb96463571316956510763878395f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:45:54.070048   69234 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.158 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 01:45:54.070126   69234 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0927 01:45:54.070235   69234 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-245911"
	I0927 01:45:54.070257   69234 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-245911"
	I0927 01:45:54.070261   69234 addons.go:69] Setting default-storageclass=true in profile "embed-certs-245911"
	I0927 01:45:54.070270   69234 config.go:182] Loaded profile config "embed-certs-245911": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 01:45:54.070270   69234 addons.go:69] Setting metrics-server=true in profile "embed-certs-245911"
	I0927 01:45:54.070286   69234 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-245911"
	I0927 01:45:54.070296   69234 addons.go:234] Setting addon metrics-server=true in "embed-certs-245911"
	W0927 01:45:54.070305   69234 addons.go:243] addon metrics-server should already be in state true
	W0927 01:45:54.070266   69234 addons.go:243] addon storage-provisioner should already be in state true
	I0927 01:45:54.070339   69234 host.go:66] Checking if "embed-certs-245911" exists ...
	I0927 01:45:54.070339   69234 host.go:66] Checking if "embed-certs-245911" exists ...
	I0927 01:45:54.070750   69234 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:45:54.070790   69234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:45:54.070753   69234 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:45:54.070850   69234 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:45:54.070889   69234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:45:54.070936   69234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:45:54.071693   69234 out.go:177] * Verifying Kubernetes components...
	I0927 01:45:54.073034   69234 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:45:54.087559   69234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38159
	I0927 01:45:54.087567   69234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46827
	I0927 01:45:54.088061   69234 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:45:54.088074   69234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37787
	I0927 01:45:54.088183   69234 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:45:54.088412   69234 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:45:54.088551   69234 main.go:141] libmachine: Using API Version  1
	I0927 01:45:54.088573   69234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:45:54.088635   69234 main.go:141] libmachine: Using API Version  1
	I0927 01:45:54.088655   69234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:45:54.088852   69234 main.go:141] libmachine: Using API Version  1
	I0927 01:45:54.088874   69234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:45:54.088929   69234 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:45:54.089023   69234 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:45:54.089131   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetState
	I0927 01:45:54.089193   69234 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:45:54.089585   69234 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:45:54.089610   69234 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:45:54.089627   69234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:45:54.089639   69234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:45:54.092683   69234 addons.go:234] Setting addon default-storageclass=true in "embed-certs-245911"
	W0927 01:45:54.092705   69234 addons.go:243] addon default-storageclass should already be in state true
	I0927 01:45:54.092729   69234 host.go:66] Checking if "embed-certs-245911" exists ...
	I0927 01:45:54.093065   69234 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:45:54.093102   69234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:45:54.106496   69234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40273
	I0927 01:45:54.106952   69234 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:45:54.107486   69234 main.go:141] libmachine: Using API Version  1
	I0927 01:45:54.107513   69234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:45:54.108098   69234 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:45:54.108297   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetState
	I0927 01:45:54.109993   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	I0927 01:45:54.110532   69234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35519
	I0927 01:45:54.111066   69234 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:45:54.111688   69234 main.go:141] libmachine: Using API Version  1
	I0927 01:45:54.111708   69234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:45:54.111909   69234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35983
	I0927 01:45:54.112156   69234 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:45:54.112338   69234 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:45:54.112740   69234 main.go:141] libmachine: Using API Version  1
	I0927 01:45:54.112751   69234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:45:54.112832   69234 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:45:54.112953   69234 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:45:54.112987   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetState
	I0927 01:45:54.113345   69234 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:45:54.113372   69234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:45:54.114353   69234 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 01:45:54.114372   69234 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0927 01:45:54.114392   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:45:54.114596   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	I0927 01:45:54.116175   69234 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0927 01:45:51.806801   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:53.808476   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:54.117315   69234 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0927 01:45:54.117326   69234 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0927 01:45:54.117341   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:45:54.120242   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:45:54.120881   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:45:54.120903   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:45:54.121161   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:45:54.121224   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:45:54.121452   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:45:54.121658   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:45:54.121747   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:45:54.121944   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:45:54.121960   69234 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/embed-certs-245911/id_rsa Username:docker}
	I0927 01:45:54.121677   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:45:54.122386   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:45:54.122518   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:45:54.122695   69234 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/embed-certs-245911/id_rsa Username:docker}
	I0927 01:45:54.135920   69234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37351
	I0927 01:45:54.136247   69234 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:45:54.136682   69234 main.go:141] libmachine: Using API Version  1
	I0927 01:45:54.136696   69234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:45:54.136971   69234 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:45:54.137163   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetState
	I0927 01:45:54.138640   69234 main.go:141] libmachine: (embed-certs-245911) Calling .DriverName
	I0927 01:45:54.138903   69234 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0927 01:45:54.138919   69234 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0927 01:45:54.138936   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHHostname
	I0927 01:45:54.141420   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:45:54.141786   69234 main.go:141] libmachine: (embed-certs-245911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:42:a3", ip: ""} in network mk-embed-certs-245911: {Iface:virbr4 ExpiryTime:2024-09-27 02:40:43 +0000 UTC Type:0 Mac:52:54:00:bd:42:a3 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:embed-certs-245911 Clientid:01:52:54:00:bd:42:a3}
	I0927 01:45:54.141803   69234 main.go:141] libmachine: (embed-certs-245911) DBG | domain embed-certs-245911 has defined IP address 192.168.39.158 and MAC address 52:54:00:bd:42:a3 in network mk-embed-certs-245911
	I0927 01:45:54.141966   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHPort
	I0927 01:45:54.142132   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHKeyPath
	I0927 01:45:54.142235   69234 main.go:141] libmachine: (embed-certs-245911) Calling .GetSSHUsername
	I0927 01:45:54.142308   69234 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/embed-certs-245911/id_rsa Username:docker}
	I0927 01:45:54.325790   69234 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 01:45:54.375616   69234 node_ready.go:35] waiting up to 6m0s for node "embed-certs-245911" to be "Ready" ...
	I0927 01:45:54.386626   69234 node_ready.go:49] node "embed-certs-245911" has status "Ready":"True"
	I0927 01:45:54.386646   69234 node_ready.go:38] duration metric: took 10.995073ms for node "embed-certs-245911" to be "Ready" ...
	I0927 01:45:54.386654   69234 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:45:54.394605   69234 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-t4mxw" in "kube-system" namespace to be "Ready" ...
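(Editor's aside.) The pod_ready lines throughout this log report whether a pod's PodReady condition is True. A minimal client-go sketch of that check is below, assuming a standard kubeconfig on disk; isPodReady and the example pod name are illustrative, and this is not minikube's pod_ready.go.

// pod_ready_sketch.go - illustrative only.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the named pod has the PodReady condition set to True,
// which is what the "has status \"Ready\":\"False\"" lines above are polling for.
func isPodReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ready, err := isPodReady(cs, "kube-system", "coredns-7c65d6cfc9-t4mxw")
	fmt.Println(ready, err)
}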
	I0927 01:45:54.458245   69234 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 01:45:54.501624   69234 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0927 01:45:54.501655   69234 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0927 01:45:54.508690   69234 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0927 01:45:54.548168   69234 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0927 01:45:54.548194   69234 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0927 01:45:54.615565   69234 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 01:45:54.615591   69234 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0927 01:45:54.655649   69234 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 01:45:55.488749   69234 main.go:141] libmachine: Making call to close driver server
	I0927 01:45:55.488849   69234 main.go:141] libmachine: (embed-certs-245911) Calling .Close
	I0927 01:45:55.488803   69234 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.030519069s)
	I0927 01:45:55.488934   69234 main.go:141] libmachine: Making call to close driver server
	I0927 01:45:55.488942   69234 main.go:141] libmachine: (embed-certs-245911) Calling .Close
	I0927 01:45:55.489266   69234 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:45:55.489282   69234 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:45:55.489290   69234 main.go:141] libmachine: Making call to close driver server
	I0927 01:45:55.489298   69234 main.go:141] libmachine: (embed-certs-245911) Calling .Close
	I0927 01:45:55.489377   69234 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:45:55.489393   69234 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:45:55.489401   69234 main.go:141] libmachine: Making call to close driver server
	I0927 01:45:55.489409   69234 main.go:141] libmachine: (embed-certs-245911) Calling .Close
	I0927 01:45:55.489511   69234 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:45:55.489528   69234 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:45:55.489540   69234 main.go:141] libmachine: (embed-certs-245911) DBG | Closing plugin on server side
	I0927 01:45:55.491047   69234 main.go:141] libmachine: (embed-certs-245911) DBG | Closing plugin on server side
	I0927 01:45:55.491082   69234 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:45:55.491093   69234 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:45:55.535220   69234 main.go:141] libmachine: Making call to close driver server
	I0927 01:45:55.535240   69234 main.go:141] libmachine: (embed-certs-245911) Calling .Close
	I0927 01:45:55.535604   69234 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:45:55.535625   69234 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:45:55.627642   69234 main.go:141] libmachine: Making call to close driver server
	I0927 01:45:55.627663   69234 main.go:141] libmachine: (embed-certs-245911) Calling .Close
	I0927 01:45:55.628020   69234 main.go:141] libmachine: (embed-certs-245911) DBG | Closing plugin on server side
	I0927 01:45:55.628025   69234 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:45:55.628047   69234 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:45:55.628055   69234 main.go:141] libmachine: Making call to close driver server
	I0927 01:45:55.628062   69234 main.go:141] libmachine: (embed-certs-245911) Calling .Close
	I0927 01:45:55.628294   69234 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:45:55.628311   69234 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:45:55.628322   69234 addons.go:475] Verifying addon metrics-server=true in "embed-certs-245911"
	I0927 01:45:55.629802   69234 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0927 01:45:55.022054   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:57.520749   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:56.307903   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:58.807972   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:55.631245   69234 addons.go:510] duration metric: took 1.561128577s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0927 01:45:56.401813   69234 pod_ready.go:103] pod "coredns-7c65d6cfc9-t4mxw" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:58.900688   69234 pod_ready.go:103] pod "coredns-7c65d6cfc9-t4mxw" in "kube-system" namespace has status "Ready":"False"
	I0927 01:45:59.521353   69534 pod_ready.go:103] pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:00.014813   69534 pod_ready.go:82] duration metric: took 4m0.000584515s for pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace to be "Ready" ...
	E0927 01:46:00.014858   69534 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-n9nsg" in "kube-system" namespace to be "Ready" (will not retry!)
	I0927 01:46:00.014878   69534 pod_ready.go:39] duration metric: took 4m13.043107791s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:46:00.014903   69534 kubeadm.go:597] duration metric: took 4m20.409702758s to restartPrimaryControlPlane
	W0927 01:46:00.014956   69534 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0927 01:46:00.014980   69534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0927 01:46:00.808408   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:02.808672   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:00.901714   69234 pod_ready.go:103] pod "coredns-7c65d6cfc9-t4mxw" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:02.902242   69234 pod_ready.go:103] pod "coredns-7c65d6cfc9-t4mxw" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:03.401910   69234 pod_ready.go:93] pod "coredns-7c65d6cfc9-t4mxw" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:03.401936   69234 pod_ready.go:82] duration metric: took 9.007296678s for pod "coredns-7c65d6cfc9-t4mxw" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:03.401948   69234 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-zp5f2" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:03.908874   69234 pod_ready.go:93] pod "coredns-7c65d6cfc9-zp5f2" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:03.908896   69234 pod_ready.go:82] duration metric: took 506.941437ms for pod "coredns-7c65d6cfc9-zp5f2" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:03.908918   69234 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:03.914117   69234 pod_ready.go:93] pod "etcd-embed-certs-245911" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:03.914135   69234 pod_ready.go:82] duration metric: took 5.210078ms for pod "etcd-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:03.914142   69234 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:03.918778   69234 pod_ready.go:93] pod "kube-apiserver-embed-certs-245911" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:03.918801   69234 pod_ready.go:82] duration metric: took 4.651828ms for pod "kube-apiserver-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:03.918812   69234 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:03.923979   69234 pod_ready.go:93] pod "kube-controller-manager-embed-certs-245911" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:03.923996   69234 pod_ready.go:82] duration metric: took 5.176348ms for pod "kube-controller-manager-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:03.924004   69234 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5l299" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:04.199586   69234 pod_ready.go:93] pod "kube-proxy-5l299" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:04.199612   69234 pod_ready.go:82] duration metric: took 275.601068ms for pod "kube-proxy-5l299" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:04.199621   69234 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:04.598852   69234 pod_ready.go:93] pod "kube-scheduler-embed-certs-245911" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:04.598880   69234 pod_ready.go:82] duration metric: took 399.251298ms for pod "kube-scheduler-embed-certs-245911" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:04.598890   69234 pod_ready.go:39] duration metric: took 10.212226661s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:46:04.598905   69234 api_server.go:52] waiting for apiserver process to appear ...
	I0927 01:46:04.598962   69234 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:46:04.615194   69234 api_server.go:72] duration metric: took 10.545103977s to wait for apiserver process to appear ...
	I0927 01:46:04.615225   69234 api_server.go:88] waiting for apiserver healthz status ...
	I0927 01:46:04.615248   69234 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0927 01:46:04.621164   69234 api_server.go:279] https://192.168.39.158:8443/healthz returned 200:
	ok
	I0927 01:46:04.622001   69234 api_server.go:141] control plane version: v1.31.1
	I0927 01:46:04.622022   69234 api_server.go:131] duration metric: took 6.789717ms to wait for apiserver health ...
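(Editor's aside.) The healthz probe above is an HTTPS GET against the apiserver that succeeds once it returns 200 with body "ok". A rough Go sketch follows; minikube's api_server.go authenticates with the cluster's client certificates, whereas this sketch skips TLS verification for brevity, and the URL is copied from the log.

// healthz_sketch.go - illustrative only.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.158:8443/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect 200 and "ok", as in the log above
}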
	I0927 01:46:04.622032   69234 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 01:46:04.802641   69234 system_pods.go:59] 9 kube-system pods found
	I0927 01:46:04.802674   69234 system_pods.go:61] "coredns-7c65d6cfc9-t4mxw" [b3f9faa4-be80-40bf-9080-363fcbf3f084] Running
	I0927 01:46:04.802681   69234 system_pods.go:61] "coredns-7c65d6cfc9-zp5f2" [0829b4a4-1686-4f22-8368-65e3897604b0] Running
	I0927 01:46:04.802687   69234 system_pods.go:61] "etcd-embed-certs-245911" [8b1eb68b-4d88-4af3-a5df-3a6490d9d376] Running
	I0927 01:46:04.802693   69234 system_pods.go:61] "kube-apiserver-embed-certs-245911" [05ddc1b7-f7a9-4201-8d2e-2eb57d4e6731] Running
	I0927 01:46:04.802699   69234 system_pods.go:61] "kube-controller-manager-embed-certs-245911" [71c7cdfd-5e67-4876-9c00-31fff46c2b37] Running
	I0927 01:46:04.802703   69234 system_pods.go:61] "kube-proxy-5l299" [768ae3f5-2ebd-4db7-aa36-81c4f033d685] Running
	I0927 01:46:04.802708   69234 system_pods.go:61] "kube-scheduler-embed-certs-245911" [4111a186-de42-4004-bcdc-3e445142fca0] Running
	I0927 01:46:04.802717   69234 system_pods.go:61] "metrics-server-6867b74b74-k28wz" [1d369542-c088-4099-aa6f-9d3158f78f25] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 01:46:04.802722   69234 system_pods.go:61] "storage-provisioner" [0c48d125-370c-44a1-9ede-536881b40d57] Running
	I0927 01:46:04.802735   69234 system_pods.go:74] duration metric: took 180.694209ms to wait for pod list to return data ...
	I0927 01:46:04.802747   69234 default_sa.go:34] waiting for default service account to be created ...
	I0927 01:46:04.999578   69234 default_sa.go:45] found service account: "default"
	I0927 01:46:04.999603   69234 default_sa.go:55] duration metric: took 196.845725ms for default service account to be created ...
	I0927 01:46:04.999612   69234 system_pods.go:116] waiting for k8s-apps to be running ...
	I0927 01:46:05.201201   69234 system_pods.go:86] 9 kube-system pods found
	I0927 01:46:05.201228   69234 system_pods.go:89] "coredns-7c65d6cfc9-t4mxw" [b3f9faa4-be80-40bf-9080-363fcbf3f084] Running
	I0927 01:46:05.201233   69234 system_pods.go:89] "coredns-7c65d6cfc9-zp5f2" [0829b4a4-1686-4f22-8368-65e3897604b0] Running
	I0927 01:46:05.201237   69234 system_pods.go:89] "etcd-embed-certs-245911" [8b1eb68b-4d88-4af3-a5df-3a6490d9d376] Running
	I0927 01:46:05.201241   69234 system_pods.go:89] "kube-apiserver-embed-certs-245911" [05ddc1b7-f7a9-4201-8d2e-2eb57d4e6731] Running
	I0927 01:46:05.201244   69234 system_pods.go:89] "kube-controller-manager-embed-certs-245911" [71c7cdfd-5e67-4876-9c00-31fff46c2b37] Running
	I0927 01:46:05.201248   69234 system_pods.go:89] "kube-proxy-5l299" [768ae3f5-2ebd-4db7-aa36-81c4f033d685] Running
	I0927 01:46:05.201251   69234 system_pods.go:89] "kube-scheduler-embed-certs-245911" [4111a186-de42-4004-bcdc-3e445142fca0] Running
	I0927 01:46:05.201256   69234 system_pods.go:89] "metrics-server-6867b74b74-k28wz" [1d369542-c088-4099-aa6f-9d3158f78f25] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 01:46:05.201260   69234 system_pods.go:89] "storage-provisioner" [0c48d125-370c-44a1-9ede-536881b40d57] Running
	I0927 01:46:05.201268   69234 system_pods.go:126] duration metric: took 201.651734ms to wait for k8s-apps to be running ...
	I0927 01:46:05.201275   69234 system_svc.go:44] waiting for kubelet service to be running ....
	I0927 01:46:05.201315   69234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 01:46:05.216216   69234 system_svc.go:56] duration metric: took 14.930697ms WaitForService to wait for kubelet
	I0927 01:46:05.216248   69234 kubeadm.go:582] duration metric: took 11.146166369s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 01:46:05.216271   69234 node_conditions.go:102] verifying NodePressure condition ...
	I0927 01:46:05.400667   69234 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 01:46:05.400695   69234 node_conditions.go:123] node cpu capacity is 2
	I0927 01:46:05.400708   69234 node_conditions.go:105] duration metric: took 184.432904ms to run NodePressure ...
	I0927 01:46:05.400719   69234 start.go:241] waiting for startup goroutines ...
	I0927 01:46:05.400729   69234 start.go:246] waiting for cluster config update ...
	I0927 01:46:05.400743   69234 start.go:255] writing updated cluster config ...
	I0927 01:46:05.401134   69234 ssh_runner.go:195] Run: rm -f paused
	I0927 01:46:05.452606   69234 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0927 01:46:05.454631   69234 out.go:177] * Done! kubectl is now configured to use "embed-certs-245911" cluster and "default" namespace by default
	I0927 01:46:05.307371   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:07.807981   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:07.393548   69333 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0927 01:46:07.394304   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:46:07.394505   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:46:10.307311   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:12.308085   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:14.308664   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:12.395176   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:46:12.395434   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:46:16.807116   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:18.807652   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:21.307348   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:23.807597   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:26.304067   69534 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.289064717s)
	I0927 01:46:26.304150   69534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 01:46:26.341383   69534 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 01:46:26.365985   69534 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 01:46:26.382056   69534 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 01:46:26.382082   69534 kubeadm.go:157] found existing configuration files:
	
	I0927 01:46:26.382133   69534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0927 01:46:26.405820   69534 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 01:46:26.405881   69534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 01:46:26.416355   69534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0927 01:46:26.426710   69534 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 01:46:26.426759   69534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 01:46:26.438110   69534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0927 01:46:26.448631   69534 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 01:46:26.448691   69534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 01:46:26.458453   69534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0927 01:46:26.467677   69534 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 01:46:26.467724   69534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 01:46:26.478333   69534 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0927 01:46:26.528377   69534 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0927 01:46:26.528432   69534 kubeadm.go:310] [preflight] Running pre-flight checks
	I0927 01:46:26.653799   69534 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0927 01:46:26.653904   69534 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0927 01:46:26.654029   69534 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0927 01:46:26.666791   69534 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0927 01:46:22.395858   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:46:22.396073   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:46:26.668660   69534 out.go:235]   - Generating certificates and keys ...
	I0927 01:46:26.668739   69534 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0927 01:46:26.668803   69534 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0927 01:46:26.668918   69534 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0927 01:46:26.669012   69534 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0927 01:46:26.669103   69534 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0927 01:46:26.669178   69534 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0927 01:46:26.669308   69534 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0927 01:46:26.669628   69534 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0927 01:46:26.669868   69534 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0927 01:46:26.670086   69534 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0927 01:46:26.670284   69534 kubeadm.go:310] [certs] Using the existing "sa" key
	I0927 01:46:26.670395   69534 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0927 01:46:26.885345   69534 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0927 01:46:27.061416   69534 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0927 01:46:27.347409   69534 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0927 01:46:27.477340   69534 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0927 01:46:27.607326   69534 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0927 01:46:27.607882   69534 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0927 01:46:27.612459   69534 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0927 01:46:27.614167   69534 out.go:235]   - Booting up control plane ...
	I0927 01:46:27.614285   69534 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0927 01:46:27.614388   69534 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0927 01:46:27.614482   69534 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0927 01:46:27.635734   69534 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0927 01:46:27.642550   69534 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0927 01:46:27.642634   69534 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0927 01:46:27.778616   69534 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0927 01:46:27.778763   69534 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0927 01:46:28.280057   69534 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.328597ms
	I0927 01:46:28.280185   69534 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0927 01:46:25.808311   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:28.307033   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:33.781107   69534 kubeadm.go:310] [api-check] The API server is healthy after 5.501552407s
	I0927 01:46:33.796672   69534 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0927 01:46:33.809900   69534 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0927 01:46:33.845968   69534 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0927 01:46:33.846194   69534 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-368295 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0927 01:46:33.862294   69534 kubeadm.go:310] [bootstrap-token] Using token: qmzafx.lhyo0l65zryygr2x
	I0927 01:46:30.308436   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:32.809032   68676 pod_ready.go:103] pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:32.809057   68676 pod_ready.go:82] duration metric: took 4m0.007962887s for pod "metrics-server-6867b74b74-cc9pp" in "kube-system" namespace to be "Ready" ...
	E0927 01:46:32.809066   68676 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0927 01:46:32.809075   68676 pod_ready.go:39] duration metric: took 4m5.043455674s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:46:32.809088   68676 api_server.go:52] waiting for apiserver process to appear ...
	I0927 01:46:32.809115   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:46:32.809175   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:46:32.871610   68676 cri.go:89] found id: "d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef"
	I0927 01:46:32.871629   68676 cri.go:89] found id: ""
	I0927 01:46:32.871636   68676 logs.go:276] 1 containers: [d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef]
	I0927 01:46:32.871682   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:32.878223   68676 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:46:32.878296   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:46:32.925139   68676 cri.go:89] found id: "703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0"
	I0927 01:46:32.925173   68676 cri.go:89] found id: ""
	I0927 01:46:32.925182   68676 logs.go:276] 1 containers: [703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0]
	I0927 01:46:32.925238   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:32.929961   68676 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:46:32.930023   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:46:32.969777   68676 cri.go:89] found id: "5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0"
	I0927 01:46:32.969799   68676 cri.go:89] found id: ""
	I0927 01:46:32.969807   68676 logs.go:276] 1 containers: [5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0]
	I0927 01:46:32.969854   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:32.979003   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:46:32.979088   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:46:33.029458   68676 cri.go:89] found id: "22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05"
	I0927 01:46:33.029532   68676 cri.go:89] found id: ""
	I0927 01:46:33.029546   68676 logs.go:276] 1 containers: [22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05]
	I0927 01:46:33.029609   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:33.036703   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:46:33.036777   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:46:33.085041   68676 cri.go:89] found id: "d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f"
	I0927 01:46:33.085058   68676 cri.go:89] found id: ""
	I0927 01:46:33.085065   68676 logs.go:276] 1 containers: [d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f]
	I0927 01:46:33.085125   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:33.090305   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:46:33.090372   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:46:33.136837   68676 cri.go:89] found id: "56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647"
	I0927 01:46:33.136857   68676 cri.go:89] found id: ""
	I0927 01:46:33.136865   68676 logs.go:276] 1 containers: [56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647]
	I0927 01:46:33.136913   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:33.141483   68676 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:46:33.141543   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:46:33.182913   68676 cri.go:89] found id: ""
	I0927 01:46:33.182939   68676 logs.go:276] 0 containers: []
	W0927 01:46:33.182950   68676 logs.go:278] No container was found matching "kindnet"
	I0927 01:46:33.182956   68676 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0927 01:46:33.183002   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0927 01:46:33.237031   68676 cri.go:89] found id: "8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f"
	I0927 01:46:33.237055   68676 cri.go:89] found id: "074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c"
	I0927 01:46:33.237061   68676 cri.go:89] found id: ""
	I0927 01:46:33.237070   68676 logs.go:276] 2 containers: [8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f 074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c]
	I0927 01:46:33.237121   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:33.241969   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:33.246733   68676 logs.go:123] Gathering logs for kube-apiserver [d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef] ...
	I0927 01:46:33.246760   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef"
	I0927 01:46:33.294096   68676 logs.go:123] Gathering logs for kube-controller-manager [56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647] ...
	I0927 01:46:33.294128   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647"
	I0927 01:46:33.357981   68676 logs.go:123] Gathering logs for storage-provisioner [074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c] ...
	I0927 01:46:33.358029   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c"
	I0927 01:46:33.397465   68676 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:46:33.397500   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:46:33.922831   68676 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:46:33.922869   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 01:46:34.067117   68676 logs.go:123] Gathering logs for dmesg ...
	I0927 01:46:34.067152   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:46:34.082191   68676 logs.go:123] Gathering logs for etcd [703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0] ...
	I0927 01:46:34.082218   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0"
	I0927 01:46:34.126416   68676 logs.go:123] Gathering logs for coredns [5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0] ...
	I0927 01:46:34.126454   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0"
	I0927 01:46:34.166714   68676 logs.go:123] Gathering logs for kube-scheduler [22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05] ...
	I0927 01:46:34.166744   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05"
	I0927 01:46:34.206601   68676 logs.go:123] Gathering logs for kube-proxy [d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f] ...
	I0927 01:46:34.206642   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f"
	I0927 01:46:34.254352   68676 logs.go:123] Gathering logs for storage-provisioner [8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f] ...
	I0927 01:46:34.254383   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f"
	I0927 01:46:34.293318   68676 logs.go:123] Gathering logs for container status ...
	I0927 01:46:34.293347   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:46:34.340365   68676 logs.go:123] Gathering logs for kubelet ...
	I0927 01:46:34.340398   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:46:33.863782   69534 out.go:235]   - Configuring RBAC rules ...
	I0927 01:46:33.863922   69534 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0927 01:46:33.871841   69534 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0927 01:46:33.880047   69534 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0927 01:46:33.884688   69534 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0927 01:46:33.892057   69534 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0927 01:46:33.895787   69534 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0927 01:46:34.190553   69534 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0927 01:46:34.619922   69534 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0927 01:46:35.188452   69534 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0927 01:46:35.189552   69534 kubeadm.go:310] 
	I0927 01:46:35.189661   69534 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0927 01:46:35.189683   69534 kubeadm.go:310] 
	I0927 01:46:35.189791   69534 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0927 01:46:35.189806   69534 kubeadm.go:310] 
	I0927 01:46:35.189845   69534 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0927 01:46:35.189925   69534 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0927 01:46:35.190002   69534 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0927 01:46:35.190016   69534 kubeadm.go:310] 
	I0927 01:46:35.190095   69534 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0927 01:46:35.190104   69534 kubeadm.go:310] 
	I0927 01:46:35.190181   69534 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0927 01:46:35.190193   69534 kubeadm.go:310] 
	I0927 01:46:35.190264   69534 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0927 01:46:35.190387   69534 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0927 01:46:35.190484   69534 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0927 01:46:35.190498   69534 kubeadm.go:310] 
	I0927 01:46:35.190593   69534 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0927 01:46:35.190681   69534 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0927 01:46:35.190691   69534 kubeadm.go:310] 
	I0927 01:46:35.190793   69534 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token qmzafx.lhyo0l65zryygr2x \
	I0927 01:46:35.190948   69534 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e8f2b64245b2533f2f6907f851e4fd61c8df67374e05cb5be25be73a61920f8e \
	I0927 01:46:35.191002   69534 kubeadm.go:310] 	--control-plane 
	I0927 01:46:35.191021   69534 kubeadm.go:310] 
	I0927 01:46:35.191134   69534 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0927 01:46:35.191155   69534 kubeadm.go:310] 
	I0927 01:46:35.191281   69534 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token qmzafx.lhyo0l65zryygr2x \
	I0927 01:46:35.191427   69534 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e8f2b64245b2533f2f6907f851e4fd61c8df67374e05cb5be25be73a61920f8e 
	I0927 01:46:35.192564   69534 kubeadm.go:310] W0927 01:46:26.480521    2541 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 01:46:35.192905   69534 kubeadm.go:310] W0927 01:46:26.481198    2541 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 01:46:35.193078   69534 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0927 01:46:35.193093   69534 cni.go:84] Creating CNI manager for ""
	I0927 01:46:35.193102   69534 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 01:46:35.194656   69534 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0927 01:46:35.195835   69534 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0927 01:46:35.207162   69534 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0927 01:46:35.225999   69534 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0927 01:46:35.226096   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-368295 minikube.k8s.io/updated_at=2024_09_27T01_46_35_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625 minikube.k8s.io/name=default-k8s-diff-port-368295 minikube.k8s.io/primary=true
	I0927 01:46:35.226096   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:46:35.258203   69534 ops.go:34] apiserver oom_adj: -16
	I0927 01:46:35.425367   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:46:35.926435   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:46:36.425611   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:46:36.925505   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:46:37.426329   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:46:37.926184   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:46:38.425745   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:46:38.925572   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:46:39.425831   69534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 01:46:39.508783   69534 kubeadm.go:1113] duration metric: took 4.282764601s to wait for elevateKubeSystemPrivileges
	I0927 01:46:39.508817   69534 kubeadm.go:394] duration metric: took 4m59.95903234s to StartCluster
	I0927 01:46:39.508838   69534 settings.go:142] acquiring lock: {Name:mk5dca3ab86dd3a71947d9d84c3d32131258c6f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:46:39.508930   69534 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 01:46:39.510771   69534 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/kubeconfig: {Name:mke01ed683bdb96463571316956510763878395f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:46:39.511005   69534 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.83 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 01:46:39.511071   69534 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0927 01:46:39.511194   69534 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-368295"
	I0927 01:46:39.511214   69534 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-368295"
	I0927 01:46:39.511230   69534 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-368295"
	I0927 01:46:39.511261   69534 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-368295"
	W0927 01:46:39.511276   69534 addons.go:243] addon metrics-server should already be in state true
	I0927 01:46:39.511325   69534 host.go:66] Checking if "default-k8s-diff-port-368295" exists ...
	I0927 01:46:39.511243   69534 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-368295"
	I0927 01:46:39.511225   69534 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-368295"
	W0927 01:46:39.511515   69534 addons.go:243] addon storage-provisioner should already be in state true
	I0927 01:46:39.511538   69534 host.go:66] Checking if "default-k8s-diff-port-368295" exists ...
	I0927 01:46:39.511223   69534 config.go:182] Loaded profile config "default-k8s-diff-port-368295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 01:46:39.511772   69534 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:46:39.511818   69534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:46:39.511844   69534 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:46:39.511772   69534 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:46:39.511877   69534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:46:39.511905   69534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:46:39.513051   69534 out.go:177] * Verifying Kubernetes components...
	I0927 01:46:39.514530   69534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:46:39.528031   69534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32777
	I0927 01:46:39.528033   69534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43693
	I0927 01:46:39.528446   69534 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:46:39.528603   69534 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:46:39.528997   69534 main.go:141] libmachine: Using API Version  1
	I0927 01:46:39.529022   69534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:46:39.529085   69534 main.go:141] libmachine: Using API Version  1
	I0927 01:46:39.529101   69534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:46:39.529210   69534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37121
	I0927 01:46:39.529421   69534 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:46:39.529721   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetState
	I0927 01:46:39.529743   69534 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:46:39.529724   69534 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:46:39.530304   69534 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:46:39.530358   69534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:46:39.530308   69534 main.go:141] libmachine: Using API Version  1
	I0927 01:46:39.530423   69534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:46:39.530762   69534 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:46:39.531337   69534 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:46:39.531389   69534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:46:39.533286   69534 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-368295"
	W0927 01:46:39.533306   69534 addons.go:243] addon default-storageclass should already be in state true
	I0927 01:46:39.533333   69534 host.go:66] Checking if "default-k8s-diff-port-368295" exists ...
	I0927 01:46:39.533656   69534 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:46:39.533692   69534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:46:39.546657   69534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44507
	I0927 01:46:39.546881   69534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42459
	I0927 01:46:39.547298   69534 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:46:39.547327   69534 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:46:39.547842   69534 main.go:141] libmachine: Using API Version  1
	I0927 01:46:39.547860   69534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:46:39.547860   69534 main.go:141] libmachine: Using API Version  1
	I0927 01:46:39.547876   69534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:46:39.548220   69534 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:46:39.548239   69534 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:46:39.548435   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetState
	I0927 01:46:39.548481   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetState
	I0927 01:46:39.550160   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:46:39.550384   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:46:39.550445   69534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41657
	I0927 01:46:39.550744   69534 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:46:39.551173   69534 main.go:141] libmachine: Using API Version  1
	I0927 01:46:39.551195   69534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:46:39.551525   69534 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:46:39.552620   69534 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19711-14935/.minikube/bin/docker-machine-driver-kvm2
	I0927 01:46:39.552652   69534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:46:39.552838   69534 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:46:39.552916   69534 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0927 01:46:36.914500   68676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:46:36.932340   68676 api_server.go:72] duration metric: took 4m14.883408931s to wait for apiserver process to appear ...
	I0927 01:46:36.932368   68676 api_server.go:88] waiting for apiserver healthz status ...
	I0927 01:46:36.932407   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:46:36.932465   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:46:36.967757   68676 cri.go:89] found id: "d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef"
	I0927 01:46:36.967780   68676 cri.go:89] found id: ""
	I0927 01:46:36.967787   68676 logs.go:276] 1 containers: [d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef]
	I0927 01:46:36.967832   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:36.972025   68676 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:46:36.972105   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:46:37.018403   68676 cri.go:89] found id: "703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0"
	I0927 01:46:37.018431   68676 cri.go:89] found id: ""
	I0927 01:46:37.018448   68676 logs.go:276] 1 containers: [703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0]
	I0927 01:46:37.018515   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:37.022868   68676 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:46:37.022925   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:46:37.062443   68676 cri.go:89] found id: "5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0"
	I0927 01:46:37.062466   68676 cri.go:89] found id: ""
	I0927 01:46:37.062474   68676 logs.go:276] 1 containers: [5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0]
	I0927 01:46:37.062534   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:37.066617   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:46:37.066674   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:46:37.101462   68676 cri.go:89] found id: "22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05"
	I0927 01:46:37.101489   68676 cri.go:89] found id: ""
	I0927 01:46:37.101500   68676 logs.go:276] 1 containers: [22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05]
	I0927 01:46:37.101557   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:37.105564   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:46:37.105620   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:46:37.143692   68676 cri.go:89] found id: "d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f"
	I0927 01:46:37.143719   68676 cri.go:89] found id: ""
	I0927 01:46:37.143729   68676 logs.go:276] 1 containers: [d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f]
	I0927 01:46:37.143775   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:37.148405   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:46:37.148484   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:46:37.184914   68676 cri.go:89] found id: "56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647"
	I0927 01:46:37.184943   68676 cri.go:89] found id: ""
	I0927 01:46:37.184954   68676 logs.go:276] 1 containers: [56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647]
	I0927 01:46:37.185013   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:37.189486   68676 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:46:37.189553   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:46:37.235389   68676 cri.go:89] found id: ""
	I0927 01:46:37.235416   68676 logs.go:276] 0 containers: []
	W0927 01:46:37.235424   68676 logs.go:278] No container was found matching "kindnet"
	I0927 01:46:37.235429   68676 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0927 01:46:37.235480   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0927 01:46:37.276239   68676 cri.go:89] found id: "8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f"
	I0927 01:46:37.276266   68676 cri.go:89] found id: "074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c"
	I0927 01:46:37.276272   68676 cri.go:89] found id: ""
	I0927 01:46:37.276282   68676 logs.go:276] 2 containers: [8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f 074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c]
	I0927 01:46:37.276338   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:37.280381   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:37.284423   68676 logs.go:123] Gathering logs for coredns [5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0] ...
	I0927 01:46:37.284440   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0"
	I0927 01:46:37.319790   68676 logs.go:123] Gathering logs for kube-scheduler [22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05] ...
	I0927 01:46:37.319816   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05"
	I0927 01:46:37.358818   68676 logs.go:123] Gathering logs for kube-proxy [d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f] ...
	I0927 01:46:37.358843   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f"
	I0927 01:46:37.398137   68676 logs.go:123] Gathering logs for kube-controller-manager [56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647] ...
	I0927 01:46:37.398168   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647"
	I0927 01:46:37.458672   68676 logs.go:123] Gathering logs for dmesg ...
	I0927 01:46:37.458720   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:46:37.476148   68676 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:46:37.476184   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 01:46:37.604190   68676 logs.go:123] Gathering logs for kube-apiserver [d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef] ...
	I0927 01:46:37.604223   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef"
	I0927 01:46:37.652633   68676 logs.go:123] Gathering logs for etcd [703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0] ...
	I0927 01:46:37.652671   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0"
	I0927 01:46:37.701240   68676 logs.go:123] Gathering logs for storage-provisioner [8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f] ...
	I0927 01:46:37.701273   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f"
	I0927 01:46:37.739555   68676 logs.go:123] Gathering logs for storage-provisioner [074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c] ...
	I0927 01:46:37.739583   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c"
	I0927 01:46:37.781721   68676 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:46:37.781750   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:46:38.209361   68676 logs.go:123] Gathering logs for container status ...
	I0927 01:46:38.209399   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:46:38.261628   68676 logs.go:123] Gathering logs for kubelet ...
	I0927 01:46:38.261658   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:46:39.554328   69534 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 01:46:39.554342   69534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0927 01:46:39.554362   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:46:39.554446   69534 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0927 01:46:39.554456   69534 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0927 01:46:39.554469   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:46:39.557886   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:46:39.557982   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:46:39.558093   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:46:39.558121   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:46:39.558269   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:46:39.558350   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:46:39.558369   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:46:39.558466   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:46:39.558620   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:46:39.558690   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:46:39.558740   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:46:39.558797   69534 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/default-k8s-diff-port-368295/id_rsa Username:docker}
	I0927 01:46:39.559026   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:46:39.559136   69534 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/default-k8s-diff-port-368295/id_rsa Username:docker}
	I0927 01:46:39.569570   69534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33177
	I0927 01:46:39.569981   69534 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:46:39.570364   69534 main.go:141] libmachine: Using API Version  1
	I0927 01:46:39.570383   69534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:46:39.570746   69534 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:46:39.570890   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetState
	I0927 01:46:39.572537   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .DriverName
	I0927 01:46:39.572779   69534 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0927 01:46:39.572795   69534 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0927 01:46:39.572815   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHHostname
	I0927 01:46:39.575104   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:46:39.575384   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b6:7a", ip: ""} in network mk-default-k8s-diff-port-368295: {Iface:virbr3 ExpiryTime:2024-09-27 02:41:25 +0000 UTC Type:0 Mac:52:54:00:a3:b6:7a Iaid: IPaddr:192.168.61.83 Prefix:24 Hostname:default-k8s-diff-port-368295 Clientid:01:52:54:00:a3:b6:7a}
	I0927 01:46:39.575435   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | domain default-k8s-diff-port-368295 has defined IP address 192.168.61.83 and MAC address 52:54:00:a3:b6:7a in network mk-default-k8s-diff-port-368295
	I0927 01:46:39.575595   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHPort
	I0927 01:46:39.575751   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHKeyPath
	I0927 01:46:39.575844   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .GetSSHUsername
	I0927 01:46:39.575960   69534 sshutil.go:53] new ssh client: &{IP:192.168.61.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/default-k8s-diff-port-368295/id_rsa Username:docker}
	I0927 01:46:39.784965   69534 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 01:46:39.820986   69534 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-368295" to be "Ready" ...
	I0927 01:46:39.829323   69534 node_ready.go:49] node "default-k8s-diff-port-368295" has status "Ready":"True"
	I0927 01:46:39.829346   69534 node_ready.go:38] duration metric: took 8.333848ms for node "default-k8s-diff-port-368295" to be "Ready" ...
	I0927 01:46:39.829358   69534 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:46:39.836143   69534 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-4d7pk" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:39.940697   69534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0927 01:46:39.955239   69534 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0927 01:46:39.955264   69534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0927 01:46:40.076199   69534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 01:46:40.080720   69534 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0927 01:46:40.080746   69534 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0927 01:46:40.182698   69534 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 01:46:40.182720   69534 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0927 01:46:40.219231   69534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 01:46:40.431480   69534 main.go:141] libmachine: Making call to close driver server
	I0927 01:46:40.431505   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .Close
	I0927 01:46:40.431859   69534 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:46:40.431875   69534 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:46:40.431875   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | Closing plugin on server side
	I0927 01:46:40.431889   69534 main.go:141] libmachine: Making call to close driver server
	I0927 01:46:40.431898   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .Close
	I0927 01:46:40.432126   69534 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:46:40.432146   69534 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:46:40.432189   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | Closing plugin on server side
	I0927 01:46:40.442440   69534 main.go:141] libmachine: Making call to close driver server
	I0927 01:46:40.442468   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .Close
	I0927 01:46:40.442761   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | Closing plugin on server side
	I0927 01:46:40.442785   69534 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:46:40.442815   69534 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:46:41.044597   69534 main.go:141] libmachine: Making call to close driver server
	I0927 01:46:41.044627   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .Close
	I0927 01:46:41.044964   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | Closing plugin on server side
	I0927 01:46:41.045013   69534 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:46:41.045021   69534 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:46:41.045033   69534 main.go:141] libmachine: Making call to close driver server
	I0927 01:46:41.045041   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .Close
	I0927 01:46:41.045254   69534 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:46:41.045267   69534 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:46:41.427791   69534 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.208520131s)
	I0927 01:46:41.427843   69534 main.go:141] libmachine: Making call to close driver server
	I0927 01:46:41.427859   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .Close
	I0927 01:46:41.428175   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) DBG | Closing plugin on server side
	I0927 01:46:41.428184   69534 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:46:41.428196   69534 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:46:41.428205   69534 main.go:141] libmachine: Making call to close driver server
	I0927 01:46:41.428213   69534 main.go:141] libmachine: (default-k8s-diff-port-368295) Calling .Close
	I0927 01:46:41.428477   69534 main.go:141] libmachine: Successfully made call to close driver server
	I0927 01:46:41.428490   69534 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 01:46:41.428500   69534 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-368295"
	I0927 01:46:41.430399   69534 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0927 01:46:41.431795   69534 addons.go:510] duration metric: took 1.920729429s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0927 01:46:41.844911   69534 pod_ready.go:103] pod "coredns-7c65d6cfc9-4d7pk" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:40.832698   68676 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0927 01:46:40.838244   68676 api_server.go:279] https://192.168.50.246:8443/healthz returned 200:
	ok
	I0927 01:46:40.839252   68676 api_server.go:141] control plane version: v1.31.1
	I0927 01:46:40.839270   68676 api_server.go:131] duration metric: took 3.906895557s to wait for apiserver health ...
	I0927 01:46:40.839277   68676 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 01:46:40.839312   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:46:40.839373   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:46:40.879726   68676 cri.go:89] found id: "d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef"
	I0927 01:46:40.879753   68676 cri.go:89] found id: ""
	I0927 01:46:40.879763   68676 logs.go:276] 1 containers: [d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef]
	I0927 01:46:40.879822   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:40.884233   68676 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:46:40.884301   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:46:40.936189   68676 cri.go:89] found id: "703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0"
	I0927 01:46:40.936216   68676 cri.go:89] found id: ""
	I0927 01:46:40.936226   68676 logs.go:276] 1 containers: [703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0]
	I0927 01:46:40.936289   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:40.940805   68676 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:46:40.940885   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:46:40.978662   68676 cri.go:89] found id: "5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0"
	I0927 01:46:40.978683   68676 cri.go:89] found id: ""
	I0927 01:46:40.978693   68676 logs.go:276] 1 containers: [5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0]
	I0927 01:46:40.978757   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:40.983357   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:46:40.983428   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:46:41.027134   68676 cri.go:89] found id: "22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05"
	I0927 01:46:41.027160   68676 cri.go:89] found id: ""
	I0927 01:46:41.027170   68676 logs.go:276] 1 containers: [22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05]
	I0927 01:46:41.027229   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:41.031909   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:46:41.031986   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:46:41.077539   68676 cri.go:89] found id: "d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f"
	I0927 01:46:41.077568   68676 cri.go:89] found id: ""
	I0927 01:46:41.077577   68676 logs.go:276] 1 containers: [d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f]
	I0927 01:46:41.077638   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:41.082237   68676 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:46:41.082314   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:46:41.122413   68676 cri.go:89] found id: "56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647"
	I0927 01:46:41.122437   68676 cri.go:89] found id: ""
	I0927 01:46:41.122446   68676 logs.go:276] 1 containers: [56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647]
	I0927 01:46:41.122501   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:41.127807   68676 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:46:41.127872   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:46:41.174287   68676 cri.go:89] found id: ""
	I0927 01:46:41.174320   68676 logs.go:276] 0 containers: []
	W0927 01:46:41.174331   68676 logs.go:278] No container was found matching "kindnet"
	I0927 01:46:41.174339   68676 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0927 01:46:41.174397   68676 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0927 01:46:41.213192   68676 cri.go:89] found id: "8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f"
	I0927 01:46:41.213219   68676 cri.go:89] found id: "074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c"
	I0927 01:46:41.213225   68676 cri.go:89] found id: ""
	I0927 01:46:41.213234   68676 logs.go:276] 2 containers: [8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f 074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c]
	I0927 01:46:41.213298   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:41.218168   68676 ssh_runner.go:195] Run: which crictl
	I0927 01:46:41.227165   68676 logs.go:123] Gathering logs for storage-provisioner [8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f] ...
	I0927 01:46:41.227194   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b91015e1bfce080e56cb8b43fb85113f4a04711d40aefdb94186e4ac3b51f8f"
	I0927 01:46:41.269538   68676 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:46:41.269571   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:46:41.691900   68676 logs.go:123] Gathering logs for dmesg ...
	I0927 01:46:41.691943   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:46:41.709639   68676 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:46:41.709682   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 01:46:41.829334   68676 logs.go:123] Gathering logs for etcd [703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0] ...
	I0927 01:46:41.829366   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 703936dc7e81fe6f2b4844c9eee4e823886a461824a5e6e718b7d10dd688cdf0"
	I0927 01:46:41.886517   68676 logs.go:123] Gathering logs for kube-scheduler [22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05] ...
	I0927 01:46:41.886552   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22e50606ae3281c94c9c42d25bb5608948dceea56d0745f1b762b3fdc19eea05"
	I0927 01:46:41.933012   68676 logs.go:123] Gathering logs for kube-proxy [d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f] ...
	I0927 01:46:41.933035   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d44b4389046f9e036faa09df244b908b80bd30919d46eb0ce9221a3a8d204d1f"
	I0927 01:46:41.973881   68676 logs.go:123] Gathering logs for kube-controller-manager [56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647] ...
	I0927 01:46:41.973921   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56ed48053950be7abe3fc4de07167e30254a8d42802a4283468b2168c2c7d647"
	I0927 01:46:42.032592   68676 logs.go:123] Gathering logs for container status ...
	I0927 01:46:42.032628   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:46:42.087817   68676 logs.go:123] Gathering logs for kubelet ...
	I0927 01:46:42.087856   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:46:42.162770   68676 logs.go:123] Gathering logs for kube-apiserver [d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef] ...
	I0927 01:46:42.162808   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5488a6ee0ac85c8a4173cd0d2387cb9c1ded0ab2fb4b96ec3b7ba425ffc81ef"
	I0927 01:46:42.213367   68676 logs.go:123] Gathering logs for coredns [5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0] ...
	I0927 01:46:42.213399   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a757b127a9ab9aac8b16f013f5b833a2cf1cc419077e65ce6a8a7161e63f8f0"
	I0927 01:46:42.254937   68676 logs.go:123] Gathering logs for storage-provisioner [074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c] ...
	I0927 01:46:42.254963   68676 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 074b4636352f07e779d3cb01f1d2a5c3b34fbe758dcbcd361e794a31eb371b4c"
	I0927 01:46:44.804112   68676 system_pods.go:59] 8 kube-system pods found
	I0927 01:46:44.804146   68676 system_pods.go:61] "coredns-7c65d6cfc9-7q54t" [f320e945-a1d6-4109-a0cc-5bd4e3c1bfba] Running
	I0927 01:46:44.804153   68676 system_pods.go:61] "etcd-no-preload-521072" [6c63ce89-47bf-4d67-b5db-273a046c4b51] Running
	I0927 01:46:44.804158   68676 system_pods.go:61] "kube-apiserver-no-preload-521072" [e4804d4b-0532-46c7-8579-a829a6c5254c] Running
	I0927 01:46:44.804162   68676 system_pods.go:61] "kube-controller-manager-no-preload-521072" [5029e53b-ae24-41fb-aa58-14faf0440adb] Running
	I0927 01:46:44.804167   68676 system_pods.go:61] "kube-proxy-wkcb8" [ea79339c-b2f0-4cb8-ab57-4f13f689f504] Running
	I0927 01:46:44.804171   68676 system_pods.go:61] "kube-scheduler-no-preload-521072" [b70fd9f0-c131-4c13-b53f-46c650a5dcf8] Running
	I0927 01:46:44.804180   68676 system_pods.go:61] "metrics-server-6867b74b74-cc9pp" [a840ca52-d2b8-47a5-b379-30504658e0d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 01:46:44.804186   68676 system_pods.go:61] "storage-provisioner" [b4595dc3-c439-4615-95b7-2009476c779c] Running
	I0927 01:46:44.804196   68676 system_pods.go:74] duration metric: took 3.964911623s to wait for pod list to return data ...
	I0927 01:46:44.804208   68676 default_sa.go:34] waiting for default service account to be created ...
	I0927 01:46:44.807883   68676 default_sa.go:45] found service account: "default"
	I0927 01:46:44.807907   68676 default_sa.go:55] duration metric: took 3.689984ms for default service account to be created ...
	I0927 01:46:44.807917   68676 system_pods.go:116] waiting for k8s-apps to be running ...
	I0927 01:46:44.812135   68676 system_pods.go:86] 8 kube-system pods found
	I0927 01:46:44.812161   68676 system_pods.go:89] "coredns-7c65d6cfc9-7q54t" [f320e945-a1d6-4109-a0cc-5bd4e3c1bfba] Running
	I0927 01:46:44.812167   68676 system_pods.go:89] "etcd-no-preload-521072" [6c63ce89-47bf-4d67-b5db-273a046c4b51] Running
	I0927 01:46:44.812174   68676 system_pods.go:89] "kube-apiserver-no-preload-521072" [e4804d4b-0532-46c7-8579-a829a6c5254c] Running
	I0927 01:46:44.812178   68676 system_pods.go:89] "kube-controller-manager-no-preload-521072" [5029e53b-ae24-41fb-aa58-14faf0440adb] Running
	I0927 01:46:44.812185   68676 system_pods.go:89] "kube-proxy-wkcb8" [ea79339c-b2f0-4cb8-ab57-4f13f689f504] Running
	I0927 01:46:44.812190   68676 system_pods.go:89] "kube-scheduler-no-preload-521072" [b70fd9f0-c131-4c13-b53f-46c650a5dcf8] Running
	I0927 01:46:44.812200   68676 system_pods.go:89] "metrics-server-6867b74b74-cc9pp" [a840ca52-d2b8-47a5-b379-30504658e0d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 01:46:44.812209   68676 system_pods.go:89] "storage-provisioner" [b4595dc3-c439-4615-95b7-2009476c779c] Running
	I0927 01:46:44.812222   68676 system_pods.go:126] duration metric: took 4.297317ms to wait for k8s-apps to be running ...
	I0927 01:46:44.812234   68676 system_svc.go:44] waiting for kubelet service to be running ....
	I0927 01:46:44.812282   68676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 01:46:44.827911   68676 system_svc.go:56] duration metric: took 15.668154ms WaitForService to wait for kubelet
	I0927 01:46:44.827946   68676 kubeadm.go:582] duration metric: took 4m22.779012486s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 01:46:44.827964   68676 node_conditions.go:102] verifying NodePressure condition ...
	I0927 01:46:44.830688   68676 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 01:46:44.830707   68676 node_conditions.go:123] node cpu capacity is 2
	I0927 01:46:44.830716   68676 node_conditions.go:105] duration metric: took 2.747178ms to run NodePressure ...
	I0927 01:46:44.830725   68676 start.go:241] waiting for startup goroutines ...
	I0927 01:46:44.830732   68676 start.go:246] waiting for cluster config update ...
	I0927 01:46:44.830742   68676 start.go:255] writing updated cluster config ...
	I0927 01:46:44.830990   68676 ssh_runner.go:195] Run: rm -f paused
	I0927 01:46:44.881491   68676 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0927 01:46:44.884307   68676 out.go:177] * Done! kubectl is now configured to use "no-preload-521072" cluster and "default" namespace by default
	I0927 01:46:42.397038   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:46:42.397331   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:46:43.845539   69534 pod_ready.go:103] pod "coredns-7c65d6cfc9-4d7pk" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:46.343584   69534 pod_ready.go:103] pod "coredns-7c65d6cfc9-4d7pk" in "kube-system" namespace has status "Ready":"False"
	I0927 01:46:48.842505   69534 pod_ready.go:93] pod "coredns-7c65d6cfc9-4d7pk" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:48.842527   69534 pod_ready.go:82] duration metric: took 9.006354643s for pod "coredns-7c65d6cfc9-4d7pk" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:48.842537   69534 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qkbzv" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:48.846753   69534 pod_ready.go:93] pod "coredns-7c65d6cfc9-qkbzv" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:48.846771   69534 pod_ready.go:82] duration metric: took 4.228349ms for pod "coredns-7c65d6cfc9-qkbzv" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:48.846780   69534 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:48.851234   69534 pod_ready.go:93] pod "etcd-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:48.851256   69534 pod_ready.go:82] duration metric: took 4.468727ms for pod "etcd-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:48.851265   69534 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:48.855648   69534 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:48.855669   69534 pod_ready.go:82] duration metric: took 4.398439ms for pod "kube-apiserver-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:48.855678   69534 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:48.860882   69534 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:48.860898   69534 pod_ready.go:82] duration metric: took 5.214278ms for pod "kube-controller-manager-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:48.860906   69534 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kqjdq" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:49.241149   69534 pod_ready.go:93] pod "kube-proxy-kqjdq" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:49.241180   69534 pod_ready.go:82] duration metric: took 380.266777ms for pod "kube-proxy-kqjdq" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:49.241192   69534 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:49.642403   69534 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-368295" in "kube-system" namespace has status "Ready":"True"
	I0927 01:46:49.642437   69534 pod_ready.go:82] duration metric: took 401.235952ms for pod "kube-scheduler-default-k8s-diff-port-368295" in "kube-system" namespace to be "Ready" ...
	I0927 01:46:49.642448   69534 pod_ready.go:39] duration metric: took 9.813073515s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:46:49.642465   69534 api_server.go:52] waiting for apiserver process to appear ...
	I0927 01:46:49.642518   69534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:46:49.658847   69534 api_server.go:72] duration metric: took 10.147811957s to wait for apiserver process to appear ...
	I0927 01:46:49.658877   69534 api_server.go:88] waiting for apiserver healthz status ...
	I0927 01:46:49.658898   69534 api_server.go:253] Checking apiserver healthz at https://192.168.61.83:8444/healthz ...
	I0927 01:46:49.665899   69534 api_server.go:279] https://192.168.61.83:8444/healthz returned 200:
	ok
	I0927 01:46:49.666844   69534 api_server.go:141] control plane version: v1.31.1
	I0927 01:46:49.666867   69534 api_server.go:131] duration metric: took 7.982491ms to wait for apiserver health ...
	I0927 01:46:49.666876   69534 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 01:46:49.843377   69534 system_pods.go:59] 9 kube-system pods found
	I0927 01:46:49.843402   69534 system_pods.go:61] "coredns-7c65d6cfc9-4d7pk" [c84ab26c-2e13-437c-b059-43c8ca1d90c8] Running
	I0927 01:46:49.843408   69534 system_pods.go:61] "coredns-7c65d6cfc9-qkbzv" [e2725448-3f80-45d8-8bd8-49dcf8878f7e] Running
	I0927 01:46:49.843413   69534 system_pods.go:61] "etcd-default-k8s-diff-port-368295" [cf24c93c-bcff-4ffc-b7b2-8e69c070cf92] Running
	I0927 01:46:49.843417   69534 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-368295" [7cb4e15c-d20c-4f93-bf12-d2407edcc877] Running
	I0927 01:46:49.843420   69534 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-368295" [52bc69db-f7b9-40a2-9930-1b3bd321fecf] Running
	I0927 01:46:49.843425   69534 system_pods.go:61] "kube-proxy-kqjdq" [91b96945-0ffe-404f-a0d5-f8729d4248ce] Running
	I0927 01:46:49.843429   69534 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-368295" [bc16cdb1-6e5c-4d19-ab43-cd378a65184d] Running
	I0927 01:46:49.843437   69534 system_pods.go:61] "metrics-server-6867b74b74-d85zg" [579ae063-049c-423c-8f91-91fb4b32e4c3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 01:46:49.843443   69534 system_pods.go:61] "storage-provisioner" [aaa7a054-2eee-45ee-a9bc-c305e53e1273] Running
	I0927 01:46:49.843454   69534 system_pods.go:74] duration metric: took 176.572041ms to wait for pod list to return data ...
	I0927 01:46:49.843466   69534 default_sa.go:34] waiting for default service account to be created ...
	I0927 01:46:50.041031   69534 default_sa.go:45] found service account: "default"
	I0927 01:46:50.041053   69534 default_sa.go:55] duration metric: took 197.577565ms for default service account to be created ...
	I0927 01:46:50.041062   69534 system_pods.go:116] waiting for k8s-apps to be running ...
	I0927 01:46:50.243807   69534 system_pods.go:86] 9 kube-system pods found
	I0927 01:46:50.243834   69534 system_pods.go:89] "coredns-7c65d6cfc9-4d7pk" [c84ab26c-2e13-437c-b059-43c8ca1d90c8] Running
	I0927 01:46:50.243840   69534 system_pods.go:89] "coredns-7c65d6cfc9-qkbzv" [e2725448-3f80-45d8-8bd8-49dcf8878f7e] Running
	I0927 01:46:50.243845   69534 system_pods.go:89] "etcd-default-k8s-diff-port-368295" [cf24c93c-bcff-4ffc-b7b2-8e69c070cf92] Running
	I0927 01:46:50.243849   69534 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-368295" [7cb4e15c-d20c-4f93-bf12-d2407edcc877] Running
	I0927 01:46:50.243853   69534 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-368295" [52bc69db-f7b9-40a2-9930-1b3bd321fecf] Running
	I0927 01:46:50.243856   69534 system_pods.go:89] "kube-proxy-kqjdq" [91b96945-0ffe-404f-a0d5-f8729d4248ce] Running
	I0927 01:46:50.243860   69534 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-368295" [bc16cdb1-6e5c-4d19-ab43-cd378a65184d] Running
	I0927 01:46:50.243866   69534 system_pods.go:89] "metrics-server-6867b74b74-d85zg" [579ae063-049c-423c-8f91-91fb4b32e4c3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 01:46:50.243869   69534 system_pods.go:89] "storage-provisioner" [aaa7a054-2eee-45ee-a9bc-c305e53e1273] Running
	I0927 01:46:50.243879   69534 system_pods.go:126] duration metric: took 202.812704ms to wait for k8s-apps to be running ...
	I0927 01:46:50.243888   69534 system_svc.go:44] waiting for kubelet service to be running ....
	I0927 01:46:50.243931   69534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 01:46:50.260175   69534 system_svc.go:56] duration metric: took 16.279433ms WaitForService to wait for kubelet
	I0927 01:46:50.260203   69534 kubeadm.go:582] duration metric: took 10.749173466s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 01:46:50.260220   69534 node_conditions.go:102] verifying NodePressure condition ...
	I0927 01:46:50.441020   69534 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 01:46:50.441044   69534 node_conditions.go:123] node cpu capacity is 2
	I0927 01:46:50.441052   69534 node_conditions.go:105] duration metric: took 180.827321ms to run NodePressure ...
	I0927 01:46:50.441062   69534 start.go:241] waiting for startup goroutines ...
	I0927 01:46:50.441081   69534 start.go:246] waiting for cluster config update ...
	I0927 01:46:50.441091   69534 start.go:255] writing updated cluster config ...
	I0927 01:46:50.441338   69534 ssh_runner.go:195] Run: rm -f paused
	I0927 01:46:50.492229   69534 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0927 01:46:50.494198   69534 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-368295" cluster and "default" namespace by default
	I0927 01:47:22.398756   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:47:22.399035   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:47:22.399051   69333 kubeadm.go:310] 
	I0927 01:47:22.399125   69333 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0927 01:47:22.399167   69333 kubeadm.go:310] 		timed out waiting for the condition
	I0927 01:47:22.399176   69333 kubeadm.go:310] 
	I0927 01:47:22.399242   69333 kubeadm.go:310] 	This error is likely caused by:
	I0927 01:47:22.399326   69333 kubeadm.go:310] 		- The kubelet is not running
	I0927 01:47:22.399452   69333 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0927 01:47:22.399464   69333 kubeadm.go:310] 
	I0927 01:47:22.399627   69333 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0927 01:47:22.399702   69333 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0927 01:47:22.399750   69333 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0927 01:47:22.399763   69333 kubeadm.go:310] 
	I0927 01:47:22.399908   69333 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0927 01:47:22.400001   69333 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0927 01:47:22.400014   69333 kubeadm.go:310] 
	I0927 01:47:22.400109   69333 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0927 01:47:22.400218   69333 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0927 01:47:22.400331   69333 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0927 01:47:22.400406   69333 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0927 01:47:22.400414   69333 kubeadm.go:310] 
	I0927 01:47:22.401157   69333 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0927 01:47:22.401273   69333 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0927 01:47:22.401342   69333 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0927 01:47:22.401458   69333 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0927 01:47:22.401498   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0927 01:47:22.863316   69333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 01:47:22.878664   69333 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 01:47:22.889118   69333 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 01:47:22.889135   69333 kubeadm.go:157] found existing configuration files:
	
	I0927 01:47:22.889173   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 01:47:22.898966   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 01:47:22.899035   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 01:47:22.911280   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 01:47:22.920628   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 01:47:22.920677   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 01:47:22.929860   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 01:47:22.938794   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 01:47:22.938839   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 01:47:22.947982   69333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 01:47:22.956785   69333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 01:47:22.956837   69333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 01:47:22.966186   69333 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0927 01:47:23.039915   69333 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0927 01:47:23.040017   69333 kubeadm.go:310] [preflight] Running pre-flight checks
	I0927 01:47:23.189097   69333 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0927 01:47:23.189274   69333 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0927 01:47:23.189395   69333 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0927 01:47:23.400731   69333 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0927 01:47:23.402659   69333 out.go:235]   - Generating certificates and keys ...
	I0927 01:47:23.402776   69333 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0927 01:47:23.402855   69333 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0927 01:47:23.402959   69333 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0927 01:47:23.403040   69333 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0927 01:47:23.403162   69333 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0927 01:47:23.403349   69333 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0927 01:47:23.403935   69333 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0927 01:47:23.404260   69333 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0927 01:47:23.404563   69333 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0927 01:47:23.404896   69333 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0927 01:47:23.405050   69333 kubeadm.go:310] [certs] Using the existing "sa" key
	I0927 01:47:23.405121   69333 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0927 01:47:23.466908   69333 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0927 01:47:23.717009   69333 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0927 01:47:23.766225   69333 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0927 01:47:23.961488   69333 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0927 01:47:23.987846   69333 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0927 01:47:23.988724   69333 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0927 01:47:23.988790   69333 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0927 01:47:24.130550   69333 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0927 01:47:24.132276   69333 out.go:235]   - Booting up control plane ...
	I0927 01:47:24.132386   69333 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0927 01:47:24.146415   69333 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0927 01:47:24.147664   69333 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0927 01:47:24.148443   69333 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0927 01:47:24.151623   69333 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0927 01:48:04.153587   69333 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0927 01:48:04.153934   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:48:04.154129   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:48:09.154634   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:48:09.154883   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:48:19.155638   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:48:19.155844   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:48:39.156224   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:48:39.156429   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:49:19.155507   69333 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 01:49:19.155754   69333 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 01:49:19.155779   69333 kubeadm.go:310] 
	I0927 01:49:19.155872   69333 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0927 01:49:19.155947   69333 kubeadm.go:310] 		timed out waiting for the condition
	I0927 01:49:19.155958   69333 kubeadm.go:310] 
	I0927 01:49:19.156026   69333 kubeadm.go:310] 	This error is likely caused by:
	I0927 01:49:19.156077   69333 kubeadm.go:310] 		- The kubelet is not running
	I0927 01:49:19.156230   69333 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0927 01:49:19.156242   69333 kubeadm.go:310] 
	I0927 01:49:19.156379   69333 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0927 01:49:19.156434   69333 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0927 01:49:19.156486   69333 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0927 01:49:19.156506   69333 kubeadm.go:310] 
	I0927 01:49:19.156628   69333 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0927 01:49:19.156756   69333 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0927 01:49:19.156775   69333 kubeadm.go:310] 
	I0927 01:49:19.156925   69333 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0927 01:49:19.157022   69333 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0927 01:49:19.157112   69333 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0927 01:49:19.157191   69333 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0927 01:49:19.157202   69333 kubeadm.go:310] 
	I0927 01:49:19.158023   69333 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0927 01:49:19.158149   69333 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0927 01:49:19.158277   69333 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0927 01:49:19.158357   69333 kubeadm.go:394] duration metric: took 7m56.829434682s to StartCluster
	I0927 01:49:19.158404   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:49:19.158477   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:49:19.200705   69333 cri.go:89] found id: ""
	I0927 01:49:19.200729   69333 logs.go:276] 0 containers: []
	W0927 01:49:19.200736   69333 logs.go:278] No container was found matching "kube-apiserver"
	I0927 01:49:19.200742   69333 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:49:19.200791   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:49:19.240252   69333 cri.go:89] found id: ""
	I0927 01:49:19.240274   69333 logs.go:276] 0 containers: []
	W0927 01:49:19.240285   69333 logs.go:278] No container was found matching "etcd"
	I0927 01:49:19.240292   69333 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:49:19.240352   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:49:19.275802   69333 cri.go:89] found id: ""
	I0927 01:49:19.275826   69333 logs.go:276] 0 containers: []
	W0927 01:49:19.275834   69333 logs.go:278] No container was found matching "coredns"
	I0927 01:49:19.275840   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:49:19.275894   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:49:19.309317   69333 cri.go:89] found id: ""
	I0927 01:49:19.309342   69333 logs.go:276] 0 containers: []
	W0927 01:49:19.309350   69333 logs.go:278] No container was found matching "kube-scheduler"
	I0927 01:49:19.309357   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:49:19.309414   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:49:19.344778   69333 cri.go:89] found id: ""
	I0927 01:49:19.344806   69333 logs.go:276] 0 containers: []
	W0927 01:49:19.344817   69333 logs.go:278] No container was found matching "kube-proxy"
	I0927 01:49:19.344823   69333 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:49:19.344882   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:49:19.379394   69333 cri.go:89] found id: ""
	I0927 01:49:19.379426   69333 logs.go:276] 0 containers: []
	W0927 01:49:19.379438   69333 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 01:49:19.379445   69333 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:49:19.379502   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:49:19.415349   69333 cri.go:89] found id: ""
	I0927 01:49:19.415376   69333 logs.go:276] 0 containers: []
	W0927 01:49:19.415384   69333 logs.go:278] No container was found matching "kindnet"
	I0927 01:49:19.415390   69333 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:49:19.415438   69333 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:49:19.453357   69333 cri.go:89] found id: ""
	I0927 01:49:19.453381   69333 logs.go:276] 0 containers: []
	W0927 01:49:19.453389   69333 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0927 01:49:19.453397   69333 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:49:19.453409   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 01:49:19.530384   69333 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 01:49:19.530405   69333 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:49:19.530423   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:49:19.643418   69333 logs.go:123] Gathering logs for container status ...
	I0927 01:49:19.643453   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:49:19.688825   69333 logs.go:123] Gathering logs for kubelet ...
	I0927 01:49:19.688861   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 01:49:19.745945   69333 logs.go:123] Gathering logs for dmesg ...
	I0927 01:49:19.745983   69333 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0927 01:49:19.762685   69333 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0927 01:49:19.762739   69333 out.go:270] * 
	W0927 01:49:19.762791   69333 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0927 01:49:19.762804   69333 out.go:270] * 
	W0927 01:49:19.763605   69333 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0927 01:49:19.767393   69333 out.go:201] 
	W0927 01:49:19.768622   69333 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0927 01:49:19.768671   69333 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0927 01:49:19.768690   69333 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0927 01:49:19.771036   69333 out.go:201] 
	
	
	==> CRI-O <==
	Sep 27 02:01:05 old-k8s-version-612261 crio[628]: time="2024-09-27 02:01:05.503044223Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402465503021485,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=224c83d5-3c6c-4dbe-97fd-c312efd0e4f6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 02:01:05 old-k8s-version-612261 crio[628]: time="2024-09-27 02:01:05.503694534Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8940e6f0-fc55-44a9-8705-cc31279d7161 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 02:01:05 old-k8s-version-612261 crio[628]: time="2024-09-27 02:01:05.503742455Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8940e6f0-fc55-44a9-8705-cc31279d7161 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 02:01:05 old-k8s-version-612261 crio[628]: time="2024-09-27 02:01:05.503849222Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=8940e6f0-fc55-44a9-8705-cc31279d7161 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 02:01:05 old-k8s-version-612261 crio[628]: time="2024-09-27 02:01:05.537432815Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fd5beb77-3081-4abb-857b-e913c30fc96a name=/runtime.v1.RuntimeService/Version
	Sep 27 02:01:05 old-k8s-version-612261 crio[628]: time="2024-09-27 02:01:05.537507070Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fd5beb77-3081-4abb-857b-e913c30fc96a name=/runtime.v1.RuntimeService/Version
	Sep 27 02:01:05 old-k8s-version-612261 crio[628]: time="2024-09-27 02:01:05.538648065Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4cc80d33-071b-4e0c-99ac-a67d0629920c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 02:01:05 old-k8s-version-612261 crio[628]: time="2024-09-27 02:01:05.539117268Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402465539090419,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4cc80d33-071b-4e0c-99ac-a67d0629920c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 02:01:05 old-k8s-version-612261 crio[628]: time="2024-09-27 02:01:05.539591838Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2780e59e-81b9-4a1a-a43a-ccdcac61fb85 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 02:01:05 old-k8s-version-612261 crio[628]: time="2024-09-27 02:01:05.539644337Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2780e59e-81b9-4a1a-a43a-ccdcac61fb85 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 02:01:05 old-k8s-version-612261 crio[628]: time="2024-09-27 02:01:05.539678335Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=2780e59e-81b9-4a1a-a43a-ccdcac61fb85 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 02:01:05 old-k8s-version-612261 crio[628]: time="2024-09-27 02:01:05.574930065Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2c399ff0-61cd-4dad-a125-507ce217453b name=/runtime.v1.RuntimeService/Version
	Sep 27 02:01:05 old-k8s-version-612261 crio[628]: time="2024-09-27 02:01:05.575027606Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2c399ff0-61cd-4dad-a125-507ce217453b name=/runtime.v1.RuntimeService/Version
	Sep 27 02:01:05 old-k8s-version-612261 crio[628]: time="2024-09-27 02:01:05.576261221Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c3cce719-56ce-41f5-bd19-4937fabe0709 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 02:01:05 old-k8s-version-612261 crio[628]: time="2024-09-27 02:01:05.576632511Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402465576611763,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c3cce719-56ce-41f5-bd19-4937fabe0709 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 02:01:05 old-k8s-version-612261 crio[628]: time="2024-09-27 02:01:05.577206551Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0e737844-0ce9-438d-a7fb-90e83051229f name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 02:01:05 old-k8s-version-612261 crio[628]: time="2024-09-27 02:01:05.577254451Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0e737844-0ce9-438d-a7fb-90e83051229f name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 02:01:05 old-k8s-version-612261 crio[628]: time="2024-09-27 02:01:05.577284201Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=0e737844-0ce9-438d-a7fb-90e83051229f name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 02:01:05 old-k8s-version-612261 crio[628]: time="2024-09-27 02:01:05.609028006Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d9060b7c-b2cc-4f48-89c7-59faf3ef308d name=/runtime.v1.RuntimeService/Version
	Sep 27 02:01:05 old-k8s-version-612261 crio[628]: time="2024-09-27 02:01:05.609123690Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d9060b7c-b2cc-4f48-89c7-59faf3ef308d name=/runtime.v1.RuntimeService/Version
	Sep 27 02:01:05 old-k8s-version-612261 crio[628]: time="2024-09-27 02:01:05.610308468Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f8d3fba7-ae99-4b77-a834-cb04f04b55e8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 02:01:05 old-k8s-version-612261 crio[628]: time="2024-09-27 02:01:05.610947038Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727402465610908465,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f8d3fba7-ae99-4b77-a834-cb04f04b55e8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 02:01:05 old-k8s-version-612261 crio[628]: time="2024-09-27 02:01:05.611391759Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=65c9bfc3-287b-4b94-b1cb-a7d9f8fdea39 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 02:01:05 old-k8s-version-612261 crio[628]: time="2024-09-27 02:01:05.611471360Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=65c9bfc3-287b-4b94-b1cb-a7d9f8fdea39 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 02:01:05 old-k8s-version-612261 crio[628]: time="2024-09-27 02:01:05.611505830Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=65c9bfc3-287b-4b94-b1cb-a7d9f8fdea39 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Sep27 01:40] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051380] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040023] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Sep27 01:41] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.490738] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.597277] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000013] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.637888] systemd-fstab-generator[555]: Ignoring "noauto" option for root device
	[  +0.070410] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.081325] systemd-fstab-generator[568]: Ignoring "noauto" option for root device
	[  +0.210782] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.144654] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.262711] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +6.839165] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.064025] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.828367] systemd-fstab-generator[1004]: Ignoring "noauto" option for root device
	[ +11.175171] kauditd_printk_skb: 46 callbacks suppressed
	[Sep27 01:45] systemd-fstab-generator[5075]: Ignoring "noauto" option for root device
	[Sep27 01:47] systemd-fstab-generator[5347]: Ignoring "noauto" option for root device
	[  +0.069319] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 02:01:05 up 20 min,  0 users,  load average: 0.00, 0.00, 0.03
	Linux old-k8s-version-612261 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Sep 27 02:01:01 old-k8s-version-612261 kubelet[6874]: net.(*sysDialer).dialSerial(0xc000900380, 0x4f7fe40, 0xc00095e1e0, 0xc0008bb490, 0x1, 0x1, 0x0, 0x0, 0x0, 0x0)
	Sep 27 02:01:01 old-k8s-version-612261 kubelet[6874]:         /usr/local/go/src/net/dial.go:548 +0x152
	Sep 27 02:01:01 old-k8s-version-612261 kubelet[6874]: net.(*Dialer).DialContext(0xc0001a6420, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc0008e0cf0, 0x24, 0x0, 0x0, 0x0, ...)
	Sep 27 02:01:01 old-k8s-version-612261 kubelet[6874]:         /usr/local/go/src/net/dial.go:425 +0x6e5
	Sep 27 02:01:01 old-k8s-version-612261 kubelet[6874]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc0003d0e60, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc0008e0cf0, 0x24, 0x60, 0x7f50d1985fe8, 0x118, ...)
	Sep 27 02:01:01 old-k8s-version-612261 kubelet[6874]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Sep 27 02:01:01 old-k8s-version-612261 kubelet[6874]: net/http.(*Transport).dial(0xc0009f6000, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc0008e0cf0, 0x24, 0x0, 0xc00095dc68, 0x7e4acc, ...)
	Sep 27 02:01:01 old-k8s-version-612261 kubelet[6874]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Sep 27 02:01:01 old-k8s-version-612261 kubelet[6874]: net/http.(*Transport).dialConn(0xc0009f6000, 0x4f7fe00, 0xc000120018, 0x0, 0xc000bd63c0, 0x5, 0xc0008e0cf0, 0x24, 0x0, 0xc000926480, ...)
	Sep 27 02:01:01 old-k8s-version-612261 kubelet[6874]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Sep 27 02:01:01 old-k8s-version-612261 kubelet[6874]: net/http.(*Transport).dialConnFor(0xc0009f6000, 0xc000726580)
	Sep 27 02:01:01 old-k8s-version-612261 kubelet[6874]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Sep 27 02:01:01 old-k8s-version-612261 kubelet[6874]: created by net/http.(*Transport).queueForDial
	Sep 27 02:01:01 old-k8s-version-612261 kubelet[6874]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Sep 27 02:01:01 old-k8s-version-612261 kubelet[6874]: E0927 02:01:01.019968    6874 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/kubelet.go:438: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dold-k8s-version-612261&limit=500&resourceVersion=0": dial tcp 192.168.72.129:8443: connect: connection refused
	Sep 27 02:01:01 old-k8s-version-612261 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Sep 27 02:01:01 old-k8s-version-612261 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Sep 27 02:01:01 old-k8s-version-612261 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 142.
	Sep 27 02:01:01 old-k8s-version-612261 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Sep 27 02:01:01 old-k8s-version-612261 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Sep 27 02:01:01 old-k8s-version-612261 kubelet[6883]: I0927 02:01:01.748498    6883 server.go:416] Version: v1.20.0
	Sep 27 02:01:01 old-k8s-version-612261 kubelet[6883]: I0927 02:01:01.748761    6883 server.go:837] Client rotation is on, will bootstrap in background
	Sep 27 02:01:01 old-k8s-version-612261 kubelet[6883]: I0927 02:01:01.750761    6883 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Sep 27 02:01:01 old-k8s-version-612261 kubelet[6883]: W0927 02:01:01.751705    6883 manager.go:159] Cannot detect current cgroup on cgroup v2
	Sep 27 02:01:01 old-k8s-version-612261 kubelet[6883]: I0927 02:01:01.751975    6883 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-612261 -n old-k8s-version-612261
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-612261 -n old-k8s-version-612261: exit status 2 (219.242475ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-612261" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (160.35s)
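
The kubeadm and minikube output captured above already names the follow-up checks for this failure: kubelet status, the kubelet journal, a crictl listing of the control-plane containers, and retrying with an explicit kubelet cgroup driver. A minimal shell sketch of those checks, assuming they are run on the affected node (for example via 'minikube ssh -p old-k8s-version-612261'), would be:

	# kubelet health and the most recent journal entries
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 50
	# list control-plane containers through the CRI-O socket named in the kubeadm output
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# then inspect a failing container: sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
	# minikube's own suggestion: restart the profile with an explicit cgroup driver
	minikube start -p old-k8s-version-612261 --extra-config=kubelet.cgroup-driver=systemd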

                                                
                                    

Test pass (250/317)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 40.77
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.1/json-events 18.4
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.13
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.58
22 TestOffline 90.46
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 136.38
31 TestAddons/serial/GCPAuth/Namespaces 0.15
35 TestAddons/parallel/InspektorGadget 10.81
38 TestAddons/parallel/CSI 62.94
39 TestAddons/parallel/Headlamp 23.83
40 TestAddons/parallel/CloudSpanner 5.53
41 TestAddons/parallel/LocalPath 55.07
42 TestAddons/parallel/NvidiaDevicePlugin 5.51
43 TestAddons/parallel/Yakd 10.69
44 TestAddons/StoppedEnableDisable 7.54
45 TestCertOptions 75.82
46 TestCertExpiration 393.86
48 TestForceSystemdFlag 55.8
49 TestForceSystemdEnv 92.06
51 TestKVMDriverInstallOrUpdate 5.07
55 TestErrorSpam/setup 42.2
56 TestErrorSpam/start 0.32
57 TestErrorSpam/status 0.72
58 TestErrorSpam/pause 1.54
59 TestErrorSpam/unpause 1.67
60 TestErrorSpam/stop 4.8
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 83.04
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 33.03
67 TestFunctional/serial/KubeContext 0.04
68 TestFunctional/serial/KubectlGetPods 0.07
71 TestFunctional/serial/CacheCmd/cache/add_remote 3.51
72 TestFunctional/serial/CacheCmd/cache/add_local 2.25
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
74 TestFunctional/serial/CacheCmd/cache/list 0.05
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.66
77 TestFunctional/serial/CacheCmd/cache/delete 0.09
78 TestFunctional/serial/MinikubeKubectlCmd 0.1
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
80 TestFunctional/serial/ExtraConfig 32.99
81 TestFunctional/serial/ComponentHealth 0.06
82 TestFunctional/serial/LogsCmd 1.5
83 TestFunctional/serial/LogsFileCmd 1.5
84 TestFunctional/serial/InvalidService 4.58
86 TestFunctional/parallel/ConfigCmd 0.3
87 TestFunctional/parallel/DashboardCmd 18.28
88 TestFunctional/parallel/DryRun 0.28
89 TestFunctional/parallel/InternationalLanguage 0.15
90 TestFunctional/parallel/StatusCmd 1.14
94 TestFunctional/parallel/ServiceCmdConnect 8.57
95 TestFunctional/parallel/AddonsCmd 0.11
96 TestFunctional/parallel/PersistentVolumeClaim 47.47
98 TestFunctional/parallel/SSHCmd 0.39
99 TestFunctional/parallel/CpCmd 1.3
100 TestFunctional/parallel/MySQL 26.71
101 TestFunctional/parallel/FileSync 0.25
102 TestFunctional/parallel/CertSync 1.61
106 TestFunctional/parallel/NodeLabels 0.06
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.45
110 TestFunctional/parallel/License 0.68
111 TestFunctional/parallel/ServiceCmd/DeployApp 11.18
112 TestFunctional/parallel/ProfileCmd/profile_not_create 0.34
113 TestFunctional/parallel/ProfileCmd/profile_list 0.32
114 TestFunctional/parallel/ProfileCmd/profile_json_output 0.46
115 TestFunctional/parallel/MountCmd/any-port 10.62
116 TestFunctional/parallel/ServiceCmd/List 0.47
117 TestFunctional/parallel/ServiceCmd/JSONOutput 0.45
118 TestFunctional/parallel/MountCmd/specific-port 1.79
119 TestFunctional/parallel/ServiceCmd/HTTPS 0.3
120 TestFunctional/parallel/ServiceCmd/Format 0.26
121 TestFunctional/parallel/ServiceCmd/URL 0.29
122 TestFunctional/parallel/MountCmd/VerifyCleanup 0.84
123 TestFunctional/parallel/Version/short 0.04
124 TestFunctional/parallel/Version/components 0.77
125 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
126 TestFunctional/parallel/ImageCommands/ImageListTable 0.48
127 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
128 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
129 TestFunctional/parallel/ImageCommands/ImageBuild 11.56
130 TestFunctional/parallel/ImageCommands/Setup 2.65
131 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.39
132 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.85
133 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.24
134 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
135 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
136 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
137 TestFunctional/parallel/ImageCommands/ImageSaveToFile 3
147 TestFunctional/parallel/ImageCommands/ImageRemove 0.49
148 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.79
149 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.56
150 TestFunctional/delete_echo-server_images 0.03
151 TestFunctional/delete_my-image_image 0.01
152 TestFunctional/delete_minikube_cached_images 0.01
156 TestMultiControlPlane/serial/StartCluster 197.07
157 TestMultiControlPlane/serial/DeployApp 7.31
158 TestMultiControlPlane/serial/PingHostFromPods 1.18
159 TestMultiControlPlane/serial/AddWorkerNode 56.46
160 TestMultiControlPlane/serial/NodeLabels 0.07
161 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.86
162 TestMultiControlPlane/serial/CopyFile 12.47
166 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 4.17
168 TestMultiControlPlane/serial/DeleteSecondaryNode 17.23
169 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.62
171 TestMultiControlPlane/serial/RestartCluster 343.31
172 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.62
173 TestMultiControlPlane/serial/AddSecondaryNode 79.62
174 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.85
178 TestJSONOutput/start/Command 81.46
179 TestJSONOutput/start/Audit 0
181 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/pause/Command 0.68
185 TestJSONOutput/pause/Audit 0
187 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/unpause/Command 0.61
191 TestJSONOutput/unpause/Audit 0
193 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/stop/Command 7.35
197 TestJSONOutput/stop/Audit 0
199 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
201 TestErrorJSONOutput 0.18
206 TestMainNoArgs 0.04
207 TestMinikubeProfile 87.55
210 TestMountStart/serial/StartWithMountFirst 32.11
211 TestMountStart/serial/VerifyMountFirst 0.4
212 TestMountStart/serial/StartWithMountSecond 28.84
213 TestMountStart/serial/VerifyMountSecond 0.35
214 TestMountStart/serial/DeleteFirst 0.72
215 TestMountStart/serial/VerifyMountPostDelete 0.35
216 TestMountStart/serial/Stop 1.26
217 TestMountStart/serial/RestartStopped 23.35
218 TestMountStart/serial/VerifyMountPostStop 0.36
221 TestMultiNode/serial/FreshStart2Nodes 115.3
222 TestMultiNode/serial/DeployApp2Nodes 6.38
223 TestMultiNode/serial/PingHostFrom2Pods 0.78
224 TestMultiNode/serial/AddNode 50.18
225 TestMultiNode/serial/MultiNodeLabels 0.06
226 TestMultiNode/serial/ProfileList 0.55
227 TestMultiNode/serial/CopyFile 6.9
228 TestMultiNode/serial/StopNode 2.26
229 TestMultiNode/serial/StartAfterStop 39.53
231 TestMultiNode/serial/DeleteNode 2.27
233 TestMultiNode/serial/RestartMultiNode 179.29
234 TestMultiNode/serial/ValidateNameConflict 44.18
241 TestScheduledStopUnix 118.23
245 TestRunningBinaryUpgrade 115.4
249 TestStoppedBinaryUpgrade/Setup 4.69
250 TestStoppedBinaryUpgrade/Upgrade 192.77
251 TestStoppedBinaryUpgrade/MinikubeLogs 0.93
260 TestPause/serial/Start 59.65
261 TestPause/serial/SecondStartNoReconfiguration 40.11
263 TestNoKubernetes/serial/StartNoK8sWithVersion 0.06
264 TestNoKubernetes/serial/StartWithK8s 48.41
265 TestPause/serial/Pause 0.92
266 TestPause/serial/VerifyStatus 0.28
267 TestPause/serial/Unpause 0.8
268 TestPause/serial/PauseAgain 1.02
269 TestPause/serial/DeletePaused 0.86
270 TestPause/serial/VerifyDeletedResources 0.6
278 TestNetworkPlugins/group/false 3.36
284 TestNoKubernetes/serial/StartWithStopK8s 33.99
286 TestStartStop/group/no-preload/serial/FirstStart 92.02
287 TestNoKubernetes/serial/Start 45.29
288 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
289 TestNoKubernetes/serial/ProfileList 2.85
290 TestNoKubernetes/serial/Stop 1.29
291 TestNoKubernetes/serial/StartNoArgs 21.78
292 TestStartStop/group/no-preload/serial/DeployApp 11.29
293 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.19
295 TestStartStop/group/embed-certs/serial/FirstStart 84.29
296 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.08
299 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 86.92
300 TestStartStop/group/embed-certs/serial/DeployApp 12.28
301 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.01
303 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.26
304 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.91
309 TestStartStop/group/no-preload/serial/SecondStart 650.26
311 TestStartStop/group/old-k8s-version/serial/Stop 6.29
312 TestStartStop/group/embed-certs/serial/SecondStart 520.19
313 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
316 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 542.46
326 TestStartStop/group/newest-cni/serial/FirstStart 47.79
327 TestNetworkPlugins/group/auto/Start 53.05
328 TestStartStop/group/newest-cni/serial/DeployApp 0
329 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.38
330 TestStartStop/group/newest-cni/serial/Stop 10.64
331 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
332 TestStartStop/group/newest-cni/serial/SecondStart 42.48
333 TestNetworkPlugins/group/auto/KubeletFlags 0.21
334 TestNetworkPlugins/group/auto/NetCatPod 12.27
335 TestNetworkPlugins/group/auto/DNS 16.17
336 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
337 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
338 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
339 TestStartStop/group/newest-cni/serial/Pause 2.37
340 TestNetworkPlugins/group/kindnet/Start 66.06
341 TestNetworkPlugins/group/auto/Localhost 0.14
342 TestNetworkPlugins/group/auto/HairPin 0.15
343 TestNetworkPlugins/group/calico/Start 81.28
344 TestNetworkPlugins/group/custom-flannel/Start 101.69
345 TestNetworkPlugins/group/enable-default-cni/Start 117.71
346 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
347 TestNetworkPlugins/group/kindnet/KubeletFlags 0.2
348 TestNetworkPlugins/group/kindnet/NetCatPod 10.2
349 TestNetworkPlugins/group/kindnet/DNS 0.2
350 TestNetworkPlugins/group/kindnet/Localhost 0.19
351 TestNetworkPlugins/group/kindnet/HairPin 0.21
352 TestNetworkPlugins/group/flannel/Start 80.55
353 TestNetworkPlugins/group/calico/ControllerPod 6.01
354 TestNetworkPlugins/group/calico/KubeletFlags 0.49
355 TestNetworkPlugins/group/calico/NetCatPod 11.66
356 TestNetworkPlugins/group/calico/DNS 0.17
357 TestNetworkPlugins/group/calico/Localhost 0.13
358 TestNetworkPlugins/group/calico/HairPin 0.13
359 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.22
360 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.25
361 TestNetworkPlugins/group/custom-flannel/DNS 0.17
362 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
363 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
364 TestNetworkPlugins/group/bridge/Start 59.39
365 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.21
366 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.26
367 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
368 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
369 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
370 TestNetworkPlugins/group/flannel/ControllerPod 6.01
371 TestNetworkPlugins/group/flannel/KubeletFlags 0.22
372 TestNetworkPlugins/group/flannel/NetCatPod 11.29
373 TestNetworkPlugins/group/flannel/DNS 0.14
374 TestNetworkPlugins/group/flannel/Localhost 0.14
375 TestNetworkPlugins/group/flannel/HairPin 0.14
376 TestNetworkPlugins/group/bridge/KubeletFlags 0.2
377 TestNetworkPlugins/group/bridge/NetCatPod 11.23
378 TestNetworkPlugins/group/bridge/DNS 0.15
379 TestNetworkPlugins/group/bridge/Localhost 0.13
380 TestNetworkPlugins/group/bridge/HairPin 0.13
x
+
TestDownloadOnly/v1.20.0/json-events (40.77s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-603097 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-603097 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (40.768992135s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (40.77s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0927 00:15:24.616981   22138 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0927 00:15:24.617082   22138 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-603097
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-603097: exit status 85 (57.342994ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-603097 | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC |          |
	|         | -p download-only-603097        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 00:14:43
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 00:14:43.884803   22149 out.go:345] Setting OutFile to fd 1 ...
	I0927 00:14:43.884932   22149 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:14:43.884942   22149 out.go:358] Setting ErrFile to fd 2...
	I0927 00:14:43.884946   22149 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:14:43.885110   22149 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-14935/.minikube/bin
	W0927 00:14:43.885228   22149 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19711-14935/.minikube/config/config.json: open /home/jenkins/minikube-integration/19711-14935/.minikube/config/config.json: no such file or directory
	I0927 00:14:43.885779   22149 out.go:352] Setting JSON to true
	I0927 00:14:43.886664   22149 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3429,"bootTime":1727392655,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 00:14:43.886757   22149 start.go:139] virtualization: kvm guest
	I0927 00:14:43.889174   22149 out.go:97] [download-only-603097] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0927 00:14:43.889267   22149 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball: no such file or directory
	I0927 00:14:43.889299   22149 notify.go:220] Checking for updates...
	I0927 00:14:43.890706   22149 out.go:169] MINIKUBE_LOCATION=19711
	I0927 00:14:43.891919   22149 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 00:14:43.893126   22149 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 00:14:43.894331   22149 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-14935/.minikube
	I0927 00:14:43.895528   22149 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0927 00:14:43.897993   22149 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0927 00:14:43.898165   22149 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 00:14:43.999009   22149 out.go:97] Using the kvm2 driver based on user configuration
	I0927 00:14:43.999044   22149 start.go:297] selected driver: kvm2
	I0927 00:14:43.999051   22149 start.go:901] validating driver "kvm2" against <nil>
	I0927 00:14:43.999409   22149 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 00:14:43.999552   22149 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19711-14935/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0927 00:14:44.013563   22149 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0927 00:14:44.013601   22149 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 00:14:44.014144   22149 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0927 00:14:44.014330   22149 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0927 00:14:44.014360   22149 cni.go:84] Creating CNI manager for ""
	I0927 00:14:44.014425   22149 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 00:14:44.014435   22149 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0927 00:14:44.014504   22149 start.go:340] cluster config:
	{Name:download-only-603097 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-603097 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 00:14:44.014693   22149 iso.go:125] acquiring lock: {Name:mkc202a14fbe20838e31e7efc444c4f65351f9ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 00:14:44.016466   22149 out.go:97] Downloading VM boot image ...
	I0927 00:14:44.016498   22149 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19711-14935/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0927 00:14:58.808513   22149 out.go:97] Starting "download-only-603097" primary control-plane node in "download-only-603097" cluster
	I0927 00:14:58.808541   22149 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0927 00:14:58.918369   22149 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0927 00:14:58.918402   22149 cache.go:56] Caching tarball of preloaded images
	I0927 00:14:58.918563   22149 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0927 00:14:58.920395   22149 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0927 00:14:58.920414   22149 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0927 00:14:59.034505   22149 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0927 00:15:21.999353   22149 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0927 00:15:21.999457   22149 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0927 00:15:23.018669   22149 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0927 00:15:23.018999   22149 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/download-only-603097/config.json ...
	I0927 00:15:23.019025   22149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/download-only-603097/config.json: {Name:mk9aecd01b45a33b8b0d963644823c64efca47f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:15:23.019180   22149 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0927 00:15:23.019406   22149 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19711-14935/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-603097 host does not exist
	  To start a cluster, run: "minikube start -p download-only-603097"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
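
The "Last Start" log above records what a --download-only run fetches: the VM boot ISO and the v1.20.0 cri-o preload tarball, each verified against a published checksum and cached under .minikube/cache before any cluster is created. A small shell sketch that re-checks the cached preload by hand, using the URL and md5 printed in the log and assuming the default $HOME/.minikube cache location rather than the Jenkins workspace path, would be:

	PRELOAD=preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	CACHE=$HOME/.minikube/cache/preloaded-tarball
	mkdir -p "$CACHE"
	# fetch the tarball (minikube skips this step when a local preload is already present)
	curl -fLo "$CACHE/$PRELOAD" "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/$PRELOAD"
	# verify it against the md5 checksum shown in the log
	echo "f93b07cde9c3289306cbaeb7a1803c19  $CACHE/$PRELOAD" | md5sum -c -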

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-603097
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/json-events (18.4s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-528649 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-528649 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (18.399293514s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (18.40s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0927 00:15:43.334230   22138 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
I0927 00:15:43.334281   22138 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-528649
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-528649: exit status 85 (55.572808ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-603097 | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC |                     |
	|         | -p download-only-603097        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 27 Sep 24 00:15 UTC | 27 Sep 24 00:15 UTC |
	| delete  | -p download-only-603097        | download-only-603097 | jenkins | v1.34.0 | 27 Sep 24 00:15 UTC | 27 Sep 24 00:15 UTC |
	| start   | -o=json --download-only        | download-only-528649 | jenkins | v1.34.0 | 27 Sep 24 00:15 UTC |                     |
	|         | -p download-only-528649        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 00:15:24
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 00:15:24.971423   22457 out.go:345] Setting OutFile to fd 1 ...
	I0927 00:15:24.971542   22457 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:15:24.971553   22457 out.go:358] Setting ErrFile to fd 2...
	I0927 00:15:24.971559   22457 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:15:24.971793   22457 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-14935/.minikube/bin
	I0927 00:15:24.972348   22457 out.go:352] Setting JSON to true
	I0927 00:15:24.973154   22457 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3470,"bootTime":1727392655,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 00:15:24.973249   22457 start.go:139] virtualization: kvm guest
	I0927 00:15:24.975855   22457 out.go:97] [download-only-528649] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0927 00:15:24.975956   22457 notify.go:220] Checking for updates...
	I0927 00:15:24.977354   22457 out.go:169] MINIKUBE_LOCATION=19711
	I0927 00:15:24.978905   22457 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 00:15:24.980353   22457 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 00:15:24.981637   22457 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-14935/.minikube
	I0927 00:15:24.983036   22457 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0927 00:15:24.985665   22457 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0927 00:15:24.985881   22457 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 00:15:25.017470   22457 out.go:97] Using the kvm2 driver based on user configuration
	I0927 00:15:25.017504   22457 start.go:297] selected driver: kvm2
	I0927 00:15:25.017510   22457 start.go:901] validating driver "kvm2" against <nil>
	I0927 00:15:25.017837   22457 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 00:15:25.017926   22457 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19711-14935/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0927 00:15:25.033049   22457 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0927 00:15:25.033089   22457 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 00:15:25.033610   22457 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0927 00:15:25.033755   22457 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0927 00:15:25.033780   22457 cni.go:84] Creating CNI manager for ""
	I0927 00:15:25.033822   22457 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 00:15:25.033832   22457 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0927 00:15:25.033883   22457 start.go:340] cluster config:
	{Name:download-only-528649 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-528649 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 00:15:25.033993   22457 iso.go:125] acquiring lock: {Name:mkc202a14fbe20838e31e7efc444c4f65351f9ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 00:15:25.035906   22457 out.go:97] Starting "download-only-528649" primary control-plane node in "download-only-528649" cluster
	I0927 00:15:25.035931   22457 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 00:15:25.629787   22457 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0927 00:15:25.629818   22457 cache.go:56] Caching tarball of preloaded images
	I0927 00:15:25.629970   22457 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 00:15:25.631828   22457 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0927 00:15:25.631852   22457 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 ...
	I0927 00:15:25.741644   22457 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:aa79045e4550b9510ee496fee0d50abb -> /home/jenkins/minikube-integration/19711-14935/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-528649 host does not exist
	  To start a cluster, run: "minikube start -p download-only-528649"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)
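
Note: the "exit status 85" above is the expected result for this sub-test, not a failure of the run. The profile was created with --download-only, so no host ever exists for "minikube logs" to talk to (see "The control-plane node download-only-528649 host does not exist" in the stdout). A minimal sketch of reproducing the same behaviour by hand, using the flags recorded in the command table at the top of this log:

    # download-only start: fetches the preload and other assets but creates no host
    out/minikube-linux-amd64 start -o=json --download-only -p download-only-528649 \
        --force --alsologtostderr --kubernetes-version=v1.31.1 \
        --container-runtime=crio --driver=kvm2
    # with no host present, the logs command exits with status 85
    out/minikube-linux-amd64 logs -p download-only-528649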

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-528649
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestBinaryMirror (0.58s)

                                                
                                                
=== RUN   TestBinaryMirror
I0927 00:15:43.872836   22138 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-381196 --alsologtostderr --binary-mirror http://127.0.0.1:32921 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-381196" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-381196
--- PASS: TestBinaryMirror (0.58s)

                                                
                                    
TestOffline (90.46s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-708374 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-708374 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m29.644334871s)
helpers_test.go:175: Cleaning up "offline-crio-708374" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-708374
--- PASS: TestOffline (90.46s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-364775
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-364775: exit status 85 (53.793053ms)

                                                
                                                
-- stdout --
	* Profile "addons-364775" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-364775"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-364775
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-364775: exit status 85 (53.738537ms)

                                                
                                                
-- stdout --
	* Profile "addons-364775" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-364775"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (136.38s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-364775 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-364775 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns: (2m16.383890243s)
--- PASS: TestAddons/Setup (136.38s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-364775 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-364775 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                    
TestAddons/parallel/InspektorGadget (10.81s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-45jdz" [c6dbdac6-a36b-4afb-845c-62edd35a45d5] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004635704s
addons_test.go:789: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-364775
addons_test.go:789: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-364775: (5.799323305s)
--- PASS: TestAddons/parallel/InspektorGadget (10.81s)

                                                
                                    
TestAddons/parallel/CSI (62.94s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:505: csi-hostpath-driver pods stabilized in 7.104609ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-364775 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-364775 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-364775 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-364775 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-364775 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-364775 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-364775 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-364775 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-364775 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-364775 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-364775 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-364775 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-364775 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [0b4e3791-3900-456c-ac34-e5ed6d031d8f] Pending
helpers_test.go:344: "task-pv-pod" [0b4e3791-3900-456c-ac34-e5ed6d031d8f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [0b4e3791-3900-456c-ac34-e5ed6d031d8f] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.005210082s
addons_test.go:528: (dbg) Run:  kubectl --context addons-364775 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-364775 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-364775 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-364775 delete pod task-pv-pod
addons_test.go:538: (dbg) Done: kubectl --context addons-364775 delete pod task-pv-pod: (1.175946596s)
addons_test.go:544: (dbg) Run:  kubectl --context addons-364775 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-364775 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-364775 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-364775 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-364775 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-364775 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-364775 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-364775 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-364775 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-364775 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-364775 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-364775 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-364775 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-364775 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-364775 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-364775 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-364775 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-364775 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-364775 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-364775 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-364775 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-364775 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [fc0a1bc5-0771-4e5d-9175-9c3fe130a68a] Pending
helpers_test.go:344: "task-pv-pod-restore" [fc0a1bc5-0771-4e5d-9175-9c3fe130a68a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [fc0a1bc5-0771-4e5d-9175-9c3fe130a68a] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004309681s
addons_test.go:570: (dbg) Run:  kubectl --context addons-364775 delete pod task-pv-pod-restore
addons_test.go:574: (dbg) Run:  kubectl --context addons-364775 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-364775 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-amd64 -p addons-364775 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-amd64 -p addons-364775 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.753566044s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-amd64 -p addons-364775 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (62.94s)
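
For reference, the provision → snapshot → restore round trip exercised above, condensed to the commands the test runs (a sketch; between steps the test also waits for each PVC to become Bound and each pod to reach Running, and it finishes by deleting the restored pod, PVC and snapshot; testdata/ paths are relative to the minikube integration-test directory):

    kubectl --context addons-364775 create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl --context addons-364775 create -f testdata/csi-hostpath-driver/pv-pod.yaml
    kubectl --context addons-364775 create -f testdata/csi-hostpath-driver/snapshot.yaml
    kubectl --context addons-364775 delete pod task-pv-pod
    kubectl --context addons-364775 delete pvc hpvc
    kubectl --context addons-364775 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
    kubectl --context addons-364775 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
    out/minikube-linux-amd64 -p addons-364775 addons disable csi-hostpath-driver --alsologtostderr -v=1
    out/minikube-linux-amd64 -p addons-364775 addons disable volumesnapshots --alsologtostderr -v=1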

                                                
                                    
TestAddons/parallel/Headlamp (23.83s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-364775 --alsologtostderr -v=1
I0927 00:26:03.792912   22138 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-dkbd8" [fc44977d-fee1-4c61-897d-79f5bf6d0f4d] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-dkbd8" [fc44977d-fee1-4c61-897d-79f5bf6d0f4d] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 17.004788322s
addons_test.go:777: (dbg) Run:  out/minikube-linux-amd64 -p addons-364775 addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-linux-amd64 -p addons-364775 addons disable headlamp --alsologtostderr -v=1: (5.897211695s)
--- PASS: TestAddons/parallel/Headlamp (23.83s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.53s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-wtrdk" [89e689ae-58ff-4ed7-98ad-e9bc0f622024] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004460687s
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-364775
--- PASS: TestAddons/parallel/CloudSpanner (5.53s)

                                                
                                    
TestAddons/parallel/LocalPath (55.07s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-364775 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-364775 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-364775 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-364775 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-364775 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-364775 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-364775 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-364775 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-364775 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [19934d44-6957-4e22-a4ed-554922813c1b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [19934d44-6957-4e22-a4ed-554922813c1b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [19934d44-6957-4e22-a4ed-554922813c1b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.005985065s
addons_test.go:938: (dbg) Run:  kubectl --context addons-364775 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-linux-amd64 -p addons-364775 ssh "cat /opt/local-path-provisioner/pvc-eaf13455-05db-4681-afdd-103662b6f350_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-364775 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-364775 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-linux-amd64 -p addons-364775 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:967: (dbg) Done: out/minikube-linux-amd64 -p addons-364775 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.309794174s)
--- PASS: TestAddons/parallel/LocalPath (55.07s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.51s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-gvjn8" [2de30fac-4d6c-4922-b784-e9801df8f16a] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005052016s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-364775
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.51s)

                                                
                                    
TestAddons/parallel/Yakd (10.69s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-8mt2f" [6ed7f205-1ec1-4e07-8c5a-8375a7991a68] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.00462825s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-amd64 -p addons-364775 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-amd64 -p addons-364775 addons disable yakd --alsologtostderr -v=1: (5.679310415s)
--- PASS: TestAddons/parallel/Yakd (10.69s)

                                                
                                    
TestAddons/StoppedEnableDisable (7.54s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-364775
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-364775: (7.280133419s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-364775
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-364775
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-364775
--- PASS: TestAddons/StoppedEnableDisable (7.54s)

                                                
                                    
TestCertOptions (75.82s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-538570 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
E0927 01:28:01.244709   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-538570 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m14.39554128s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-538570 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-538570 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-538570 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-538570" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-538570
--- PASS: TestCertOptions (75.82s)
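
The certificate check above reduces to two commands (flags copied from this run); the openssl call is where the test inspects the generated API server certificate, which should carry the extra --apiserver-names/--apiserver-ips entries and the non-default port requested at start:

    out/minikube-linux-amd64 start -p cert-options-538570 --memory=2048 \
        --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
        --apiserver-names=localhost --apiserver-names=www.google.com \
        --apiserver-port=8555 --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p cert-options-538570 ssh \
        "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"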

                                                
                                    
TestCertExpiration (393.86s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-595331 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-595331 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m12.814027106s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-595331 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-595331 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (2m20.049605632s)
helpers_test.go:175: Cleaning up "cert-expiration-595331" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-595331
--- PASS: TestCertExpiration (393.86s)

                                                
                                    
TestForceSystemdFlag (55.8s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-440297 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-440297 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (54.819993571s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-440297 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-440297" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-440297
--- PASS: TestForceSystemdFlag (55.80s)

                                                
                                    
TestForceSystemdEnv (92.06s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-377499 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-377499 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m31.109495136s)
helpers_test.go:175: Cleaning up "force-systemd-env-377499" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-377499
--- PASS: TestForceSystemdEnv (92.06s)

                                                
                                    
TestKVMDriverInstallOrUpdate (5.07s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0927 01:25:26.675135   22138 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0927 01:25:26.675273   22138 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0927 01:25:26.709568   22138 install.go:62] docker-machine-driver-kvm2: exit status 1
W0927 01:25:26.709881   22138 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0927 01:25:26.709948   22138 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2798563013/001/docker-machine-driver-kvm2
I0927 01:25:26.959605   22138 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2798563013/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x466f640 0x466f640 0x466f640 0x466f640 0x466f640 0x466f640 0x466f640] Decompressors:map[bz2:0xc000897e20 gz:0xc000897e28 tar:0xc000897dd0 tar.bz2:0xc000897de0 tar.gz:0xc000897df0 tar.xz:0xc000897e00 tar.zst:0xc000897e10 tbz2:0xc000897de0 tgz:0xc000897df0 txz:0xc000897e00 tzst:0xc000897e10 xz:0xc000897e30 zip:0xc000897e40 zst:0xc000897e38] Getters:map[file:0xc001c40780 http:0xc0009ee0f0 https:0xc0009ee140] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0927 01:25:26.959649   22138 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2798563013/001/docker-machine-driver-kvm2
I0927 01:25:29.789791   22138 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0927 01:25:29.823106   22138 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0927 01:25:29.856952   22138 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0927 01:25:29.856991   22138 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0927 01:25:29.857057   22138 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0927 01:25:29.857082   22138 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2798563013/002/docker-machine-driver-kvm2
I0927 01:25:29.915192   22138 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2798563013/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x466f640 0x466f640 0x466f640 0x466f640 0x466f640 0x466f640 0x466f640] Decompressors:map[bz2:0xc000897e20 gz:0xc000897e28 tar:0xc000897dd0 tar.bz2:0xc000897de0 tar.gz:0xc000897df0 tar.xz:0xc000897e00 tar.zst:0xc000897e10 tbz2:0xc000897de0 tgz:0xc000897df0 txz:0xc000897e00 tzst:0xc000897e10 xz:0xc000897e30 zip:0xc000897e40 zst:0xc000897e38] Getters:map[file:0xc001875150 http:0xc00185dea0 https:0xc00185def0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0927 01:25:29.915229   22138 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2798563013/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (5.07s)
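
The two warning/download sequences above show the fallback the driver installer takes in this run: the arch-suffixed release asset (docker-machine-driver-kvm2-amd64) is tried first, and when its checksum file comes back 404 the download is retried against the unsuffixed ("common") asset name. A rough shell illustration of that behaviour (an approximation for clarity, not minikube's actual code; checksum verification omitted):

    base=https://github.com/kubernetes/minikube/releases/download/v1.3.0
    # prefer the arch-specific asset; fall back to the common name when its checksum file is missing (404)
    if curl -fsSLO "$base/docker-machine-driver-kvm2-amd64.sha256"; then
        curl -fsSLO "$base/docker-machine-driver-kvm2-amd64"
    else
        curl -fsSLO "$base/docker-machine-driver-kvm2.sha256"
        curl -fsSLO "$base/docker-machine-driver-kvm2"
    fi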

                                                
                                    
TestErrorSpam/setup (42.2s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-148444 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-148444 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-148444 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-148444 --driver=kvm2  --container-runtime=crio: (42.199796974s)
--- PASS: TestErrorSpam/setup (42.20s)

                                                
                                    
TestErrorSpam/start (0.32s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-148444 --log_dir /tmp/nospam-148444 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-148444 --log_dir /tmp/nospam-148444 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-148444 --log_dir /tmp/nospam-148444 start --dry-run
--- PASS: TestErrorSpam/start (0.32s)

                                                
                                    
TestErrorSpam/status (0.72s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-148444 --log_dir /tmp/nospam-148444 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-148444 --log_dir /tmp/nospam-148444 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-148444 --log_dir /tmp/nospam-148444 status
--- PASS: TestErrorSpam/status (0.72s)

                                                
                                    
TestErrorSpam/pause (1.54s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-148444 --log_dir /tmp/nospam-148444 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-148444 --log_dir /tmp/nospam-148444 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-148444 --log_dir /tmp/nospam-148444 pause
--- PASS: TestErrorSpam/pause (1.54s)

                                                
                                    
TestErrorSpam/unpause (1.67s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-148444 --log_dir /tmp/nospam-148444 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-148444 --log_dir /tmp/nospam-148444 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-148444 --log_dir /tmp/nospam-148444 unpause
--- PASS: TestErrorSpam/unpause (1.67s)

                                                
                                    
TestErrorSpam/stop (4.8s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-148444 --log_dir /tmp/nospam-148444 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-148444 --log_dir /tmp/nospam-148444 stop: (2.312120521s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-148444 --log_dir /tmp/nospam-148444 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-148444 --log_dir /tmp/nospam-148444 stop: (1.201589724s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-148444 --log_dir /tmp/nospam-148444 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-148444 --log_dir /tmp/nospam-148444 stop: (1.284094074s)
--- PASS: TestErrorSpam/stop (4.80s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19711-14935/.minikube/files/etc/test/nested/copy/22138/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (83.04s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-774677 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0927 00:33:01.244688   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:33:01.251075   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:33:01.262446   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:33:01.283837   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:33:01.325342   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:33:01.406755   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:33:01.568332   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:33:01.890038   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:33:02.532115   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:33:03.813724   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:33:06.375742   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:33:11.497933   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:33:21.740167   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:33:42.221575   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-774677 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m23.042896916s)
--- PASS: TestFunctional/serial/StartWithProxy (83.04s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (33.03s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0927 00:33:48.677727   22138 config.go:182] Loaded profile config "functional-774677": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-774677 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-774677 --alsologtostderr -v=8: (33.027127595s)
functional_test.go:663: soft start took 33.027855166s for "functional-774677" cluster.
I0927 00:34:21.705226   22138 config.go:182] Loaded profile config "functional-774677": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (33.03s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-774677 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.51s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-774677 cache add registry.k8s.io/pause:3.1: (1.147565548s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 cache add registry.k8s.io/pause:3.3
E0927 00:34:23.184245   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-774677 cache add registry.k8s.io/pause:3.3: (1.230254534s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-774677 cache add registry.k8s.io/pause:latest: (1.134663631s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.51s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.25s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-774677 /tmp/TestFunctionalserialCacheCmdcacheadd_local3539935790/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 cache add minikube-local-cache-test:functional-774677
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-774677 cache add minikube-local-cache-test:functional-774677: (1.928305577s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 cache delete minikube-local-cache-test:functional-774677
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-774677
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.25s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.66s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-774677 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (204.883416ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.66s)
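The "no such image" failure above is the expected intermediate state: the test removes the image from the node, confirms it is gone, then restores it from the cache. The same cycle by hand, sketched with the image from the log:

  minikube -p functional-774677 ssh sudo crictl rmi registry.k8s.io/pause:latest
  minikube -p functional-774677 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # expected to fail: image absent
  minikube -p functional-774677 cache reload
  minikube -p functional-774677 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again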

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 kubectl -- --context functional-774677 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-774677 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (32.99s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-774677 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-774677 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.994601646s)
functional_test.go:761: restart took 32.994697206s for "functional-774677" cluster.
I0927 00:35:02.838936   22138 config.go:182] Loaded profile config "functional-774677": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (32.99s)
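The restart performed here is an ordinary start against the existing profile, carrying an apiserver override and waiting for all components; a minimal equivalent of the command from the log is:

  # Pass an admission-plugin override through to the apiserver and wait for full readiness.
  minikube start -p functional-774677 \
    --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
    --wait=all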

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-774677 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.5s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-774677 logs: (1.503101276s)
--- PASS: TestFunctional/serial/LogsCmd (1.50s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.5s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 logs --file /tmp/TestFunctionalserialLogsFileCmd500650947/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-774677 logs --file /tmp/TestFunctionalserialLogsFileCmd500650947/001/logs.txt: (1.500313587s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.50s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.58s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-774677 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-774677
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-774677: exit status 115 (269.177905ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.193:30108 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-774677 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-774677 delete -f testdata/invalidsvc.yaml: (1.110765261s)
--- PASS: TestFunctional/serial/InvalidService (4.58s)
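In plain terms, the test checks that a Service with no running backing pod makes `minikube service` fail with SVC_UNREACHABLE (exit status 115) instead of printing a dead URL. A sketch, assuming the same invalidsvc.yaml manifest from the test's testdata directory:

  kubectl --context functional-774677 apply -f testdata/invalidsvc.yaml
  minikube service invalid-svc -p functional-774677   # expected: exit status 115, SVC_UNREACHABLE
  kubectl --context functional-774677 delete -f testdata/invalidsvc.yaml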

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-774677 config get cpus: exit status 14 (53.325578ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-774677 config get cpus: exit status 14 (43.075659ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.30s)
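The exit-status-14 results above are the expected behavior: `config get` exits 14 when the key is unset. The full set/get/unset cycle the test runs, condensed:

  minikube -p functional-774677 config unset cpus
  minikube -p functional-774677 config get cpus      # exit 14: key not found
  minikube -p functional-774677 config set cpus 2
  minikube -p functional-774677 config get cpus      # prints 2
  minikube -p functional-774677 config unset cpus
  minikube -p functional-774677 config get cpus      # exit 14 again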

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (18.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-774677 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-774677 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 31956: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (18.28s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-774677 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-774677 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (141.436429ms)

                                                
                                                
-- stdout --
	* [functional-774677] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19711
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19711-14935/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-14935/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0927 00:35:13.236650   31858 out.go:345] Setting OutFile to fd 1 ...
	I0927 00:35:13.237055   31858 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:35:13.237063   31858 out.go:358] Setting ErrFile to fd 2...
	I0927 00:35:13.237071   31858 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:35:13.237589   31858 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-14935/.minikube/bin
	I0927 00:35:13.238274   31858 out.go:352] Setting JSON to false
	I0927 00:35:13.239268   31858 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4658,"bootTime":1727392655,"procs":231,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 00:35:13.239494   31858 start.go:139] virtualization: kvm guest
	I0927 00:35:13.241864   31858 out.go:177] * [functional-774677] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0927 00:35:13.243649   31858 notify.go:220] Checking for updates...
	I0927 00:35:13.243685   31858 out.go:177]   - MINIKUBE_LOCATION=19711
	I0927 00:35:13.245066   31858 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 00:35:13.246409   31858 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 00:35:13.247785   31858 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-14935/.minikube
	I0927 00:35:13.249118   31858 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0927 00:35:13.250304   31858 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 00:35:13.252081   31858 config.go:182] Loaded profile config "functional-774677": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 00:35:13.252493   31858 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:35:13.252544   31858 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:35:13.272916   31858 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40221
	I0927 00:35:13.273650   31858 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:35:13.274336   31858 main.go:141] libmachine: Using API Version  1
	I0927 00:35:13.274353   31858 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:35:13.274788   31858 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:35:13.274961   31858 main.go:141] libmachine: (functional-774677) Calling .DriverName
	I0927 00:35:13.275192   31858 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 00:35:13.275591   31858 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:35:13.275625   31858 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:35:13.290291   31858 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33137
	I0927 00:35:13.290670   31858 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:35:13.291085   31858 main.go:141] libmachine: Using API Version  1
	I0927 00:35:13.291101   31858 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:35:13.291394   31858 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:35:13.291569   31858 main.go:141] libmachine: (functional-774677) Calling .DriverName
	I0927 00:35:13.326151   31858 out.go:177] * Using the kvm2 driver based on existing profile
	I0927 00:35:13.328198   31858 start.go:297] selected driver: kvm2
	I0927 00:35:13.328218   31858 start.go:901] validating driver "kvm2" against &{Name:functional-774677 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:functional-774677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.193 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 00:35:13.328367   31858 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 00:35:13.330924   31858 out.go:201] 
	W0927 00:35:13.332454   31858 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0927 00:35:13.333939   31858 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-774677 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.28s)
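The non-zero exit is the point of the test: even with --dry-run, minikube validates the requested resources, and 250MB is below the 1800MB minimum, so it exits 23 (RSRC_INSUFFICIENT_REQ_MEMORY) without touching the cluster. Reproduced minimally with the commands from the log:

  # Fails validation (exit 23); no cluster changes are made.
  minikube start -p functional-774677 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio
  # Succeeds: dry-run against the existing profile with its current settings.
  minikube start -p functional-774677 --dry-run --driver=kvm2 --container-runtime=crio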

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-774677 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-774677 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (146.059845ms)

                                                
                                                
-- stdout --
	* [functional-774677] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19711
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19711-14935/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-14935/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0927 00:35:13.105344   31814 out.go:345] Setting OutFile to fd 1 ...
	I0927 00:35:13.105554   31814 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:35:13.105569   31814 out.go:358] Setting ErrFile to fd 2...
	I0927 00:35:13.105576   31814 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:35:13.105973   31814 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-14935/.minikube/bin
	I0927 00:35:13.106709   31814 out.go:352] Setting JSON to false
	I0927 00:35:13.107989   31814 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4658,"bootTime":1727392655,"procs":229,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 00:35:13.108113   31814 start.go:139] virtualization: kvm guest
	I0927 00:35:13.110190   31814 out.go:177] * [functional-774677] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0927 00:35:13.111538   31814 notify.go:220] Checking for updates...
	I0927 00:35:13.111572   31814 out.go:177]   - MINIKUBE_LOCATION=19711
	I0927 00:35:13.113061   31814 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 00:35:13.114527   31814 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 00:35:13.115914   31814 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-14935/.minikube
	I0927 00:35:13.117313   31814 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0927 00:35:13.118616   31814 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 00:35:13.120269   31814 config.go:182] Loaded profile config "functional-774677": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 00:35:13.120708   31814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:35:13.120762   31814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:35:13.135941   31814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45217
	I0927 00:35:13.136375   31814 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:35:13.137033   31814 main.go:141] libmachine: Using API Version  1
	I0927 00:35:13.137059   31814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:35:13.137445   31814 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:35:13.137626   31814 main.go:141] libmachine: (functional-774677) Calling .DriverName
	I0927 00:35:13.137851   31814 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 00:35:13.138186   31814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 00:35:13.138229   31814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:35:13.153456   31814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40517
	I0927 00:35:13.153953   31814 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:35:13.154443   31814 main.go:141] libmachine: Using API Version  1
	I0927 00:35:13.154467   31814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:35:13.154818   31814 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:35:13.154984   31814 main.go:141] libmachine: (functional-774677) Calling .DriverName
	I0927 00:35:13.186866   31814 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0927 00:35:13.188222   31814 start.go:297] selected driver: kvm2
	I0927 00:35:13.188242   31814 start.go:901] validating driver "kvm2" against &{Name:functional-774677 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:functional-774677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.193 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 00:35:13.188348   31814 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 00:35:13.190422   31814 out.go:201] 
	W0927 00:35:13.191839   31814 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0927 00:35:13.193021   31814 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)
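The French output above comes from the same undersized dry-run, run under a French locale so the RSRC_INSUFFICIENT_REQ_MEMORY message is translated. Which environment variable the test sets is not shown in the log; using LC_ALL as an assumption, a rough equivalent is:

  LC_ALL=fr_FR.UTF-8 minikube start -p functional-774677 --dry-run --memory 250MB \
    --driver=kvm2 --container-runtime=crio   # exits 23 with the localized message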

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.14s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (8.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-774677 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-774677 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-k94kq" [d0d20834-1b44-4915-bbb7-a03df3c5aa56] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-k94kq" [d0d20834-1b44-4915-bbb7-a03df3c5aa56] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.004896238s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.193:30708
functional_test.go:1675: http://192.168.39.193:30708: success! body:
Hostname: hello-node-connect-67bdd5bbb4-k94kq

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.193:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.193:30708
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.57s)
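End to end, the test deploys an echoserver, exposes it as a NodePort, resolves the URL through minikube, and fetches it; the body printed above is the echoserver's standard reply. A condensed sketch (curl stands in for the test's own HTTP client):

  kubectl --context functional-774677 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
  kubectl --context functional-774677 expose deployment hello-node-connect --type=NodePort --port=8080
  URL=$(minikube -p functional-774677 service hello-node-connect --url)
  curl -s "$URL"   # prints the Hostname / Request Information block shown above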

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (47.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [28914621-6fce-49f3-88c9-1341a5290170] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003862391s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-774677 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-774677 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-774677 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-774677 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [9bb3d485-c9b2-407f-afd2-4b82de62db5e] Pending
helpers_test.go:344: "sp-pod" [9bb3d485-c9b2-407f-afd2-4b82de62db5e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [9bb3d485-c9b2-407f-afd2-4b82de62db5e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 27.003608724s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-774677 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-774677 delete -f testdata/storage-provisioner/pod.yaml
E0927 00:35:45.106399   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-774677 delete -f testdata/storage-provisioner/pod.yaml: (5.692956451s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-774677 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ef85d03a-0a72-47d7-92d0-00cba3fbd499] Pending
helpers_test.go:344: "sp-pod" [ef85d03a-0a72-47d7-92d0-00cba3fbd499] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ef85d03a-0a72-47d7-92d0-00cba3fbd499] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.004796844s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-774677 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (47.47s)
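The persistence check boils down to: create a PVC and a pod that mounts it, write a file, delete and recreate the pod, and confirm the file survived. With the manifests from the test's testdata directory:

  kubectl --context functional-774677 apply -f testdata/storage-provisioner/pvc.yaml
  kubectl --context functional-774677 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-774677 exec sp-pod -- touch /tmp/mount/foo
  kubectl --context functional-774677 delete -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-774677 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-774677 exec sp-pod -- ls /tmp/mount   # foo survives the pod restart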

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 ssh -n functional-774677 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 cp functional-774677:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4034848732/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 ssh -n functional-774677 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 ssh -n functional-774677 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.30s)
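`minikube cp` copies in both directions; the three invocations above amount to the following (the local destination path in the second command is a placeholder, the test uses a temp directory):

  # host -> node, node -> host, and host -> an arbitrary node path
  minikube -p functional-774677 cp testdata/cp-test.txt /home/docker/cp-test.txt
  minikube -p functional-774677 cp functional-774677:/home/docker/cp-test.txt ./cp-test.txt
  minikube -p functional-774677 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
  minikube -p functional-774677 ssh -n functional-774677 "sudo cat /home/docker/cp-test.txt"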

                                                
                                    
x
+
TestFunctional/parallel/MySQL (26.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-774677 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-rg2j7" [b9024cf9-8573-4d0f-974d-644b7ed667dc] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-rg2j7" [b9024cf9-8573-4d0f-974d-644b7ed667dc] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 23.004738654s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-774677 exec mysql-6cdb49bbb-rg2j7 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-774677 exec mysql-6cdb49bbb-rg2j7 -- mysql -ppassword -e "show databases;": exit status 1 (146.543913ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0927 00:35:56.132720   22138 retry.go:31] will retry after 1.199251094s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-774677 exec mysql-6cdb49bbb-rg2j7 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-774677 exec mysql-6cdb49bbb-rg2j7 -- mysql -ppassword -e "show databases;": exit status 1 (120.814665ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0927 00:35:57.453194   22138 retry.go:31] will retry after 1.888303273s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-774677 exec mysql-6cdb49bbb-rg2j7 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (26.71s)
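The two exit-status-1 attempts above are benign: mysqld is still initializing inside the container, so the test retries until the query succeeds. The same check by hand (pod name taken from the log) looks like:

  kubectl --context functional-774677 get pods -l app=mysql
  # Retry until mysqld accepts connections on its socket.
  kubectl --context functional-774677 exec mysql-6cdb49bbb-rg2j7 -- mysql -ppassword -e "show databases;"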

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/22138/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 ssh "sudo cat /etc/test/nested/copy/22138/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/22138.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 ssh "sudo cat /etc/ssl/certs/22138.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/22138.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 ssh "sudo cat /usr/share/ca-certificates/22138.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/221382.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 ssh "sudo cat /etc/ssl/certs/221382.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/221382.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 ssh "sudo cat /usr/share/ca-certificates/221382.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.61s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-774677 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-774677 ssh "sudo systemctl is-active docker": exit status 1 (232.513022ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-774677 ssh "sudo systemctl is-active containerd": exit status 1 (221.710948ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)
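Here the exit status 1 results are the pass condition: on a crio cluster, docker and containerd must report inactive, and `systemctl is-active` exits non-zero (status 3, propagated through ssh) for an inactive unit. Checked directly (the crio line is an extra sanity check, not part of the test):

  minikube -p functional-774677 ssh "sudo systemctl is-active crio"        # active
  minikube -p functional-774677 ssh "sudo systemctl is-active docker"      # inactive, non-zero exit
  minikube -p functional-774677 ssh "sudo systemctl is-active containerd"  # inactive, non-zero exit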

                                                
                                    
x
+
TestFunctional/parallel/License (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.68s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (11.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-774677 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-774677 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-nmnpj" [48c5ebf4-7b0f-419a-8e66-fe9b058bccbc] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-nmnpj" [48c5ebf4-7b0f-419a-8e66-fe9b058bccbc] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.004119172s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.18s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "273.369727ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "46.410855ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "396.284119ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "60.741295ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (10.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-774677 /tmp/TestFunctionalparallelMountCmdany-port1556402722/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1727397311788984789" to /tmp/TestFunctionalparallelMountCmdany-port1556402722/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1727397311788984789" to /tmp/TestFunctionalparallelMountCmdany-port1556402722/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1727397311788984789" to /tmp/TestFunctionalparallelMountCmdany-port1556402722/001/test-1727397311788984789
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-774677 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (230.877245ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0927 00:35:12.020163   22138 retry.go:31] will retry after 484.901188ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 27 00:35 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 27 00:35 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 27 00:35 test-1727397311788984789
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 ssh cat /mount-9p/test-1727397311788984789
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-774677 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [3d3c3d79-2184-4220-a534-e6614686edb2] Pending
helpers_test.go:344: "busybox-mount" [3d3c3d79-2184-4220-a534-e6614686edb2] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [3d3c3d79-2184-4220-a534-e6614686edb2] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [3d3c3d79-2184-4220-a534-e6614686edb2] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.003660371s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-774677 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-774677 /tmp/TestFunctionalparallelMountCmdany-port1556402722/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.62s)
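Note: the non-zero exit on the first "findmnt -T /mount-9p" probe followed by "will retry after 484.901188ms" reflects polling until the 9p mount becomes visible inside the guest. The following is a minimal, self-contained sketch of that poll-with-backoff pattern; the command, target path, and backoff values are illustrative and are not the helper the suite itself uses (which runs the probe over "minikube ssh").

// mountprobe.go - sketch: poll until a mount point is visible to findmnt.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForMount retries "findmnt -T target" with exponential backoff.
// findmnt exits non-zero until the target is actually mounted.
func waitForMount(target string, attempts int) error {
	backoff := 500 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if err := exec.Command("findmnt", "-T", target).Run(); err == nil {
			return nil
		}
		time.Sleep(backoff)
		backoff *= 2
	}
	return fmt.Errorf("mount %s not visible after %d attempts", target, attempts)
}

func main() {
	if err := waitForMount("/mount-9p", 5); err != nil {
		fmt.Println(err)
	}
}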

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 service list -o json
functional_test.go:1494: Took "447.898907ms" to run "out/minikube-linux-amd64 -p functional-774677 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.45s)
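For reference, "service list -o json" emits machine-readable output that a consumer can decode directly. The struct below is only a sketch: the field names (Namespace, Name, URLs) are assumptions about the command's typical JSON shape, not values taken from this report, so verify them against the output of the minikube version in use.

// servicelist.go - sketch: decode "minikube service list -o json" output.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// svcEntry uses assumed field names; adjust after inspecting real output.
type svcEntry struct {
	Namespace string   `json:"Namespace"`
	Name      string   `json:"Name"`
	URLs      []string `json:"URLs"`
}

func main() {
	out, err := exec.Command("minikube", "-p", "functional-774677",
		"service", "list", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var svcs []svcEntry
	if err := json.Unmarshal(out, &svcs); err != nil {
		panic(err)
	}
	for _, s := range svcs {
		fmt.Printf("%s/%s -> %v\n", s.Namespace, s.Name, s.URLs)
	}
}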

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-774677 /tmp/TestFunctionalparallelMountCmdspecific-port2676572113/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-774677 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (182.397323ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0927 00:35:22.594742   22138 retry.go:31] will retry after 606.32715ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-774677 /tmp/TestFunctionalparallelMountCmdspecific-port2676572113/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-774677 ssh "sudo umount -f /mount-9p": exit status 1 (197.667656ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-774677 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-774677 /tmp/TestFunctionalparallelMountCmdspecific-port2676572113/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.79s)
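The forced "sudo umount -f /mount-9p" above exits with status 32 ("not mounted") because the mount was already gone when the daemon stopped, and the test treats that as acceptable cleanup. A sketch of cleanup logic that tolerates that case is below; the command invocation and error handling are illustrative, and in the suite this runs over SSH inside the VM rather than locally.

// cleanup.go - sketch: force-unmount but tolerate "not mounted" (exit 32).
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func forceUnmount(target string) error {
	err := exec.Command("sudo", "umount", "-f", target).Run()
	var exitErr *exec.ExitError
	// Exit status 32 is what umount reported above for "not mounted";
	// during teardown that outcome is fine.
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 32 {
		return nil
	}
	return err
}

func main() {
	if err := forceUnmount("/mount-9p"); err != nil {
		fmt.Println("cleanup failed:", err)
	}
}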

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.193:32579
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.30s)
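The test resolves the NodePort service to a single https endpoint (https://192.168.39.193:32579). A tiny sketch of sanity-checking such an endpoint string before using it follows; the endpoint value is the one reported above, and the checks themselves are just an illustration.

// endpoint.go - sketch: sanity-check a service endpoint like the one above.
package main

import (
	"fmt"
	"net/url"
)

func main() {
	endpoint := "https://192.168.39.193:32579" // value reported by the test above
	u, err := url.Parse(endpoint)
	if err != nil || u.Scheme != "https" || u.Port() == "" {
		fmt.Println("unexpected endpoint:", endpoint, err)
		return
	}
	fmt.Println("host:", u.Hostname(), "port:", u.Port())
}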

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.193:32579
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (0.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-774677 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2122743310/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-774677 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2122743310/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-774677 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2122743310/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-774677 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-774677 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2122743310/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-774677 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2122743310/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-774677 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2122743310/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.84s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.77s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-774677 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-774677
localhost/kicbase/echo-server:functional-774677
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/kindest/kindnetd:v20240813-c6f155d6
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-774677 image ls --format short --alsologtostderr:
I0927 00:35:38.177981   33599 out.go:345] Setting OutFile to fd 1 ...
I0927 00:35:38.178102   33599 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:35:38.178114   33599 out.go:358] Setting ErrFile to fd 2...
I0927 00:35:38.178120   33599 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:35:38.178396   33599 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-14935/.minikube/bin
I0927 00:35:38.179013   33599 config.go:182] Loaded profile config "functional-774677": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0927 00:35:38.179110   33599 config.go:182] Loaded profile config "functional-774677": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0927 00:35:38.179503   33599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0927 00:35:38.179539   33599 main.go:141] libmachine: Launching plugin server for driver kvm2
I0927 00:35:38.193662   33599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37809
I0927 00:35:38.194048   33599 main.go:141] libmachine: () Calling .GetVersion
I0927 00:35:38.194650   33599 main.go:141] libmachine: Using API Version  1
I0927 00:35:38.194676   33599 main.go:141] libmachine: () Calling .SetConfigRaw
I0927 00:35:38.195021   33599 main.go:141] libmachine: () Calling .GetMachineName
I0927 00:35:38.195192   33599 main.go:141] libmachine: (functional-774677) Calling .GetState
I0927 00:35:38.197044   33599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0927 00:35:38.197087   33599 main.go:141] libmachine: Launching plugin server for driver kvm2
I0927 00:35:38.211109   33599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45443
I0927 00:35:38.211692   33599 main.go:141] libmachine: () Calling .GetVersion
I0927 00:35:38.212133   33599 main.go:141] libmachine: Using API Version  1
I0927 00:35:38.212151   33599 main.go:141] libmachine: () Calling .SetConfigRaw
I0927 00:35:38.212527   33599 main.go:141] libmachine: () Calling .GetMachineName
I0927 00:35:38.212690   33599 main.go:141] libmachine: (functional-774677) Calling .DriverName
I0927 00:35:38.212876   33599 ssh_runner.go:195] Run: systemctl --version
I0927 00:35:38.212902   33599 main.go:141] libmachine: (functional-774677) Calling .GetSSHHostname
I0927 00:35:38.215831   33599 main.go:141] libmachine: (functional-774677) DBG | domain functional-774677 has defined MAC address 52:54:00:ae:ce:25 in network mk-functional-774677
I0927 00:35:38.216190   33599 main.go:141] libmachine: (functional-774677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:ce:25", ip: ""} in network mk-functional-774677: {Iface:virbr1 ExpiryTime:2024-09-27 01:32:40 +0000 UTC Type:0 Mac:52:54:00:ae:ce:25 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:functional-774677 Clientid:01:52:54:00:ae:ce:25}
I0927 00:35:38.216216   33599 main.go:141] libmachine: (functional-774677) DBG | domain functional-774677 has defined IP address 192.168.39.193 and MAC address 52:54:00:ae:ce:25 in network mk-functional-774677
I0927 00:35:38.216307   33599 main.go:141] libmachine: (functional-774677) Calling .GetSSHPort
I0927 00:35:38.216482   33599 main.go:141] libmachine: (functional-774677) Calling .GetSSHKeyPath
I0927 00:35:38.216611   33599 main.go:141] libmachine: (functional-774677) Calling .GetSSHUsername
I0927 00:35:38.216740   33599 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/functional-774677/id_rsa Username:docker}
I0927 00:35:38.299159   33599 ssh_runner.go:195] Run: sudo crictl images --output json
I0927 00:35:38.348557   33599 main.go:141] libmachine: Making call to close driver server
I0927 00:35:38.348575   33599 main.go:141] libmachine: (functional-774677) Calling .Close
I0927 00:35:38.348830   33599 main.go:141] libmachine: (functional-774677) DBG | Closing plugin on server side
I0927 00:35:38.348871   33599 main.go:141] libmachine: Successfully made call to close driver server
I0927 00:35:38.348885   33599 main.go:141] libmachine: Making call to close connection to plugin binary
I0927 00:35:38.348891   33599 main.go:141] libmachine: Making call to close driver server
I0927 00:35:38.348898   33599 main.go:141] libmachine: (functional-774677) Calling .Close
I0927 00:35:38.349111   33599 main.go:141] libmachine: (functional-774677) DBG | Closing plugin on server side
I0927 00:35:38.349148   33599 main.go:141] libmachine: Successfully made call to close driver server
I0927 00:35:38.349160   33599 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-774677 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/kube-controller-manager | v1.31.1            | 175ffd71cce3d | 89.4MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/nginx                 | latest             | 39286ab8a5e14 | 192MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-apiserver          | v1.31.1            | 6bab7719df100 | 95.2MB |
| registry.k8s.io/kube-proxy              | v1.31.1            | 60c005f310ff3 | 92.7MB |
| registry.k8s.io/kube-scheduler          | v1.31.1            | 9aa1fad941575 | 68.4MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 12968670680f4 | 87.2MB |
| localhost/minikube-local-cache-test     | functional-774677  | c19090e0d21ec | 3.33kB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| localhost/kicbase/echo-server           | functional-774677  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-774677 image ls --format table --alsologtostderr:
I0927 00:35:38.646577   33711 out.go:345] Setting OutFile to fd 1 ...
I0927 00:35:38.646684   33711 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:35:38.646692   33711 out.go:358] Setting ErrFile to fd 2...
I0927 00:35:38.646697   33711 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:35:38.646850   33711 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-14935/.minikube/bin
I0927 00:35:38.647483   33711 config.go:182] Loaded profile config "functional-774677": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0927 00:35:38.647585   33711 config.go:182] Loaded profile config "functional-774677": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0927 00:35:38.647941   33711 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0927 00:35:38.647982   33711 main.go:141] libmachine: Launching plugin server for driver kvm2
I0927 00:35:38.664498   33711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38875
I0927 00:35:38.664854   33711 main.go:141] libmachine: () Calling .GetVersion
I0927 00:35:38.665641   33711 main.go:141] libmachine: Using API Version  1
I0927 00:35:38.665665   33711 main.go:141] libmachine: () Calling .SetConfigRaw
I0927 00:35:38.666059   33711 main.go:141] libmachine: () Calling .GetMachineName
I0927 00:35:38.666272   33711 main.go:141] libmachine: (functional-774677) Calling .GetState
I0927 00:35:38.668124   33711 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0927 00:35:38.668171   33711 main.go:141] libmachine: Launching plugin server for driver kvm2
I0927 00:35:38.684121   33711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34815
I0927 00:35:38.684580   33711 main.go:141] libmachine: () Calling .GetVersion
I0927 00:35:38.685083   33711 main.go:141] libmachine: Using API Version  1
I0927 00:35:38.685102   33711 main.go:141] libmachine: () Calling .SetConfigRaw
I0927 00:35:38.685410   33711 main.go:141] libmachine: () Calling .GetMachineName
I0927 00:35:38.685580   33711 main.go:141] libmachine: (functional-774677) Calling .DriverName
I0927 00:35:38.685782   33711 ssh_runner.go:195] Run: systemctl --version
I0927 00:35:38.685810   33711 main.go:141] libmachine: (functional-774677) Calling .GetSSHHostname
I0927 00:35:38.688814   33711 main.go:141] libmachine: (functional-774677) DBG | domain functional-774677 has defined MAC address 52:54:00:ae:ce:25 in network mk-functional-774677
I0927 00:35:38.689189   33711 main.go:141] libmachine: (functional-774677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:ce:25", ip: ""} in network mk-functional-774677: {Iface:virbr1 ExpiryTime:2024-09-27 01:32:40 +0000 UTC Type:0 Mac:52:54:00:ae:ce:25 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:functional-774677 Clientid:01:52:54:00:ae:ce:25}
I0927 00:35:38.689220   33711 main.go:141] libmachine: (functional-774677) DBG | domain functional-774677 has defined IP address 192.168.39.193 and MAC address 52:54:00:ae:ce:25 in network mk-functional-774677
I0927 00:35:38.689326   33711 main.go:141] libmachine: (functional-774677) Calling .GetSSHPort
I0927 00:35:38.689469   33711 main.go:141] libmachine: (functional-774677) Calling .GetSSHKeyPath
I0927 00:35:38.689659   33711 main.go:141] libmachine: (functional-774677) Calling .GetSSHUsername
I0927 00:35:38.689791   33711 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/functional-774677/id_rsa Username:docker}
I0927 00:35:38.820525   33711 ssh_runner.go:195] Run: sudo crictl images --output json
I0927 00:35:39.079589   33711 main.go:141] libmachine: Making call to close driver server
I0927 00:35:39.079608   33711 main.go:141] libmachine: (functional-774677) Calling .Close
I0927 00:35:39.079881   33711 main.go:141] libmachine: Successfully made call to close driver server
I0927 00:35:39.079889   33711 main.go:141] libmachine: (functional-774677) DBG | Closing plugin on server side
I0927 00:35:39.079900   33711 main.go:141] libmachine: Making call to close connection to plugin binary
I0927 00:35:39.079945   33711 main.go:141] libmachine: Making call to close driver server
I0927 00:35:39.079953   33711 main.go:141] libmachine: (functional-774677) Calling .Close
I0927 00:35:39.080244   33711 main.go:141] libmachine: (functional-774677) DBG | Closing plugin on server side
I0927 00:35:39.080342   33711 main.go:141] libmachine: Successfully made call to close driver server
I0927 00:35:39.080372   33711 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.48s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-774677 image ls --format json --alsologtostderr:
[{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1","registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"89437508"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44","registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"92733849"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0","registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42
d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"68420934"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-774677"],"size":"4943877"},{"id":"115053965e86b2df4d78af78d7951b86
44839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3","repoDigests":["docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3","docker.io/library/nginx@sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e"],"repoTags":["docker.io/library/nginx:latest"],"size":"191853369"},{"id":"56cc512116c8f894f11c
e1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"c19090e0d21ecce06e7eb692c1a831259810feb7069339aad8fb13a6d21e7482","repoDigests":["localhost/minikube-local-cache-test@sha256:2378fdfb9f8693cf5e137cc7ead8677c28df8c0adce465aefa2aad35a6c8e8e6"],"repoTags":["localhost/minikube-local-cache-test:functional-774677"],"size":"3330"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"6bab7719df1001fdcc7
e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":["registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771","registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"95237600"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f","repoDigests":["docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"87190579"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78
f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05
d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-774677 image ls --format json --alsologtostderr:
I0927 00:35:38.410520   33656 out.go:345] Setting OutFile to fd 1 ...
I0927 00:35:38.410767   33656 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:35:38.410777   33656 out.go:358] Setting ErrFile to fd 2...
I0927 00:35:38.410781   33656 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:35:38.410988   33656 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-14935/.minikube/bin
I0927 00:35:38.411577   33656 config.go:182] Loaded profile config "functional-774677": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0927 00:35:38.411685   33656 config.go:182] Loaded profile config "functional-774677": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0927 00:35:38.412020   33656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0927 00:35:38.412064   33656 main.go:141] libmachine: Launching plugin server for driver kvm2
I0927 00:35:38.426735   33656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39621
I0927 00:35:38.427084   33656 main.go:141] libmachine: () Calling .GetVersion
I0927 00:35:38.427670   33656 main.go:141] libmachine: Using API Version  1
I0927 00:35:38.427694   33656 main.go:141] libmachine: () Calling .SetConfigRaw
I0927 00:35:38.428011   33656 main.go:141] libmachine: () Calling .GetMachineName
I0927 00:35:38.428186   33656 main.go:141] libmachine: (functional-774677) Calling .GetState
I0927 00:35:38.429763   33656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0927 00:35:38.429800   33656 main.go:141] libmachine: Launching plugin server for driver kvm2
I0927 00:35:38.444258   33656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46345
I0927 00:35:38.444642   33656 main.go:141] libmachine: () Calling .GetVersion
I0927 00:35:38.445079   33656 main.go:141] libmachine: Using API Version  1
I0927 00:35:38.445102   33656 main.go:141] libmachine: () Calling .SetConfigRaw
I0927 00:35:38.445376   33656 main.go:141] libmachine: () Calling .GetMachineName
I0927 00:35:38.445549   33656 main.go:141] libmachine: (functional-774677) Calling .DriverName
I0927 00:35:38.445742   33656 ssh_runner.go:195] Run: systemctl --version
I0927 00:35:38.445771   33656 main.go:141] libmachine: (functional-774677) Calling .GetSSHHostname
I0927 00:35:38.448058   33656 main.go:141] libmachine: (functional-774677) DBG | domain functional-774677 has defined MAC address 52:54:00:ae:ce:25 in network mk-functional-774677
I0927 00:35:38.448379   33656 main.go:141] libmachine: (functional-774677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:ce:25", ip: ""} in network mk-functional-774677: {Iface:virbr1 ExpiryTime:2024-09-27 01:32:40 +0000 UTC Type:0 Mac:52:54:00:ae:ce:25 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:functional-774677 Clientid:01:52:54:00:ae:ce:25}
I0927 00:35:38.448406   33656 main.go:141] libmachine: (functional-774677) DBG | domain functional-774677 has defined IP address 192.168.39.193 and MAC address 52:54:00:ae:ce:25 in network mk-functional-774677
I0927 00:35:38.448469   33656 main.go:141] libmachine: (functional-774677) Calling .GetSSHPort
I0927 00:35:38.448621   33656 main.go:141] libmachine: (functional-774677) Calling .GetSSHKeyPath
I0927 00:35:38.448769   33656 main.go:141] libmachine: (functional-774677) Calling .GetSSHUsername
I0927 00:35:38.448884   33656 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/functional-774677/id_rsa Username:docker}
I0927 00:35:38.540872   33656 ssh_runner.go:195] Run: sudo crictl images --output json
I0927 00:35:38.597698   33656 main.go:141] libmachine: Making call to close driver server
I0927 00:35:38.597709   33656 main.go:141] libmachine: (functional-774677) Calling .Close
I0927 00:35:38.597973   33656 main.go:141] libmachine: Successfully made call to close driver server
I0927 00:35:38.597995   33656 main.go:141] libmachine: Making call to close connection to plugin binary
I0927 00:35:38.598002   33656 main.go:141] libmachine: (functional-774677) DBG | Closing plugin on server side
I0927 00:35:38.598004   33656 main.go:141] libmachine: Making call to close driver server
I0927 00:35:38.598048   33656 main.go:141] libmachine: (functional-774677) Calling .Close
I0927 00:35:38.598270   33656 main.go:141] libmachine: Successfully made call to close driver server
I0927 00:35:38.598283   33656 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
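The stdout above is a JSON array of image records with id, repoDigests, repoTags, and size fields. A minimal sketch for decoding that listing follows; the struct mirrors only the fields visible in the output above, and the profile name is the one used throughout this report.

// imagels.go - sketch: decode "minikube image ls --format json" output.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageRecord mirrors the fields shown in the JSON listing above.
type imageRecord struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("minikube", "-p", "functional-774677",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []imageRecord
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Printf("%-14.14s %10s  %v\n", img.ID, img.Size, img.RepoTags)
	}
}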

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-774677 image ls --format yaml --alsologtostderr:
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-774677
size: "4943877"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: c19090e0d21ecce06e7eb692c1a831259810feb7069339aad8fb13a6d21e7482
repoDigests:
- localhost/minikube-local-cache-test@sha256:2378fdfb9f8693cf5e137cc7ead8677c28df8c0adce465aefa2aad35a6c8e8e6
repoTags:
- localhost/minikube-local-cache-test:functional-774677
size: "3330"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
- registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "92733849"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
- registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "68420934"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "95237600"
- id: 12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f
repoDigests:
- docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "87190579"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3
repoDigests:
- docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3
- docker.io/library/nginx@sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e
repoTags:
- docker.io/library/nginx:latest
size: "191853369"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
- registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "89437508"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-774677 image ls --format yaml --alsologtostderr:
I0927 00:35:38.176270   33600 out.go:345] Setting OutFile to fd 1 ...
I0927 00:35:38.176416   33600 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:35:38.176427   33600 out.go:358] Setting ErrFile to fd 2...
I0927 00:35:38.176433   33600 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:35:38.176703   33600 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-14935/.minikube/bin
I0927 00:35:38.177519   33600 config.go:182] Loaded profile config "functional-774677": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0927 00:35:38.177679   33600 config.go:182] Loaded profile config "functional-774677": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0927 00:35:38.178228   33600 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0927 00:35:38.178280   33600 main.go:141] libmachine: Launching plugin server for driver kvm2
I0927 00:35:38.192732   33600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41841
I0927 00:35:38.193231   33600 main.go:141] libmachine: () Calling .GetVersion
I0927 00:35:38.193922   33600 main.go:141] libmachine: Using API Version  1
I0927 00:35:38.193947   33600 main.go:141] libmachine: () Calling .SetConfigRaw
I0927 00:35:38.194294   33600 main.go:141] libmachine: () Calling .GetMachineName
I0927 00:35:38.194487   33600 main.go:141] libmachine: (functional-774677) Calling .GetState
I0927 00:35:38.196841   33600 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0927 00:35:38.196882   33600 main.go:141] libmachine: Launching plugin server for driver kvm2
I0927 00:35:38.211221   33600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46031
I0927 00:35:38.211685   33600 main.go:141] libmachine: () Calling .GetVersion
I0927 00:35:38.212161   33600 main.go:141] libmachine: Using API Version  1
I0927 00:35:38.212180   33600 main.go:141] libmachine: () Calling .SetConfigRaw
I0927 00:35:38.212697   33600 main.go:141] libmachine: () Calling .GetMachineName
I0927 00:35:38.212888   33600 main.go:141] libmachine: (functional-774677) Calling .DriverName
I0927 00:35:38.213062   33600 ssh_runner.go:195] Run: systemctl --version
I0927 00:35:38.213100   33600 main.go:141] libmachine: (functional-774677) Calling .GetSSHHostname
I0927 00:35:38.216112   33600 main.go:141] libmachine: (functional-774677) DBG | domain functional-774677 has defined MAC address 52:54:00:ae:ce:25 in network mk-functional-774677
I0927 00:35:38.216507   33600 main.go:141] libmachine: (functional-774677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:ce:25", ip: ""} in network mk-functional-774677: {Iface:virbr1 ExpiryTime:2024-09-27 01:32:40 +0000 UTC Type:0 Mac:52:54:00:ae:ce:25 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:functional-774677 Clientid:01:52:54:00:ae:ce:25}
I0927 00:35:38.216572   33600 main.go:141] libmachine: (functional-774677) DBG | domain functional-774677 has defined IP address 192.168.39.193 and MAC address 52:54:00:ae:ce:25 in network mk-functional-774677
I0927 00:35:38.216707   33600 main.go:141] libmachine: (functional-774677) Calling .GetSSHPort
I0927 00:35:38.216895   33600 main.go:141] libmachine: (functional-774677) Calling .GetSSHKeyPath
I0927 00:35:38.217020   33600 main.go:141] libmachine: (functional-774677) Calling .GetSSHUsername
I0927 00:35:38.217144   33600 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/functional-774677/id_rsa Username:docker}
I0927 00:35:38.299002   33600 ssh_runner.go:195] Run: sudo crictl images --output json
I0927 00:35:38.364392   33600 main.go:141] libmachine: Making call to close driver server
I0927 00:35:38.364405   33600 main.go:141] libmachine: (functional-774677) Calling .Close
I0927 00:35:38.364655   33600 main.go:141] libmachine: Successfully made call to close driver server
I0927 00:35:38.364677   33600 main.go:141] libmachine: Making call to close connection to plugin binary
I0927 00:35:38.364684   33600 main.go:141] libmachine: (functional-774677) DBG | Closing plugin on server side
I0927 00:35:38.364691   33600 main.go:141] libmachine: Making call to close driver server
I0927 00:35:38.364698   33600 main.go:141] libmachine: (functional-774677) Calling .Close
I0927 00:35:38.364923   33600 main.go:141] libmachine: Successfully made call to close driver server
I0927 00:35:38.364933   33600 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (11.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-774677 ssh pgrep buildkitd: exit status 1 (190.97215ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 image build -t localhost/my-image:functional-774677 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-774677 image build -t localhost/my-image:functional-774677 testdata/build --alsologtostderr: (11.129919809s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-774677 image build -t localhost/my-image:functional-774677 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 3843329501e
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-774677
--> b85d6589c36
Successfully tagged localhost/my-image:functional-774677
b85d6589c36a6c4a0f74ca5acae8a0fe5e2e96a28b42be112b32d658abd667de
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-774677 image build -t localhost/my-image:functional-774677 testdata/build --alsologtostderr:
I0927 00:35:38.589970   33699 out.go:345] Setting OutFile to fd 1 ...
I0927 00:35:38.590116   33699 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:35:38.590127   33699 out.go:358] Setting ErrFile to fd 2...
I0927 00:35:38.590132   33699 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:35:38.590359   33699 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-14935/.minikube/bin
I0927 00:35:38.590966   33699 config.go:182] Loaded profile config "functional-774677": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0927 00:35:38.591541   33699 config.go:182] Loaded profile config "functional-774677": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0927 00:35:38.591940   33699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0927 00:35:38.591981   33699 main.go:141] libmachine: Launching plugin server for driver kvm2
I0927 00:35:38.609385   33699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33569
I0927 00:35:38.610205   33699 main.go:141] libmachine: () Calling .GetVersion
I0927 00:35:38.610892   33699 main.go:141] libmachine: Using API Version  1
I0927 00:35:38.610927   33699 main.go:141] libmachine: () Calling .SetConfigRaw
I0927 00:35:38.611315   33699 main.go:141] libmachine: () Calling .GetMachineName
I0927 00:35:38.611496   33699 main.go:141] libmachine: (functional-774677) Calling .GetState
I0927 00:35:38.613621   33699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0927 00:35:38.613668   33699 main.go:141] libmachine: Launching plugin server for driver kvm2
I0927 00:35:38.629037   33699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40659
I0927 00:35:38.629563   33699 main.go:141] libmachine: () Calling .GetVersion
I0927 00:35:38.630156   33699 main.go:141] libmachine: Using API Version  1
I0927 00:35:38.630176   33699 main.go:141] libmachine: () Calling .SetConfigRaw
I0927 00:35:38.630572   33699 main.go:141] libmachine: () Calling .GetMachineName
I0927 00:35:38.630783   33699 main.go:141] libmachine: (functional-774677) Calling .DriverName
I0927 00:35:38.630975   33699 ssh_runner.go:195] Run: systemctl --version
I0927 00:35:38.631003   33699 main.go:141] libmachine: (functional-774677) Calling .GetSSHHostname
I0927 00:35:38.633876   33699 main.go:141] libmachine: (functional-774677) DBG | domain functional-774677 has defined MAC address 52:54:00:ae:ce:25 in network mk-functional-774677
I0927 00:35:38.634261   33699 main.go:141] libmachine: (functional-774677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:ce:25", ip: ""} in network mk-functional-774677: {Iface:virbr1 ExpiryTime:2024-09-27 01:32:40 +0000 UTC Type:0 Mac:52:54:00:ae:ce:25 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:functional-774677 Clientid:01:52:54:00:ae:ce:25}
I0927 00:35:38.634294   33699 main.go:141] libmachine: (functional-774677) DBG | domain functional-774677 has defined IP address 192.168.39.193 and MAC address 52:54:00:ae:ce:25 in network mk-functional-774677
I0927 00:35:38.634431   33699 main.go:141] libmachine: (functional-774677) Calling .GetSSHPort
I0927 00:35:38.634581   33699 main.go:141] libmachine: (functional-774677) Calling .GetSSHKeyPath
I0927 00:35:38.634745   33699 main.go:141] libmachine: (functional-774677) Calling .GetSSHUsername
I0927 00:35:38.634878   33699 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/functional-774677/id_rsa Username:docker}
I0927 00:35:38.754675   33699 build_images.go:161] Building image from path: /tmp/build.2251596443.tar
I0927 00:35:38.754745   33699 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0927 00:35:38.782678   33699 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2251596443.tar
I0927 00:35:38.788126   33699 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2251596443.tar: stat -c "%s %y" /var/lib/minikube/build/build.2251596443.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2251596443.tar': No such file or directory
I0927 00:35:38.788158   33699 ssh_runner.go:362] scp /tmp/build.2251596443.tar --> /var/lib/minikube/build/build.2251596443.tar (3072 bytes)
I0927 00:35:38.847445   33699 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2251596443
I0927 00:35:38.898386   33699 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2251596443 -xf /var/lib/minikube/build/build.2251596443.tar
I0927 00:35:38.909411   33699 crio.go:315] Building image: /var/lib/minikube/build/build.2251596443
I0927 00:35:38.909467   33699 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-774677 /var/lib/minikube/build/build.2251596443 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0927 00:35:49.638253   33699 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-774677 /var/lib/minikube/build/build.2251596443 --cgroup-manager=cgroupfs: (10.728762027s)
I0927 00:35:49.638327   33699 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2251596443
I0927 00:35:49.654856   33699 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2251596443.tar
I0927 00:35:49.671261   33699 build_images.go:217] Built localhost/my-image:functional-774677 from /tmp/build.2251596443.tar
I0927 00:35:49.671298   33699 build_images.go:133] succeeded building to: functional-774677
I0927 00:35:49.671346   33699 build_images.go:134] failed building to: 
I0927 00:35:49.671373   33699 main.go:141] libmachine: Making call to close driver server
I0927 00:35:49.671392   33699 main.go:141] libmachine: (functional-774677) Calling .Close
I0927 00:35:49.671647   33699 main.go:141] libmachine: (functional-774677) DBG | Closing plugin on server side
I0927 00:35:49.671694   33699 main.go:141] libmachine: Successfully made call to close driver server
I0927 00:35:49.671703   33699 main.go:141] libmachine: Making call to close connection to plugin binary
I0927 00:35:49.671717   33699 main.go:141] libmachine: Making call to close driver server
I0927 00:35:49.671727   33699 main.go:141] libmachine: (functional-774677) Calling .Close
I0927 00:35:49.671939   33699 main.go:141] libmachine: (functional-774677) DBG | Closing plugin on server side
I0927 00:35:49.671970   33699 main.go:141] libmachine: Successfully made call to close driver server
I0927 00:35:49.671986   33699 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (11.56s)
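For reference, the build flow logged above can be replayed by hand with the same commands the test drove over SSH; this is a reproduction sketch, and the build.2251596443 path is specific to this run:
  # inside the functional-774677 guest, with the build context tarball already copied to the VM
  $ sudo mkdir -p /var/lib/minikube/build/build.2251596443
  $ sudo tar -C /var/lib/minikube/build/build.2251596443 -xf /var/lib/minikube/build/build.2251596443.tar
  $ sudo podman build -t localhost/my-image:functional-774677 /var/lib/minikube/build/build.2251596443 --cgroup-manager=cgroupfs
  # on the host, confirm the image is visible to the cluster's runtime
  $ out/minikube-linux-amd64 -p functional-774677 image ls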

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.65s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (2.626844896s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-774677
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.65s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.39s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 image load --daemon kicbase/echo-server:functional-774677 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-774677 image load --daemon kicbase/echo-server:functional-774677 --alsologtostderr: (1.19011895s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.39s)
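The Setup and ImageLoadDaemon steps above reduce to the following host-side commands (a sketch; the image tags and profile name are the ones used in this run):
  $ docker pull kicbase/echo-server:1.0
  $ docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-774677
  $ out/minikube-linux-amd64 -p functional-774677 image load --daemon kicbase/echo-server:functional-774677 --alsologtostderr
  $ out/minikube-linux-amd64 -p functional-774677 image ls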

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.85s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 image load --daemon kicbase/echo-server:functional-774677 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.85s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.24s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
2024/09/27 00:35:31 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-774677
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 image load --daemon kicbase/echo-server:functional-774677 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.24s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (3s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 image save kicbase/echo-server:functional-774677 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-amd64 -p functional-774677 image save kicbase/echo-server:functional-774677 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (2.998869181s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (3.00s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 image rm kicbase/echo-server:functional-774677 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.79s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.79s)
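Taken together, ImageSaveToFile, ImageRemove, and ImageLoadFromFile form a save/remove/reload round trip that can be replayed manually; a sketch, with ./echo-server-save.tar standing in for the workspace path used above:
  $ out/minikube-linux-amd64 -p functional-774677 image save kicbase/echo-server:functional-774677 ./echo-server-save.tar --alsologtostderr
  $ out/minikube-linux-amd64 -p functional-774677 image rm kicbase/echo-server:functional-774677 --alsologtostderr
  $ out/minikube-linux-amd64 -p functional-774677 image load ./echo-server-save.tar --alsologtostderr
  $ out/minikube-linux-amd64 -p functional-774677 image ls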

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.56s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-774677
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-774677 image save --daemon kicbase/echo-server:functional-774677 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-774677
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.56s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.03s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-774677
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-774677
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-774677
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (197.07s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-631834 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0927 00:38:01.244740   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:38:28.948808   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-631834 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m16.412257895s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (197.07s)
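The HA cluster used by the rest of this group is created and checked with the two commands below (a sketch; both are taken from the log above):
  $ out/minikube-linux-amd64 start -p ha-631834 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 --container-runtime=crio
  $ out/minikube-linux-amd64 -p ha-631834 status -v=7 --alsologtostderr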

                                                
                                    
TestMultiControlPlane/serial/DeployApp (7.31s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-631834 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-631834 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-631834 -- rollout status deployment/busybox: (5.22561766s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-631834 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-631834 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-631834 -- exec busybox-7dff88458-bkws6 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-631834 -- exec busybox-7dff88458-dhthf -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-631834 -- exec busybox-7dff88458-hczmj -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-631834 -- exec busybox-7dff88458-bkws6 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-631834 -- exec busybox-7dff88458-dhthf -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-631834 -- exec busybox-7dff88458-hczmj -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-631834 -- exec busybox-7dff88458-bkws6 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-631834 -- exec busybox-7dff88458-dhthf -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-631834 -- exec busybox-7dff88458-hczmj -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.31s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.18s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-631834 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-631834 -- exec busybox-7dff88458-bkws6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-631834 -- exec busybox-7dff88458-bkws6 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-631834 -- exec busybox-7dff88458-dhthf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-631834 -- exec busybox-7dff88458-dhthf -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-631834 -- exec busybox-7dff88458-hczmj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-631834 -- exec busybox-7dff88458-hczmj -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.18s)
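The per-pod DNS and host-reachability checks above each reduce to a kubectl exec; for example, against one of this run's busybox pods:
  $ out/minikube-linux-amd64 kubectl -p ha-631834 -- exec busybox-7dff88458-bkws6 -- nslookup kubernetes.default.svc.cluster.local
  $ out/minikube-linux-amd64 kubectl -p ha-631834 -- exec busybox-7dff88458-bkws6 -- sh -c "ping -c 1 192.168.39.1"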

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (56.46s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-631834 -v=7 --alsologtostderr
E0927 00:40:10.487187   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/functional-774677/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:40:10.493528   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/functional-774677/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:40:10.504853   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/functional-774677/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:40:10.526221   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/functional-774677/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:40:10.568485   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/functional-774677/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:40:10.649896   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/functional-774677/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:40:10.811411   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/functional-774677/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:40:11.133335   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/functional-774677/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:40:11.775006   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/functional-774677/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:40:13.056280   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/functional-774677/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:40:15.617826   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/functional-774677/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:40:20.740001   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/functional-774677/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-631834 -v=7 --alsologtostderr: (55.607304139s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (56.46s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-631834 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.86s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.86s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (12.47s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 cp testdata/cp-test.txt ha-631834:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 ssh -n ha-631834 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 cp ha-631834:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile381097914/001/cp-test_ha-631834.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 ssh -n ha-631834 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 cp ha-631834:/home/docker/cp-test.txt ha-631834-m02:/home/docker/cp-test_ha-631834_ha-631834-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 ssh -n ha-631834 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 ssh -n ha-631834-m02 "sudo cat /home/docker/cp-test_ha-631834_ha-631834-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 cp ha-631834:/home/docker/cp-test.txt ha-631834-m03:/home/docker/cp-test_ha-631834_ha-631834-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 ssh -n ha-631834 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 ssh -n ha-631834-m03 "sudo cat /home/docker/cp-test_ha-631834_ha-631834-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 cp ha-631834:/home/docker/cp-test.txt ha-631834-m04:/home/docker/cp-test_ha-631834_ha-631834-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 ssh -n ha-631834 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 ssh -n ha-631834-m04 "sudo cat /home/docker/cp-test_ha-631834_ha-631834-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 cp testdata/cp-test.txt ha-631834-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 ssh -n ha-631834-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 cp ha-631834-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile381097914/001/cp-test_ha-631834-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 ssh -n ha-631834-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 cp ha-631834-m02:/home/docker/cp-test.txt ha-631834:/home/docker/cp-test_ha-631834-m02_ha-631834.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 ssh -n ha-631834-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 ssh -n ha-631834 "sudo cat /home/docker/cp-test_ha-631834-m02_ha-631834.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 cp ha-631834-m02:/home/docker/cp-test.txt ha-631834-m03:/home/docker/cp-test_ha-631834-m02_ha-631834-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 ssh -n ha-631834-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 ssh -n ha-631834-m03 "sudo cat /home/docker/cp-test_ha-631834-m02_ha-631834-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 cp ha-631834-m02:/home/docker/cp-test.txt ha-631834-m04:/home/docker/cp-test_ha-631834-m02_ha-631834-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 ssh -n ha-631834-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 ssh -n ha-631834-m04 "sudo cat /home/docker/cp-test_ha-631834-m02_ha-631834-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 cp testdata/cp-test.txt ha-631834-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 ssh -n ha-631834-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 cp ha-631834-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile381097914/001/cp-test_ha-631834-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 ssh -n ha-631834-m03 "sudo cat /home/docker/cp-test.txt"
E0927 00:40:30.981353   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/functional-774677/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 cp ha-631834-m03:/home/docker/cp-test.txt ha-631834:/home/docker/cp-test_ha-631834-m03_ha-631834.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 ssh -n ha-631834-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 ssh -n ha-631834 "sudo cat /home/docker/cp-test_ha-631834-m03_ha-631834.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 cp ha-631834-m03:/home/docker/cp-test.txt ha-631834-m02:/home/docker/cp-test_ha-631834-m03_ha-631834-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 ssh -n ha-631834-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 ssh -n ha-631834-m02 "sudo cat /home/docker/cp-test_ha-631834-m03_ha-631834-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 cp ha-631834-m03:/home/docker/cp-test.txt ha-631834-m04:/home/docker/cp-test_ha-631834-m03_ha-631834-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 ssh -n ha-631834-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 ssh -n ha-631834-m04 "sudo cat /home/docker/cp-test_ha-631834-m03_ha-631834-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 cp testdata/cp-test.txt ha-631834-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 ssh -n ha-631834-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 cp ha-631834-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile381097914/001/cp-test_ha-631834-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 ssh -n ha-631834-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 cp ha-631834-m04:/home/docker/cp-test.txt ha-631834:/home/docker/cp-test_ha-631834-m04_ha-631834.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 ssh -n ha-631834-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 ssh -n ha-631834 "sudo cat /home/docker/cp-test_ha-631834-m04_ha-631834.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 cp ha-631834-m04:/home/docker/cp-test.txt ha-631834-m02:/home/docker/cp-test_ha-631834-m04_ha-631834-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 ssh -n ha-631834-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 ssh -n ha-631834-m02 "sudo cat /home/docker/cp-test_ha-631834-m04_ha-631834-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 cp ha-631834-m04:/home/docker/cp-test.txt ha-631834-m03:/home/docker/cp-test_ha-631834-m04_ha-631834-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 ssh -n ha-631834-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 ssh -n ha-631834-m03 "sudo cat /home/docker/cp-test_ha-631834-m04_ha-631834-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.47s)
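Each CopyFile permutation above is a cp into a node followed by an ssh cat to verify the contents, e.g. for the ha-631834-m02 node used in this run:
  $ out/minikube-linux-amd64 -p ha-631834 cp testdata/cp-test.txt ha-631834-m02:/home/docker/cp-test.txt
  $ out/minikube-linux-amd64 -p ha-631834 ssh -n ha-631834-m02 "sudo cat /home/docker/cp-test.txt"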

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (4.17s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (4.168884688s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (4.17s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (17.23s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 node delete m03 -v=7 --alsologtostderr
E0927 00:49:24.310414   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-631834 node delete m03 -v=7 --alsologtostderr: (16.506631273s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.23s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.62s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.62s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (343.31s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-631834 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0927 00:53:01.245451   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:55:10.487134   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/functional-774677/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:56:33.553031   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/functional-774677/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-631834 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m42.56897875s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (343.31s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.62s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.62s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (79.62s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-631834 --control-plane -v=7 --alsologtostderr
E0927 00:58:01.245371   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-631834 --control-plane -v=7 --alsologtostderr: (1m18.769942306s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-631834 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (79.62s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.85s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.85s)

                                                
                                    
TestJSONOutput/start/Command (81.46s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-289881 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0927 01:00:10.487801   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/functional-774677/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-289881 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m21.460440102s)
--- PASS: TestJSONOutput/start/Command (81.46s)

                                                
                                    
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.68s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-289881 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.68s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.61s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-289881 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.35s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-289881 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-289881 --output=json --user=testUser: (7.353789475s)
--- PASS: TestJSONOutput/stop/Command (7.35s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.18s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-283874 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-283874 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (58.631828ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"07a34ead-93c0-40fa-85e8-d05ace510b15","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-283874] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"fefb8771-e14c-4d82-bba9-5b1d8a846f00","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19711"}}
	{"specversion":"1.0","id":"4094e67c-4a1a-4d5f-8d75-82b9627fe5d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"80b5c380-050d-40c3-bce7-79337ab73476","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19711-14935/kubeconfig"}}
	{"specversion":"1.0","id":"3f9e9652-a67a-42cc-9178-3597ee4ef333","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-14935/.minikube"}}
	{"specversion":"1.0","id":"993250cd-39e3-4ad9-9c98-4242e04a4b7a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"776a7cab-dcc5-46d8-ac59-5db200c61d01","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3c9b4d56-1f36-4c95-a357-5ea0fe8e5e80","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-283874" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-283874
--- PASS: TestErrorJSONOutput (0.18s)
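The failing start that produces the DRV_UNSUPPORTED_OS event can be reproduced directly; it exits with status 56 and prints one JSON event per line on stdout (commands taken from the log above):
  $ out/minikube-linux-amd64 start -p json-output-error-283874 --memory=2200 --output=json --wait=true --driver=fail
  $ out/minikube-linux-amd64 delete -p json-output-error-283874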

                                                
                                    
TestMainNoArgs (0.04s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (87.55s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-063053 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-063053 --driver=kvm2  --container-runtime=crio: (44.464935875s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-074786 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-074786 --driver=kvm2  --container-runtime=crio: (40.484456728s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-063053
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-074786
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-074786" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-074786
helpers_test.go:175: Cleaning up "first-063053" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-063053
--- PASS: TestMinikubeProfile (87.55s)
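The profile round trip exercised above amounts to creating two profiles and switching the active one (a sketch; commands taken from the log above):
  $ out/minikube-linux-amd64 start -p first-063053 --driver=kvm2 --container-runtime=crio
  $ out/minikube-linux-amd64 start -p second-074786 --driver=kvm2 --container-runtime=crio
  $ out/minikube-linux-amd64 profile first-063053
  $ out/minikube-linux-amd64 profile list -ojson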

                                                
                                    
TestMountStart/serial/StartWithMountFirst (32.11s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-008401 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-008401 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (31.111155269s)
--- PASS: TestMountStart/serial/StartWithMountFirst (32.11s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.4s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-008401 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-008401 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.40s)
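StartWithMountFirst and VerifyMountFirst together start a no-Kubernetes VM with a 9p host mount and then confirm it over ssh; a sketch using this run's flags:
  $ out/minikube-linux-amd64 start -p mount-start-1-008401 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 --container-runtime=crio
  $ out/minikube-linux-amd64 -p mount-start-1-008401 ssh -- ls /minikube-host
  $ out/minikube-linux-amd64 -p mount-start-1-008401 ssh -- mount | grep 9p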

                                                
                                    
TestMountStart/serial/StartWithMountSecond (28.84s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-021147 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0927 01:03:01.244614   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-021147 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.84220735s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.84s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.35s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-021147 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-021147 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.35s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.72s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-008401 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.72s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.35s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-021147 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-021147 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.35s)

                                                
                                    
TestMountStart/serial/Stop (1.26s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-021147
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-021147: (1.263178613s)
--- PASS: TestMountStart/serial/Stop (1.26s)

                                                
                                    
TestMountStart/serial/RestartStopped (23.35s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-021147
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-021147: (22.35008538s)
--- PASS: TestMountStart/serial/RestartStopped (23.35s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.36s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-021147 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-021147 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (115.3s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-833343 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0927 01:05:10.487249   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/functional-774677/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-833343 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m54.908204548s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-833343 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (115.30s)
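The two-node cluster used by the rest of the TestMultiNode group is brought up and checked with (a sketch; both commands are taken from the log above):
  $ out/minikube-linux-amd64 start -p multinode-833343 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 --container-runtime=crio
  $ out/minikube-linux-amd64 -p multinode-833343 status --alsologtostderr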

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.38s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-833343 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-833343 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-833343 -- rollout status deployment/busybox: (4.906818082s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-833343 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-833343 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-833343 -- exec busybox-7dff88458-5gdbb -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-833343 -- exec busybox-7dff88458-cv7gx -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-833343 -- exec busybox-7dff88458-5gdbb -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-833343 -- exec busybox-7dff88458-cv7gx -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-833343 -- exec busybox-7dff88458-5gdbb -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-833343 -- exec busybox-7dff88458-cv7gx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.38s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.78s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-833343 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-833343 -- exec busybox-7dff88458-5gdbb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-833343 -- exec busybox-7dff88458-5gdbb -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-833343 -- exec busybox-7dff88458-cv7gx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-833343 -- exec busybox-7dff88458-cv7gx -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.78s)

                                                
                                    
TestMultiNode/serial/AddNode (50.18s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-833343 -v 3 --alsologtostderr
E0927 01:06:04.311767   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-833343 -v 3 --alsologtostderr: (49.615361855s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-833343 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (50.18s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-833343 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.55s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.55s)

                                                
                                    
TestMultiNode/serial/CopyFile (6.9s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-833343 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-833343 cp testdata/cp-test.txt multinode-833343:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-833343 ssh -n multinode-833343 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-833343 cp multinode-833343:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3824164229/001/cp-test_multinode-833343.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-833343 ssh -n multinode-833343 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-833343 cp multinode-833343:/home/docker/cp-test.txt multinode-833343-m02:/home/docker/cp-test_multinode-833343_multinode-833343-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-833343 ssh -n multinode-833343 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-833343 ssh -n multinode-833343-m02 "sudo cat /home/docker/cp-test_multinode-833343_multinode-833343-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-833343 cp multinode-833343:/home/docker/cp-test.txt multinode-833343-m03:/home/docker/cp-test_multinode-833343_multinode-833343-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-833343 ssh -n multinode-833343 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-833343 ssh -n multinode-833343-m03 "sudo cat /home/docker/cp-test_multinode-833343_multinode-833343-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-833343 cp testdata/cp-test.txt multinode-833343-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-833343 ssh -n multinode-833343-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-833343 cp multinode-833343-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3824164229/001/cp-test_multinode-833343-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-833343 ssh -n multinode-833343-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-833343 cp multinode-833343-m02:/home/docker/cp-test.txt multinode-833343:/home/docker/cp-test_multinode-833343-m02_multinode-833343.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-833343 ssh -n multinode-833343-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-833343 ssh -n multinode-833343 "sudo cat /home/docker/cp-test_multinode-833343-m02_multinode-833343.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-833343 cp multinode-833343-m02:/home/docker/cp-test.txt multinode-833343-m03:/home/docker/cp-test_multinode-833343-m02_multinode-833343-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-833343 ssh -n multinode-833343-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-833343 ssh -n multinode-833343-m03 "sudo cat /home/docker/cp-test_multinode-833343-m02_multinode-833343-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-833343 cp testdata/cp-test.txt multinode-833343-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-833343 ssh -n multinode-833343-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-833343 cp multinode-833343-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3824164229/001/cp-test_multinode-833343-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-833343 ssh -n multinode-833343-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-833343 cp multinode-833343-m03:/home/docker/cp-test.txt multinode-833343:/home/docker/cp-test_multinode-833343-m03_multinode-833343.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-833343 ssh -n multinode-833343-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-833343 ssh -n multinode-833343 "sudo cat /home/docker/cp-test_multinode-833343-m03_multinode-833343.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-833343 cp multinode-833343-m03:/home/docker/cp-test.txt multinode-833343-m02:/home/docker/cp-test_multinode-833343-m03_multinode-833343-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-833343 ssh -n multinode-833343-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-833343 ssh -n multinode-833343-m02 "sudo cat /home/docker/cp-test_multinode-833343-m03_multinode-833343-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.90s)

                                                
                                    
TestMultiNode/serial/StopNode (2.26s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-833343 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-833343 node stop m03: (1.435892307s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-833343 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-833343 status: exit status 7 (409.2373ms)

                                                
                                                
-- stdout --
	multinode-833343
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-833343-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-833343-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-833343 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-833343 status --alsologtostderr: exit status 7 (412.274543ms)

                                                
                                                
-- stdout --
	multinode-833343
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-833343-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-833343-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0927 01:06:32.197055   50694 out.go:345] Setting OutFile to fd 1 ...
	I0927 01:06:32.197195   50694 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 01:06:32.197205   50694 out.go:358] Setting ErrFile to fd 2...
	I0927 01:06:32.197209   50694 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 01:06:32.197404   50694 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-14935/.minikube/bin
	I0927 01:06:32.197607   50694 out.go:352] Setting JSON to false
	I0927 01:06:32.197632   50694 mustload.go:65] Loading cluster: multinode-833343
	I0927 01:06:32.197737   50694 notify.go:220] Checking for updates...
	I0927 01:06:32.198092   50694 config.go:182] Loaded profile config "multinode-833343": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 01:06:32.198117   50694 status.go:174] checking status of multinode-833343 ...
	I0927 01:06:32.198550   50694 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 01:06:32.198597   50694 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:06:32.217318   50694 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32773
	I0927 01:06:32.217712   50694 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:06:32.218192   50694 main.go:141] libmachine: Using API Version  1
	I0927 01:06:32.218214   50694 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:06:32.218570   50694 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:06:32.218760   50694 main.go:141] libmachine: (multinode-833343) Calling .GetState
	I0927 01:06:32.220160   50694 status.go:364] multinode-833343 host status = "Running" (err=<nil>)
	I0927 01:06:32.220175   50694 host.go:66] Checking if "multinode-833343" exists ...
	I0927 01:06:32.220571   50694 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 01:06:32.220610   50694 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:06:32.235758   50694 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41557
	I0927 01:06:32.236229   50694 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:06:32.236665   50694 main.go:141] libmachine: Using API Version  1
	I0927 01:06:32.236684   50694 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:06:32.237011   50694 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:06:32.237174   50694 main.go:141] libmachine: (multinode-833343) Calling .GetIP
	I0927 01:06:32.239920   50694 main.go:141] libmachine: (multinode-833343) DBG | domain multinode-833343 has defined MAC address 52:54:00:d6:02:23 in network mk-multinode-833343
	I0927 01:06:32.240293   50694 main.go:141] libmachine: (multinode-833343) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:02:23", ip: ""} in network mk-multinode-833343: {Iface:virbr1 ExpiryTime:2024-09-27 02:03:44 +0000 UTC Type:0 Mac:52:54:00:d6:02:23 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-833343 Clientid:01:52:54:00:d6:02:23}
	I0927 01:06:32.240325   50694 main.go:141] libmachine: (multinode-833343) DBG | domain multinode-833343 has defined IP address 192.168.39.203 and MAC address 52:54:00:d6:02:23 in network mk-multinode-833343
	I0927 01:06:32.240432   50694 host.go:66] Checking if "multinode-833343" exists ...
	I0927 01:06:32.240718   50694 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 01:06:32.240751   50694 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:06:32.255601   50694 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32859
	I0927 01:06:32.255962   50694 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:06:32.256383   50694 main.go:141] libmachine: Using API Version  1
	I0927 01:06:32.256402   50694 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:06:32.256721   50694 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:06:32.256874   50694 main.go:141] libmachine: (multinode-833343) Calling .DriverName
	I0927 01:06:32.257059   50694 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0927 01:06:32.257080   50694 main.go:141] libmachine: (multinode-833343) Calling .GetSSHHostname
	I0927 01:06:32.259550   50694 main.go:141] libmachine: (multinode-833343) DBG | domain multinode-833343 has defined MAC address 52:54:00:d6:02:23 in network mk-multinode-833343
	I0927 01:06:32.259966   50694 main.go:141] libmachine: (multinode-833343) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:02:23", ip: ""} in network mk-multinode-833343: {Iface:virbr1 ExpiryTime:2024-09-27 02:03:44 +0000 UTC Type:0 Mac:52:54:00:d6:02:23 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-833343 Clientid:01:52:54:00:d6:02:23}
	I0927 01:06:32.260001   50694 main.go:141] libmachine: (multinode-833343) DBG | domain multinode-833343 has defined IP address 192.168.39.203 and MAC address 52:54:00:d6:02:23 in network mk-multinode-833343
	I0927 01:06:32.260060   50694 main.go:141] libmachine: (multinode-833343) Calling .GetSSHPort
	I0927 01:06:32.260208   50694 main.go:141] libmachine: (multinode-833343) Calling .GetSSHKeyPath
	I0927 01:06:32.260349   50694 main.go:141] libmachine: (multinode-833343) Calling .GetSSHUsername
	I0927 01:06:32.260469   50694 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/multinode-833343/id_rsa Username:docker}
	I0927 01:06:32.343967   50694 ssh_runner.go:195] Run: systemctl --version
	I0927 01:06:32.350057   50694 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 01:06:32.364278   50694 kubeconfig.go:125] found "multinode-833343" server: "https://192.168.39.203:8443"
	I0927 01:06:32.364308   50694 api_server.go:166] Checking apiserver status ...
	I0927 01:06:32.364343   50694 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:06:32.377754   50694 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1068/cgroup
	W0927 01:06:32.386935   50694 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1068/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0927 01:06:32.386980   50694 ssh_runner.go:195] Run: ls
	I0927 01:06:32.391271   50694 api_server.go:253] Checking apiserver healthz at https://192.168.39.203:8443/healthz ...
	I0927 01:06:32.395286   50694 api_server.go:279] https://192.168.39.203:8443/healthz returned 200:
	ok
	I0927 01:06:32.395322   50694 status.go:456] multinode-833343 apiserver status = Running (err=<nil>)
	I0927 01:06:32.395334   50694 status.go:176] multinode-833343 status: &{Name:multinode-833343 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0927 01:06:32.395361   50694 status.go:174] checking status of multinode-833343-m02 ...
	I0927 01:06:32.395646   50694 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 01:06:32.395675   50694 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:06:32.410675   50694 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46857
	I0927 01:06:32.411155   50694 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:06:32.411687   50694 main.go:141] libmachine: Using API Version  1
	I0927 01:06:32.411712   50694 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:06:32.411995   50694 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:06:32.412166   50694 main.go:141] libmachine: (multinode-833343-m02) Calling .GetState
	I0927 01:06:32.413631   50694 status.go:364] multinode-833343-m02 host status = "Running" (err=<nil>)
	I0927 01:06:32.413646   50694 host.go:66] Checking if "multinode-833343-m02" exists ...
	I0927 01:06:32.413940   50694 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 01:06:32.413973   50694 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:06:32.428680   50694 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42089
	I0927 01:06:32.429083   50694 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:06:32.429554   50694 main.go:141] libmachine: Using API Version  1
	I0927 01:06:32.429572   50694 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:06:32.429869   50694 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:06:32.430026   50694 main.go:141] libmachine: (multinode-833343-m02) Calling .GetIP
	I0927 01:06:32.432968   50694 main.go:141] libmachine: (multinode-833343-m02) DBG | domain multinode-833343-m02 has defined MAC address 52:54:00:ea:32:c3 in network mk-multinode-833343
	I0927 01:06:32.433372   50694 main.go:141] libmachine: (multinode-833343-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:32:c3", ip: ""} in network mk-multinode-833343: {Iface:virbr1 ExpiryTime:2024-09-27 02:04:47 +0000 UTC Type:0 Mac:52:54:00:ea:32:c3 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:multinode-833343-m02 Clientid:01:52:54:00:ea:32:c3}
	I0927 01:06:32.433411   50694 main.go:141] libmachine: (multinode-833343-m02) DBG | domain multinode-833343-m02 has defined IP address 192.168.39.123 and MAC address 52:54:00:ea:32:c3 in network mk-multinode-833343
	I0927 01:06:32.433505   50694 host.go:66] Checking if "multinode-833343-m02" exists ...
	I0927 01:06:32.433891   50694 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 01:06:32.433930   50694 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:06:32.449019   50694 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45835
	I0927 01:06:32.449430   50694 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:06:32.449903   50694 main.go:141] libmachine: Using API Version  1
	I0927 01:06:32.449926   50694 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:06:32.450232   50694 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:06:32.450413   50694 main.go:141] libmachine: (multinode-833343-m02) Calling .DriverName
	I0927 01:06:32.450626   50694 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0927 01:06:32.450645   50694 main.go:141] libmachine: (multinode-833343-m02) Calling .GetSSHHostname
	I0927 01:06:32.453709   50694 main.go:141] libmachine: (multinode-833343-m02) DBG | domain multinode-833343-m02 has defined MAC address 52:54:00:ea:32:c3 in network mk-multinode-833343
	I0927 01:06:32.454209   50694 main.go:141] libmachine: (multinode-833343-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:32:c3", ip: ""} in network mk-multinode-833343: {Iface:virbr1 ExpiryTime:2024-09-27 02:04:47 +0000 UTC Type:0 Mac:52:54:00:ea:32:c3 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:multinode-833343-m02 Clientid:01:52:54:00:ea:32:c3}
	I0927 01:06:32.454229   50694 main.go:141] libmachine: (multinode-833343-m02) DBG | domain multinode-833343-m02 has defined IP address 192.168.39.123 and MAC address 52:54:00:ea:32:c3 in network mk-multinode-833343
	I0927 01:06:32.454361   50694 main.go:141] libmachine: (multinode-833343-m02) Calling .GetSSHPort
	I0927 01:06:32.454485   50694 main.go:141] libmachine: (multinode-833343-m02) Calling .GetSSHKeyPath
	I0927 01:06:32.454601   50694 main.go:141] libmachine: (multinode-833343-m02) Calling .GetSSHUsername
	I0927 01:06:32.454713   50694 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14935/.minikube/machines/multinode-833343-m02/id_rsa Username:docker}
	I0927 01:06:32.534471   50694 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 01:06:32.548950   50694 status.go:176] multinode-833343-m02 status: &{Name:multinode-833343-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0927 01:06:32.548978   50694 status.go:174] checking status of multinode-833343-m03 ...
	I0927 01:06:32.549257   50694 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 01:06:32.549288   50694 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:06:32.565213   50694 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32863
	I0927 01:06:32.565716   50694 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:06:32.566174   50694 main.go:141] libmachine: Using API Version  1
	I0927 01:06:32.566204   50694 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:06:32.566511   50694 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:06:32.566690   50694 main.go:141] libmachine: (multinode-833343-m03) Calling .GetState
	I0927 01:06:32.568263   50694 status.go:364] multinode-833343-m03 host status = "Stopped" (err=<nil>)
	I0927 01:06:32.568274   50694 status.go:377] host is not running, skipping remaining checks
	I0927 01:06:32.568279   50694 status.go:176] multinode-833343-m03 status: &{Name:multinode-833343-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.26s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (39.53s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-833343 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-833343 node start m03 -v=7 --alsologtostderr: (38.911584132s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-833343 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (39.53s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.27s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-833343 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-833343 node delete m03: (1.763165383s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-833343 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.27s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (179.29s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-833343 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0927 01:15:10.486507   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/functional-774677/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:18:01.244658   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-833343 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m58.781759765s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-833343 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (179.29s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (44.18s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-833343
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-833343-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-833343-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (58.891331ms)

                                                
                                                
-- stdout --
	* [multinode-833343-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19711
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19711-14935/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-14935/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-833343-m02' is duplicated with machine name 'multinode-833343-m02' in profile 'multinode-833343'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-833343-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-833343-m03 --driver=kvm2  --container-runtime=crio: (42.897169392s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-833343
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-833343: exit status 80 (199.967622ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-833343 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-833343-m03 already exists in multinode-833343-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-833343-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (44.18s)

                                                
                                    
TestScheduledStopUnix (118.23s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-890813 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-890813 --memory=2048 --driver=kvm2  --container-runtime=crio: (46.666218269s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-890813 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-890813 -n scheduled-stop-890813
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-890813 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0927 01:24:15.449779   22138 retry.go:31] will retry after 146.1µs: open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/scheduled-stop-890813/pid: no such file or directory
I0927 01:24:15.450901   22138 retry.go:31] will retry after 211.097µs: open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/scheduled-stop-890813/pid: no such file or directory
I0927 01:24:15.452046   22138 retry.go:31] will retry after 183.059µs: open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/scheduled-stop-890813/pid: no such file or directory
I0927 01:24:15.453184   22138 retry.go:31] will retry after 294.04µs: open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/scheduled-stop-890813/pid: no such file or directory
I0927 01:24:15.454312   22138 retry.go:31] will retry after 280.676µs: open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/scheduled-stop-890813/pid: no such file or directory
I0927 01:24:15.455436   22138 retry.go:31] will retry after 538.378µs: open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/scheduled-stop-890813/pid: no such file or directory
I0927 01:24:15.456554   22138 retry.go:31] will retry after 1.625709ms: open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/scheduled-stop-890813/pid: no such file or directory
I0927 01:24:15.458765   22138 retry.go:31] will retry after 1.858351ms: open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/scheduled-stop-890813/pid: no such file or directory
I0927 01:24:15.460961   22138 retry.go:31] will retry after 3.040976ms: open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/scheduled-stop-890813/pid: no such file or directory
I0927 01:24:15.464109   22138 retry.go:31] will retry after 3.348313ms: open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/scheduled-stop-890813/pid: no such file or directory
I0927 01:24:15.468315   22138 retry.go:31] will retry after 3.470057ms: open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/scheduled-stop-890813/pid: no such file or directory
I0927 01:24:15.472512   22138 retry.go:31] will retry after 6.673443ms: open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/scheduled-stop-890813/pid: no such file or directory
I0927 01:24:15.479758   22138 retry.go:31] will retry after 16.578719ms: open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/scheduled-stop-890813/pid: no such file or directory
I0927 01:24:15.496995   22138 retry.go:31] will retry after 25.841743ms: open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/scheduled-stop-890813/pid: no such file or directory
I0927 01:24:15.523270   22138 retry.go:31] will retry after 23.497907ms: open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/scheduled-stop-890813/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-890813 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-890813 -n scheduled-stop-890813
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-890813
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-890813 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0927 01:25:10.492524   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/functional-774677/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-890813
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-890813: exit status 7 (63.938699ms)

                                                
                                                
-- stdout --
	scheduled-stop-890813
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-890813 -n scheduled-stop-890813
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-890813 -n scheduled-stop-890813: exit status 7 (63.491822ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-890813" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-890813
--- PASS: TestScheduledStopUnix (118.23s)

                                                
                                    
TestRunningBinaryUpgrade (115.4s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2025640322 start -p running-upgrade-596264 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2025640322 start -p running-upgrade-596264 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (48.055946495s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-596264 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0927 01:29:53.557888   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/functional-774677/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-596264 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m3.384903984s)
helpers_test.go:175: Cleaning up "running-upgrade-596264" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-596264
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-596264: (1.141279674s)
--- PASS: TestRunningBinaryUpgrade (115.40s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (4.69s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (4.69s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (192.77s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.153977007 start -p stopped-upgrade-811219 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.153977007 start -p stopped-upgrade-811219 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m2.854788466s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.153977007 -p stopped-upgrade-811219 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.153977007 -p stopped-upgrade-811219 stop: (2.138547669s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-811219 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-811219 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m7.778402029s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (192.77s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.93s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-811219
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.93s)

                                                
                                    
TestPause/serial/Start (59.65s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-213608 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-213608 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (59.650764287s)
--- PASS: TestPause/serial/Start (59.65s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (40.11s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-213608 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0927 01:30:10.486480   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/functional-774677/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-213608 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (40.082695824s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (40.11s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-719096 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-719096 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (59.597313ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-719096] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19711
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19711-14935/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-14935/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (48.41s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-719096 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-719096 --driver=kvm2  --container-runtime=crio: (48.154543844s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-719096 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (48.41s)

                                                
                                    
TestPause/serial/Pause (0.92s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-213608 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.92s)

                                                
                                    
TestPause/serial/VerifyStatus (0.28s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-213608 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-213608 --output=json --layout=cluster: exit status 2 (276.364382ms)

                                                
                                                
-- stdout --
	{"Name":"pause-213608","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-213608","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.28s)

                                                
                                    
TestPause/serial/Unpause (0.8s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-213608 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.80s)

                                                
                                    
TestPause/serial/PauseAgain (1.02s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-213608 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-213608 --alsologtostderr -v=5: (1.017682475s)
--- PASS: TestPause/serial/PauseAgain (1.02s)

                                                
                                    
TestPause/serial/DeletePaused (0.86s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-213608 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.86s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.6s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.60s)

                                                
                                    
TestNetworkPlugins/group/false (3.36s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-782846 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-782846 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (130.470429ms)

                                                
                                                
-- stdout --
	* [false-782846] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19711
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19711-14935/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-14935/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0927 01:30:57.153251   63358 out.go:345] Setting OutFile to fd 1 ...
	I0927 01:30:57.153451   63358 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 01:30:57.153477   63358 out.go:358] Setting ErrFile to fd 2...
	I0927 01:30:57.153492   63358 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 01:30:57.153824   63358 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-14935/.minikube/bin
	I0927 01:30:57.154626   63358 out.go:352] Setting JSON to false
	I0927 01:30:57.155977   63358 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8002,"bootTime":1727392655,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 01:30:57.156092   63358 start.go:139] virtualization: kvm guest
	I0927 01:30:57.158375   63358 out.go:177] * [false-782846] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0927 01:30:57.159668   63358 out.go:177]   - MINIKUBE_LOCATION=19711
	I0927 01:30:57.159703   63358 notify.go:220] Checking for updates...
	I0927 01:30:57.161877   63358 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 01:30:57.163134   63358 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19711-14935/kubeconfig
	I0927 01:30:57.164251   63358 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-14935/.minikube
	I0927 01:30:57.165360   63358 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0927 01:30:57.166490   63358 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 01:30:57.168145   63358 config.go:182] Loaded profile config "NoKubernetes-719096": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 01:30:57.168254   63358 config.go:182] Loaded profile config "cert-expiration-595331": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 01:30:57.168362   63358 config.go:182] Loaded profile config "kubernetes-upgrade-637447": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 01:30:57.168465   63358 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 01:30:57.218168   63358 out.go:177] * Using the kvm2 driver based on user configuration
	I0927 01:30:57.219452   63358 start.go:297] selected driver: kvm2
	I0927 01:30:57.219473   63358 start.go:901] validating driver "kvm2" against <nil>
	I0927 01:30:57.219491   63358 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 01:30:57.221751   63358 out.go:201] 
	W0927 01:30:57.222991   63358 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0927 01:30:57.224054   63358 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-782846 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-782846

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-782846

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-782846

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-782846

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-782846

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-782846

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-782846

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-782846

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-782846

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-782846

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-782846"

>>> host: /etc/hosts:
* Profile "false-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-782846"

>>> host: /etc/resolv.conf:
* Profile "false-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-782846"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-782846

>>> host: crictl pods:
* Profile "false-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-782846"

>>> host: crictl containers:
* Profile "false-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-782846"

>>> k8s: describe netcat deployment:
error: context "false-782846" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-782846" does not exist

>>> k8s: netcat logs:
error: context "false-782846" does not exist

>>> k8s: describe coredns deployment:
error: context "false-782846" does not exist

>>> k8s: describe coredns pods:
error: context "false-782846" does not exist

>>> k8s: coredns logs:
error: context "false-782846" does not exist

>>> k8s: describe api server pod(s):
error: context "false-782846" does not exist

>>> k8s: api server logs:
error: context "false-782846" does not exist

>>> host: /etc/cni:
* Profile "false-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-782846"

>>> host: ip a s:
* Profile "false-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-782846"

>>> host: ip r s:
* Profile "false-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-782846"

>>> host: iptables-save:
* Profile "false-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-782846"

>>> host: iptables table nat:
* Profile "false-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-782846"

>>> k8s: describe kube-proxy daemon set:
error: context "false-782846" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-782846" does not exist

>>> k8s: kube-proxy logs:
error: context "false-782846" does not exist

>>> host: kubelet daemon status:
* Profile "false-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-782846"

>>> host: kubelet daemon config:
* Profile "false-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-782846"

>>> k8s: kubelet logs:
* Profile "false-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-782846"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-782846"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-782846"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 27 Sep 2024 01:28:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.61.245:8443
  name: cert-expiration-595331
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 27 Sep 2024 01:30:52 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.50.182:8443
  name: kubernetes-upgrade-637447
contexts:
- context:
    cluster: cert-expiration-595331
    extensions:
    - extension:
        last-update: Fri, 27 Sep 2024 01:28:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: cert-expiration-595331
  name: cert-expiration-595331
- context:
    cluster: kubernetes-upgrade-637447
    user: kubernetes-upgrade-637447
  name: kubernetes-upgrade-637447
current-context: kubernetes-upgrade-637447
kind: Config
preferences: {}
users:
- name: cert-expiration-595331
  user:
    client-certificate: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/cert-expiration-595331/client.crt
    client-key: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/cert-expiration-595331/client.key
- name: kubernetes-upgrade-637447
  user:
    client-certificate: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kubernetes-upgrade-637447/client.crt
    client-key: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kubernetes-upgrade-637447/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-782846

>>> host: docker daemon status:
* Profile "false-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-782846"

>>> host: docker daemon config:
* Profile "false-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-782846"

>>> host: /etc/docker/daemon.json:
* Profile "false-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-782846"

>>> host: docker system info:
* Profile "false-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-782846"

>>> host: cri-docker daemon status:
* Profile "false-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-782846"

>>> host: cri-docker daemon config:
* Profile "false-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-782846"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-782846"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-782846"

>>> host: cri-dockerd version:
* Profile "false-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-782846"

>>> host: containerd daemon status:
* Profile "false-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-782846"

>>> host: containerd daemon config:
* Profile "false-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-782846"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-782846"

>>> host: /etc/containerd/config.toml:
* Profile "false-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-782846"

>>> host: containerd config dump:
* Profile "false-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-782846"

>>> host: crio daemon status:
* Profile "false-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-782846"

>>> host: crio daemon config:
* Profile "false-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-782846"

>>> host: /etc/crio:
* Profile "false-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-782846"

>>> host: crio config:
* Profile "false-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-782846"

----------------------- debugLogs end: false-782846 [took: 3.083325317s] --------------------------------
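Every probe in the debugLogs dump above fails with a missing-context or missing-profile message because the false-782846 profile is never actually created; the kubectl config dump confirms that only the cert-expiration-595331 and kubernetes-upgrade-637447 contexts exist at this point. A quick way to confirm the same thing from the workspace (a rough sketch, not part of the test harness) is:

  kubectl config get-contexts -o name          # false-782846 is absent from the list
  out/minikube-linux-amd64 profile list        # no false-782846 profile either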
helpers_test.go:175: Cleaning up "false-782846" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-782846
--- PASS: TestNetworkPlugins/group/false (3.36s)

TestNoKubernetes/serial/StartWithStopK8s (33.99s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-719096 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-719096 --no-kubernetes --driver=kvm2  --container-runtime=crio: (32.613252756s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-719096 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-719096 status -o json: exit status 2 (223.761455ms)

-- stdout --
	{"Name":"NoKubernetes-719096","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
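The status JSON above is what the test inspects: the host is Running while the kubelet and API server are Stopped, i.e. the VM is up without Kubernetes. Pulling a single field out of that payload from a shell (a sketch assuming jq is available; not something the test itself does) could look like:

  out/minikube-linux-amd64 -p NoKubernetes-719096 status -o json | jq -r '.Kubelet'   # prints Stopped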
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-719096
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-719096: (1.152609582s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (33.99s)

TestStartStop/group/no-preload/serial/FirstStart (92.02s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-521072 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-521072 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (1m32.018611037s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (92.02s)

TestNoKubernetes/serial/Start (45.29s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-719096 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-719096 --no-kubernetes --driver=kvm2  --container-runtime=crio: (45.288070285s)
--- PASS: TestNoKubernetes/serial/Start (45.29s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-719096 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-719096 "sudo systemctl is-active --quiet service kubelet": exit status 1 (198.801347ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
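The ssh probe above is expected to fail: systemctl is-active exits non-zero when the queried unit is not active (status 3 here), which is how the test confirms no kubelet is running. Roughly the same check by hand (a sketch, not the exact command the test runs) would be:

  out/minikube-linux-amd64 ssh -p NoKubernetes-719096 "sudo systemctl is-active kubelet"   # typically prints inactive and exits non-zero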
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)

TestNoKubernetes/serial/ProfileList (2.85s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (1.872712552s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (2.85s)

TestNoKubernetes/serial/Stop (1.29s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-719096
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-719096: (1.286701994s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

TestNoKubernetes/serial/StartNoArgs (21.78s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-719096 --driver=kvm2  --container-runtime=crio
E0927 01:33:01.245478   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-719096 --driver=kvm2  --container-runtime=crio: (21.780671322s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (21.78s)

TestStartStop/group/no-preload/serial/DeployApp (11.29s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-521072 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8c6c402f-4b67-4a90-8eb7-324f03f53585] Pending
helpers_test.go:344: "busybox" [8c6c402f-4b67-4a90-8eb7-324f03f53585] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8c6c402f-4b67-4a90-8eb7-324f03f53585] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.004043388s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-521072 exec busybox -- /bin/sh -c "ulimit -n"
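The deploy step above creates the pod from testdata/busybox.yaml, waits up to 8m0s for the integration-test=busybox label to report Running, and then execs ulimit -n in the pod to confirm exec works. Outside the harness, roughly the same wait can be expressed with kubectl alone (a sketch using the label and context from the log, not the harness's own helper):

  kubectl --context no-preload-521072 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
  kubectl --context no-preload-521072 exec busybox -- /bin/sh -c "ulimit -n"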
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.29s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-719096 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-719096 "sudo systemctl is-active --quiet service kubelet": exit status 1 (187.202929ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

TestStartStop/group/embed-certs/serial/FirstStart (84.29s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-245911 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-245911 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (1m24.287874405s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (84.29s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.08s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-521072 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-521072 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.08s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (86.92s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-368295 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-368295 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (1m26.923995358s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (86.92s)

TestStartStop/group/embed-certs/serial/DeployApp (12.28s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-245911 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2730bf2b-6257-487a-9e03-970dea4904d3] Pending
helpers_test.go:344: "busybox" [2730bf2b-6257-487a-9e03-970dea4904d3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2730bf2b-6257-487a-9e03-970dea4904d3] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 12.00681008s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-245911 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (12.28s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.01s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-245911 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-245911 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.01s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-368295 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1bd99a1f-ba18-4b70-a6fc-b1eef3dd16ca] Pending
helpers_test.go:344: "busybox" [1bd99a1f-ba18-4b70-a6fc-b1eef3dd16ca] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1bd99a1f-ba18-4b70-a6fc-b1eef3dd16ca] Running
E0927 01:35:10.487469   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/functional-774677/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.005695807s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-368295 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.26s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.91s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-368295 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-368295 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.91s)

TestStartStop/group/no-preload/serial/SecondStart (650.26s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-521072 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-521072 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (10m50.007396866s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-521072 -n no-preload-521072
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (650.26s)

TestStartStop/group/old-k8s-version/serial/Stop (6.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-612261 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-612261 --alsologtostderr -v=3: (6.286511257s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (6.29s)

TestStartStop/group/embed-certs/serial/SecondStart (520.19s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-245911 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-245911 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (8m39.952760313s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-245911 -n embed-certs-245911
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (520.19s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-612261 -n old-k8s-version-612261
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-612261 -n old-k8s-version-612261: exit status 7 (64.344844ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-612261 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (542.46s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-368295 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E0927 01:38:01.245129   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:39:24.315032   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:40:10.488289   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/functional-774677/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:43:01.245429   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:45:10.486671   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/functional-774677/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-368295 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (9m2.208560412s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-368295 -n default-k8s-diff-port-368295
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (542.46s)

TestStartStop/group/newest-cni/serial/FirstStart (47.79s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-223910 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-223910 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (47.786013486s)
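This first start passes --network-plugin=cni with an extra kubeadm pod-network CIDR but installs no CNI, and it only waits for the apiserver, system pods and the default service account; that is why later steps in this group log "cni mode requires additional setup before pods can schedule". A quick way to see the consequence on the cluster (a sketch, not part of the test) is:

  kubectl --context newest-cni-223910 get nodes   # the node typically stays NotReady until a CNI is installed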
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (47.79s)

TestNetworkPlugins/group/auto/Start (53.05s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-782846 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-782846 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (53.04501706s)
--- PASS: TestNetworkPlugins/group/auto/Start (53.05s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.38s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-223910 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-223910 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.37510873s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.38s)

TestStartStop/group/newest-cni/serial/Stop (10.64s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-223910 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-223910 --alsologtostderr -v=3: (10.63551508s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.64s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-223910 -n newest-cni-223910
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-223910 -n newest-cni-223910: exit status 7 (64.319803ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-223910 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/newest-cni/serial/SecondStart (42.48s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-223910 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-223910 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (42.213331208s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-223910 -n newest-cni-223910
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (42.48s)

TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-782846 "pgrep -a kubelet"
I0927 02:02:37.899997   22138 config.go:182] Loaded profile config "auto-782846": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

TestNetworkPlugins/group/auto/NetCatPod (12.27s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-782846 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-5d9nz" [875b9036-05ba-43a4-8955-b75df5724716] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-5d9nz" [875b9036-05ba-43a4-8955-b75df5724716] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.004701194s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.27s)

TestNetworkPlugins/group/auto/DNS (16.17s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-782846 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context auto-782846 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.183465947s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
I0927 02:03:05.351825   22138 retry.go:31] will retry after 806.283467ms: exit status 1
net_test.go:175: (dbg) Run:  kubectl --context auto-782846 exec deployment/netcat -- nslookup kubernetes.default
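The first in-pod lookup timed out, the harness retried after ~0.8s, and the second nslookup succeeded, so the DNS check still passes. When that first timeout shows up, a useful manual follow-up (a sketch, not something net_test.go runs) is to confirm the cluster DNS service actually has endpoints:

  kubectl --context auto-782846 -n kube-system get endpoints kube-dns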
--- PASS: TestNetworkPlugins/group/auto/DNS (16.17s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-223910 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/newest-cni/serial/Pause (2.37s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-223910 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-223910 -n newest-cni-223910
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-223910 -n newest-cni-223910: exit status 2 (237.52717ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-223910 -n newest-cni-223910
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-223910 -n newest-cni-223910: exit status 2 (240.951809ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-223910 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-223910 -n newest-cni-223910
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-223910 -n newest-cni-223910
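The pause/unpause cycle above relies on minikube status exiting 2 while components are paused: the API server reports Paused and the kubelet Stopped, both of which the test treats as acceptable ("may be ok"), and after unpause the same status commands are expected to succeed. The two paused-state fields can also be read in one call (a sketch using a combined Go template, not the form the test uses):

  out/minikube-linux-amd64 status -p newest-cni-223910 --format='{{.APIServer}}/{{.Kubelet}}'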
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.37s)

TestNetworkPlugins/group/kindnet/Start (66.06s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-782846 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
E0927 02:03:01.245007   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/addons-364775/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-782846 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m6.05728672s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (66.06s)

TestNetworkPlugins/group/auto/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-782846 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

TestNetworkPlugins/group/auto/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-782846 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

TestNetworkPlugins/group/calico/Start (81.28s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-782846 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-782846 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m21.281239186s)
--- PASS: TestNetworkPlugins/group/calico/Start (81.28s)

TestNetworkPlugins/group/custom-flannel/Start (101.69s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-782846 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E0927 02:03:31.580202   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/no-preload-521072/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-782846 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m41.685730056s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (101.69s)

TestNetworkPlugins/group/enable-default-cni/Start (117.71s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-782846 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E0927 02:03:52.062217   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/no-preload-521072/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-782846 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m57.711886198s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (117.71s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-xz6xz" [2f54b7e5-b691-4328-8a57-e3789fceeb4a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004527301s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
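The ControllerPod step only waits for the CNI's DaemonSet pod to come up; a roughly equivalent manual check (a sketch using kubectl wait in place of the test's polling helper) against the same context:

# Wait up to 10 minutes for the kindnet pod in kube-system to become Ready.
kubectl --context kindnet-782846 -n kube-system wait pod -l app=kindnet \
  --for=condition=Ready --timeout=10m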

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-782846 "pgrep -a kubelet"
I0927 02:04:06.504787   22138 config.go:182] Loaded profile config "kindnet-782846": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.20s)
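KubeletFlags verifies that kubelet is running inside the node VM and records its command line; by hand this is a single SSH probe (sketch, with tr added here only to print one flag per line):

# Show the kubelet process and its startup flags inside the kindnet-782846 VM.
minikube ssh -p kindnet-782846 "pgrep -a kubelet" | tr ' ' '\n'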

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (10.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-782846 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-qph8h" [21061fe0-664f-4775-917f-dd16db925eba] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-qph8h" [21061fe0-664f-4775-917f-dd16db925eba] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.005128367s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.20s)
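NetCatPod recreates the shared netcat/dnsutils test Deployment in the default namespace and waits for its pod to come up; a minimal manual equivalent (assuming a checkout that provides testdata/netcat-deployment.yaml) is:

# (Re)create the netcat test Deployment, then wait for its pod to be Ready.
kubectl --context kindnet-782846 replace --force -f testdata/netcat-deployment.yaml
kubectl --context kindnet-782846 wait pod -l app=netcat --for=condition=Ready --timeout=15m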

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-782846 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-782846 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-782846 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.21s)
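Taken together, DNS, Localhost and HairPin probe three data paths from inside the netcat pod: cluster DNS resolution, the pod's own listener via loopback, and hairpin traffic back to the pod through its Service; the three commands the tests run are:

# Cluster DNS: resolve the kubernetes Service from inside the pod.
kubectl --context kindnet-782846 exec deployment/netcat -- nslookup kubernetes.default
# Localhost: reach the pod's own listener on 127.0.0.1:8080.
kubectl --context kindnet-782846 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
# HairPin: reach the pod back through its own Service name.
kubectl --context kindnet-782846 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"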

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (80.55s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-782846 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-782846 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m20.54767734s)
--- PASS: TestNetworkPlugins/group/flannel/Start (80.55s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-t42z9" [93fd2101-68a1-4102-b7f4-212213e82fcb] Running
E0927 02:04:43.875184   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006531749s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-782846 "pgrep -a kubelet"
I0927 02:04:47.646067   22138 config.go:182] Loaded profile config "calico-782846": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.49s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-782846 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-74ncp" [b0d4e99e-21c0-4b84-96ce-cd32dd949949] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-74ncp" [b0d4e99e-21c0-4b84-96ce-cd32dd949949] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004254069s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.66s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-782846 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-782846 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-782846 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-782846 "pgrep -a kubelet"
I0927 02:05:03.764354   22138 config.go:182] Loaded profile config "custom-flannel-782846": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-782846 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-t8tcz" [02248399-1d0b-44a0-9c7c-4a21bc0fd797] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0927 02:05:04.356875   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/client.crt: no such file or directory" logger="UnhandledError"
E0927 02:05:04.869624   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/client.crt: no such file or directory" logger="UnhandledError"
E0927 02:05:04.876047   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/client.crt: no such file or directory" logger="UnhandledError"
E0927 02:05:04.887396   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/client.crt: no such file or directory" logger="UnhandledError"
E0927 02:05:04.909437   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/client.crt: no such file or directory" logger="UnhandledError"
E0927 02:05:04.950903   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/client.crt: no such file or directory" logger="UnhandledError"
E0927 02:05:05.032859   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/client.crt: no such file or directory" logger="UnhandledError"
E0927 02:05:05.194299   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/client.crt: no such file or directory" logger="UnhandledError"
E0927 02:05:05.525275   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/client.crt: no such file or directory" logger="UnhandledError"
E0927 02:05:06.167022   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/client.crt: no such file or directory" logger="UnhandledError"
E0927 02:05:07.450787   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-t8tcz" [02248399-1d0b-44a0-9c7c-4a21bc0fd797] Running
E0927 02:05:10.012865   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/client.crt: no such file or directory" logger="UnhandledError"
E0927 02:05:10.487429   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/functional-774677/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.005104543s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-782846 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-782846 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-782846 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (59.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-782846 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E0927 02:05:25.376456   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-782846 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (59.386850427s)
--- PASS: TestNetworkPlugins/group/bridge/Start (59.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-782846 "pgrep -a kubelet"
I0927 02:05:39.046836   22138 config.go:182] Loaded profile config "enable-default-cni-782846": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-782846 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6fdnq" [922c4ea4-168e-4202-aaa8-81ab73e7e403] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-6fdnq" [922c4ea4-168e-4202-aaa8-81ab73e7e403] Running
E0927 02:05:45.318226   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/old-k8s-version-612261/client.crt: no such file or directory" logger="UnhandledError"
E0927 02:05:45.858580   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004725904s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-782846 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-782846 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-782846 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-4mcrh" [f3943590-bd62-49ec-a767-aa197d5b1dd2] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004033221s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-782846 "pgrep -a kubelet"
I0927 02:06:01.834858   22138 config.go:182] Loaded profile config "flannel-782846": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-782846 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-jwsk4" [54784317-8840-449b-9819-d5da79ffb4a1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-jwsk4" [54784317-8840-449b-9819-d5da79ffb4a1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.00458667s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-782846 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-782846 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-782846 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-782846 "pgrep -a kubelet"
I0927 02:06:17.454537   22138 config.go:182] Loaded profile config "bridge-782846": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-782846 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-m628g" [d1be135e-a58d-43ea-afd1-7d77ec595c97] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-m628g" [d1be135e-a58d-43ea-afd1-7d77ec595c97] Running
E0927 02:06:26.820131   22138 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/default-k8s-diff-port-368295/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004236898s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-782846 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-782846 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-782846 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                    

Test skip (37/317)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.31.1/cached-images 0
15 TestDownloadOnly/v1.31.1/binaries 0
16 TestDownloadOnly/v1.31.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0
37 TestAddons/parallel/Olm 0
47 TestDockerFlags 0
50 TestDockerEnvContainerd 0
52 TestHyperKitDriverInstallOrUpdate 0
53 TestHyperkitDriverSkipUpgrade 0
104 TestFunctional/parallel/DockerEnv 0
105 TestFunctional/parallel/PodmanEnv 0
139 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
140 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
141 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
142 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
143 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
144 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
145 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
146 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
153 TestGvisorAddon 0
175 TestImageBuild 0
202 TestKicCustomNetwork 0
203 TestKicExistingNetwork 0
204 TestKicCustomSubnet 0
205 TestKicStaticIP 0
237 TestChangeNoneUser 0
240 TestScheduledStopWindows 0
242 TestSkaffold 0
244 TestInsufficientStorage 0
248 TestMissingContainerUpgrade 0
257 TestStartStop/group/disable-driver-mounts 0.13
273 TestNetworkPlugins/group/kubenet 3.55
281 TestNetworkPlugins/group/cilium 3.28
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:817: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.13s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-630210" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-630210
--- SKIP: TestStartStop/group/disable-driver-mounts (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.55s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-782846 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-782846

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-782846

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-782846

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-782846

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-782846

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-782846

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-782846

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-782846

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-782846

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-782846

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-782846"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-782846"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-782846"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-782846

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-782846"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-782846"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-782846" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-782846" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-782846" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-782846" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-782846" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-782846" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-782846" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-782846" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-782846"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-782846"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-782846"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-782846"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-782846"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-782846" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-782846" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-782846" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-782846"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-782846"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-782846"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-782846"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-782846"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 27 Sep 2024 01:28:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.61.245:8443
  name: cert-expiration-595331
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 27 Sep 2024 01:30:52 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.50.182:8443
  name: kubernetes-upgrade-637447
contexts:
- context:
    cluster: cert-expiration-595331
    extensions:
    - extension:
        last-update: Fri, 27 Sep 2024 01:28:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: cert-expiration-595331
  name: cert-expiration-595331
- context:
    cluster: kubernetes-upgrade-637447
    user: kubernetes-upgrade-637447
  name: kubernetes-upgrade-637447
current-context: kubernetes-upgrade-637447
kind: Config
preferences: {}
users:
- name: cert-expiration-595331
  user:
    client-certificate: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/cert-expiration-595331/client.crt
    client-key: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/cert-expiration-595331/client.key
- name: kubernetes-upgrade-637447
  user:
    client-certificate: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kubernetes-upgrade-637447/client.crt
    client-key: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kubernetes-upgrade-637447/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-782846

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-782846"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-782846"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-782846"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-782846"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-782846"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-782846"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-782846"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-782846"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-782846"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-782846"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-782846"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-782846"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-782846"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-782846"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-782846"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-782846"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-782846"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-782846"

                                                
                                                
----------------------- debugLogs end: kubenet-782846 [took: 3.387427903s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-782846" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-782846
--- SKIP: TestNetworkPlugins/group/kubenet (3.55s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-782846 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-782846

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-782846

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-782846

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-782846

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-782846

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-782846

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-782846

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-782846

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-782846

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-782846

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782846"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782846"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782846"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-782846

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782846"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782846"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-782846" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-782846" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-782846" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-782846" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-782846" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-782846" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-782846" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-782846" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782846"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782846"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782846"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782846"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782846"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-782846

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-782846

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-782846" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-782846" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-782846

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-782846

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-782846" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-782846" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-782846" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-782846" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-782846" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782846"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782846"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782846"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782846"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782846"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 27 Sep 2024 01:28:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.61.245:8443
  name: cert-expiration-595331
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19711-14935/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 27 Sep 2024 01:31:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.50.182:8443
  name: kubernetes-upgrade-637447
contexts:
- context:
    cluster: cert-expiration-595331
    extensions:
    - extension:
        last-update: Fri, 27 Sep 2024 01:28:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: cert-expiration-595331
  name: cert-expiration-595331
- context:
    cluster: kubernetes-upgrade-637447
    extensions:
    - extension:
        last-update: Fri, 27 Sep 2024 01:31:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-637447
  name: kubernetes-upgrade-637447
current-context: kubernetes-upgrade-637447
kind: Config
preferences: {}
users:
- name: cert-expiration-595331
  user:
    client-certificate: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/cert-expiration-595331/client.crt
    client-key: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/cert-expiration-595331/client.key
- name: kubernetes-upgrade-637447
  user:
    client-certificate: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kubernetes-upgrade-637447/client.crt
    client-key: /home/jenkins/minikube-integration/19711-14935/.minikube/profiles/kubernetes-upgrade-637447/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-782846

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782846"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782846"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782846"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782846"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782846"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782846"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782846"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782846"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782846"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782846"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782846"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782846"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782846"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782846"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782846"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782846"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782846"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-782846" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-782846"

                                                
                                                
----------------------- debugLogs end: cilium-782846 [took: 3.143423913s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-782846" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-782846
--- SKIP: TestNetworkPlugins/group/cilium (3.28s)

                                                
                                    